List of works
Conference proceeding
Constrained Edge AI Deployment: Fine-Tuning vs. Distillation for LLM Compression
Published 10/2025
MILCOM 2025 - 2025 IEEE Military Communications Conference (MILCOM), 1500 - 1505
IEEE Military Communications Conference (MILCOM), 10/06/2025–10/10/2025, Los Angeles, California, USA
Modern foundational models are often compressed via a combination of structured pruning and re-training to meet the strict compute, memory, and connectivity constraints of edge deployments. While state-of-the-art (SoTA) pruning schemes target the entire Transformer, we adopt a simple, layer-wise L2-norm pruning on only the multi-layer perceptron (MLP) blocks as a fixed baseline. Our focus is not on achieving maximal compression, but on isolating the impact of the re-training loss function: (i) L2-norm Pruning with Cross-Entropy Fine-Tuning (L2PFT), which relies on labeled data, versus (ii) L2-norm Pruning with KL-Divergence Self-Distillation (L2PSD), which utilizes only teacher logits without requiring labeled data. We evaluate both pipelines on the OLMo2-7B-SFT model for CommonsenseQA, suitable for intermittent or denied connectivity scenarios typical of edge networks. Under identical pruning schedules, L2PSD achieves comparable or superior test accuracy to L2PFT, indicating that the choice of loss function has a significant impact on compressed model recovery in resource-constrained environments.
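The two ingredients contrasted in this abstract — layer-wise L2-norm pruning of MLP weights and a label-free KL-divergence distillation loss — can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the paper's implementation; the function names (`prune_mlp_by_l2`, `kl_distillation_loss`) and the row-wise pruning convention are assumptions.

```python
import numpy as np

def prune_mlp_by_l2(W, keep_ratio):
    """Keep the rows (hidden units) of an MLP weight matrix with the largest L2 norms."""
    norms = np.linalg.norm(W, axis=1)
    k = max(1, int(round(keep_ratio * W.shape[0])))
    keep = np.sort(np.argsort(norms)[-k:])  # indices of surviving units, in order
    return W[keep], keep

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_distillation_loss(teacher_logits, student_logits):
    """Mean KL(teacher || student) over a batch: the label-free re-training objective."""
    p = softmax(np.asarray(teacher_logits, float))
    q = softmax(np.asarray(student_logits, float))
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())
```

The cross-entropy alternative (L2PFT) would replace the teacher distribution `p` with one-hot labels, which is exactly the labeled-data dependency the abstract highlights.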
Conference proceeding
Neurosymbolic AI Transfer Learning Improves Network Intrusion Detection
Published 10/2025
MILCOM IEEE Military Communications Conference, 496 - 501
IEEE Military Communications Conference (MILCOM), 10/06/2025–10/10/2025, Los Angeles, California, USA
Transfer learning is commonly utilized in various fields such as computer vision, natural language processing, and medical imaging due to its impressive capability to address sub-tasks and work with different datasets. However, its application in cybersecurity has not been thoroughly explored. In this paper, we present an innovative neurosymbolic AI framework designed for network intrusion detection systems, which play a crucial role in combating malicious activities in cybersecurity. Our framework leverages transfer learning and uncertainty quantification. The findings indicate that transfer learning models, trained on large and well-structured datasets, outperform neural-based models that rely on smaller datasets, paving the way for a new era in cybersecurity solutions.
Preprint
Neurosymbolic AI Transfer Learning Improves Network Intrusion Detection
Posted to a preprint site 09/13/2025
Transfer learning is commonly utilized in various fields such as computer vision, natural language processing, and medical imaging due to its impressive capability to address subtasks and work with different datasets. However, its application in cybersecurity has not been thoroughly explored. In this paper, we present an innovative neurosymbolic AI framework designed for network intrusion detection systems, which play a crucial role in combating malicious activities in cybersecurity. Our framework leverages transfer learning and uncertainty quantification. The findings indicate that transfer learning models, trained on large and well-structured datasets, outperform neural-based models that rely on smaller datasets, paving the way for a new era in cybersecurity solutions.
Poster
Date presented 08/2025
Summer Undergraduate Research Program (SURP) Symposium, 08/2025, University of West Florida, Pensacola, Florida
Poster
Feature Engineering for Systematically Assessing Clinical Trial Feasibility
Date presented 08/2025
Summer Undergraduate Research Program (SURP) Symposium, 08/2025, University of West Florida, Pensacola, Florida
Many clinical trials are not completed due to a failure to assess their feasibility properly. Current methods are subjective and have no evidence of their accuracy. Developing an evidence-based method to assess feasibility could improve funding decisions and trial completion rates.
Poster
Date presented 08/2025
Summer Undergraduate Research Program (SURP) Symposium, 08/2025, University of West Florida, Pensacola, Florida
Preprint
Posted to a preprint site 06/04/2025
Network Intrusion Detection Systems (NIDS) play a vital role in protecting digital infrastructures against increasingly sophisticated cyber threats. In this paper, we extend ODXU, a Neurosymbolic AI (NSAI) framework that integrates deep embedded clustering for feature extraction, symbolic reasoning using XGBoost, and comprehensive uncertainty quantification (UQ) to enhance robustness, interpretability, and generalization in NIDS. The extended ODXU incorporates score-based methods (e.g., Confidence Scoring, Shannon Entropy) and metamodel-based techniques, including SHAP values and Information Gain, to assess the reliability of predictions. Experimental results on the CIC-IDS-2017 dataset show that ODXU outperforms traditional neural models across six evaluation metrics, including classification accuracy and false omission rate. While transfer learning has seen widespread adoption in fields such as computer vision and natural language processing, its potential in cybersecurity has not been thoroughly explored. To bridge this gap, we develop a transfer learning strategy that enables the reuse of a pre-trained ODXU model on a different dataset. Our ablation study on ACI-IoT-2023 demonstrates that the optimal transfer configuration involves reusing the pre-trained autoencoder, retraining the clustering module, and fine-tuning the XGBoost classifier, and outperforms traditional neural models when trained with as few as 16,000 samples (approximately 50% of the training data). Additionally, results show that metamodel-based UQ methods consistently outperform score-based approaches on both datasets.
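The score-based UQ methods this abstract names (Confidence Scoring, Shannon Entropy) have standard closed forms; the sketch below shows one plausible way to compute them over a classifier's predicted class probabilities. This is an assumed illustration, not the ODXU implementation.

```python
import numpy as np

def shannon_entropy_uncertainty(probs):
    """Normalized Shannon entropy of a predicted class distribution.

    0 = fully confident prediction, 1 = maximally uncertain (uniform).
    """
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    p = p / p.sum(axis=-1, keepdims=True)
    h = -(p * np.log(p)).sum(axis=-1)
    return h / np.log(p.shape[-1])  # normalize by max entropy, log(num_classes)

def confidence_score(probs):
    """Confidence scoring: the maximum predicted class probability."""
    return np.asarray(probs, dtype=float).max(axis=-1)
```

A metamodel-based method, by contrast, would train a second model (e.g., on SHAP values or Information Gain features, as the abstract notes) to predict whether the base classifier's output is reliable.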
Preprint
Constrained Edge AI Deployment: Fine-Tuning vs Distillation for LLM Compression
Posted to a preprint site 05/13/2025
Modern foundational models are often compressed via a combination of structured pruning and re-training to meet the strict compute, memory, and connectivity constraints of edge deployments. While state-of-the-art pruning schemes target the entire Transformer, we adopt a simple, layer-wise L2-norm pruning on only the MLP blocks as a fixed baseline. Our focus is not on achieving maximal compression, but on isolating the impact of the re-training loss function: (i) Fine-Tuning with Cross-Entropy (L2PFT), which requires labeled data, versus (ii) Self-Distillation with KL-Divergence (L2PSD), which leverages only teacher logits and no labels. We evaluate both pipelines on the OLMo2-7B-SFT model for CommonsenseQA, a setting suitable for intermittent or denied connectivity scenarios typical of edge networks. Under identical pruning schedules, KL-based distillation matches or exceeds CE fine-tuning in test accuracy, demonstrating that, even with basic MLP-only pruning, the choice of loss function materially affects compressed-model recovery in resource-constrained environments.
Conference proceeding
Academic Advising Chatbot Powered with AI Agent
Published 05/08/2025
ACMSE 2025: Proceedings of the 2025 ACM Southeast Conference, 195 - 202
ACMSE 2025: 2025 ACM Southeast Conference, 04/24/2025–04/26/2025, Cape Girardeau, Missouri, USA
Academic advising plays a crucial role in fostering student success. However, challenges such as limited advisor availability can hinder effective support. Generative AI, particularly AI-powered chatbots, offers the potential to enhance student advising in higher education by providing personalized guidance. These technologies help college students find the information and resources needed to create degree plans aligned with their academic goals. This research introduces ARGObot, an intelligent advising system that facilitates student navigation of university policies through automated interpretation of the student handbook as its primary knowledge base. ARGObot enhances accessibility to critical academic policies and procedures, supporting incoming students' success through personalized guidance. Our system integrates a multifunctional agent enhanced by a Large Language Model (LLM). The architecture employs multiple external tools to enhance its capabilities: a Retrieval-Augmented Generation (RAG) system accesses verified university sources; email integration facilitates Human-in-the-Loop (HITL) interaction; and a web search function expands the system's knowledge base beyond predefined constraints. This approach enables the system to provide contextually relevant and verified responses to various student queries. This architecture evolved from our initial implementation based on Gemini 1 Pro, which revealed significant limitations due to its lack of agent-based functionality, resulting in hallucination issues and irrelevant responses. Subsequent evaluation demonstrated that our enhanced version, integrating GPT-4 with the text-embedding-ada-002 model, achieved superior performance across all metrics. This paper also presents a comparative analysis of both implementations, highlighting the architectural improvements and their impact on system performance.
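The RAG component described in this abstract can be illustrated with a minimal cosine-similarity retriever over embedded handbook chunks. This is a hedged sketch with assumed names and toy vectors, not the ARGObot codebase; in the real system the embeddings would come from a model such as text-embedding-ada-002 (as the abstract notes) and the retrieved chunks would be passed to the LLM as grounding context.

```python
import numpy as np

def retrieve(query_vec, doc_vecs, k=2):
    """Return indices of the k document chunks most similar to the query embedding."""
    q = np.asarray(query_vec, float)
    D = np.asarray(doc_vecs, float)
    q = q / np.linalg.norm(q)                               # unit-normalize query
    D = D / np.linalg.norm(D, axis=1, keepdims=True)        # unit-normalize chunks
    sims = D @ q                                            # cosine similarity per chunk
    return list(np.argsort(sims)[::-1][:k])                 # best-first indices
```

Grounding generation in the top-ranked chunks is what lets an agent-based design cite verified university sources instead of hallucinating, which is the failure mode the abstract attributes to the initial non-agent implementation.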
Conference proceeding
Design of the Hotelling T^2 Control Chart for Anomaly Detection in IoT Systems
Published 04/25/2025
Proceedings of IEEE Southeastcon, 1390 - 1396
IEEE Southeastcon: SoutheastCon, 2025 (PRT), 03/22/2025–03/30/2025, Concord (Charlotte), North Carolina, USA
IoT systems such as smart homes or cities capture large amounts of data in real time using a wide range of interconnected sensor devices linked to edge nodes and their associated cloud services. In such systems, sensors must operate continuously and reliably in their deployed environment to deliver meaningful data to the users. As time passes, sensors may fail or become less effective in collecting raw data, resulting in erroneous data being delivered to the end user, who relies on their accuracy for decision-making. In this paper, we study methods for detecting anomalies in sensor data based on Hotelling's T^2 statistic. In simulation experiments, we calculate the smallest shift in data streams that is detectable by our proposed methods, using sensor data from a smart home. Our results show that the proposed method successfully identifies low shifts in the means at a given confidence level with 90% accuracy. These results are data-driven and do not assume a specific distribution, which opens up new potential applications for IoT data and anomaly detection. Additionally, some low shifts can be difficult to detect visually, but our proposed method can detect them.
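Hotelling's T^2 statistic has a standard closed form, T^2 = (x - μ)ᵀ S⁻¹ (x - μ), which measures a multivariate reading's covariance-scaled distance from the in-control mean. The sketch below is illustrative only (the helper names and the upper control limit `ucl` are assumptions, not the paper's design):

```python
import numpy as np

def hotelling_t2(x, mean, cov):
    """Hotelling's T^2 distance of observation x from in-control mean/covariance."""
    diff = np.asarray(x, float) - np.asarray(mean, float)
    # Solve cov @ y = diff instead of inverting cov explicitly.
    return float(diff @ np.linalg.solve(np.asarray(cov, float), diff))

def control_chart(samples, mean, cov, ucl):
    """Flag each multivariate sensor reading whose T^2 exceeds the control limit."""
    return [hotelling_t2(s, mean, cov) > ucl for s in samples]
```

In a distribution-free setting like the one the abstract describes, the in-control mean and covariance would be estimated from historical sensor data and the control limit chosen empirically for the desired confidence level.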