List of works
Conference proceeding
CyberAI: Knowledge Area Frameworks for Cybersecurity Programs in the Age of Artificial Intelligence
Published 11/06/2025
Proceedings of the 26th ACM Annual Conference on Cybersecurity & Information Technology Education, 24 - 28
Annual ACM Conference on Cybersecurity and Information Technology Education: ACM SIGCITE 2025, 11/06/2025–11/08/2025, Sacramento, California, USA
The CyberAI Programs of Study (PoS) represent a pioneering step in integrating Artificial Intelligence (AI) with cybersecurity education. Sponsored by the U.S. National Science Foundation (NSF) and developed in collaboration with the U.S. National Security Agency's (NSA) National Centers of Academic Excellence in Cybersecurity (NCAE-C), the CyberAI initiative (www.towson.edu/cyberai) aims to produce a workforce adept in both cybersecurity skills and AI competencies. This paper presents the knowledge areas produced in collaboration with 200+ individuals, organized into two distinct programs of study: SecureAI, which focuses on securing the AI lifecycle, and AICyber, which applies AI tools and techniques to cybersecurity. A review is presented highlighting the evolution of cybersecurity educational standards and the growing necessity of interdisciplinary AI integration in higher education. Further, this paper outlines the development and validation processes for the new Knowledge Units (KUs) supporting these programs, presents findings from pilot implementations, and discusses a validation framework aligned with the U.S. National Institute of Standards and Technology (NIST) NICE Framework and the U.S. DoD Cyber Workforce Framework (DCWF) standards.
Conference proceeding
A Novel Approach to Fine-tune BERT using Non-Text Features for Enhanced Ransomware Detection
Published 09/06/2025
Proceedings of the 2025 3rd International Conference on Artificial Intelligence, Blockchain, and Internet of Things (AIBThings), September 06–07, 2025, Michigan, USA
International Conference on Artificial Intelligence, Blockchain, and Internet of Things (AIBThings), 09/06/2025–09/07/2025, Mt Pleasant, Michigan, USA
The growing complexity and volume of ransomware attacks demand advanced detection techniques that can effectively model dependencies within high-dimensional data. Traditional machine learning methods often struggle to capture nuanced relationships among features in such cybersecurity datasets. To address this problem, we propose a novel technique that transforms structured, non-linguistic data into descriptive natural language formats. This conversion enables tailored fine-tuning of a Bidirectional Encoder Representations from Transformers (BERT) architecture with optimized parameters. By leveraging BERT's multi-head self-attention mechanism, our method embeds non-textual ransomware data into semantic textual representations, allowing the attention heads to capture token relationships and dependencies from multiple perspectives and to transform each instance into a comprehensive latent representation in which inter-feature dependencies are effectively preserved. This allows BERT to understand contextual relevance among tokens, leading to superior classification performance. Our evaluation demonstrates that the resulting model surpasses other advanced solutions, achieving a classification accuracy of 99.21% compared with ensemble models (99.0%) and LSTM-based approaches (98.5%). Notably, this approach outperforms existing work while using only one-third of the data points, highlighting its potential and adaptability in cybersecurity domains where no text dataset is available to provide natural language context for the model.
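The sketch below illustrates the general idea described in the abstract: serializing structured, non-text features into natural language sentences and fine-tuning BERT on the result. It assumes the Hugging Face transformers library; the feature names, serialization template, and hyperparameters are illustrative placeholders, not the paper's exact scheme.

```python
# Minimal sketch: serialize tabular (non-text) ransomware features into natural-language
# strings, then fine-tune BERT for binary classification. Column names and the sentence
# template are illustrative assumptions, not the exact scheme used in the paper.
import torch
from torch.utils.data import Dataset
from transformers import (BertTokenizerFast, BertForSequenceClassification,
                          Trainer, TrainingArguments)

def row_to_sentence(row):
    # Turn one structured record into a descriptive sentence.
    return (f"The process made {row['api_calls']} API calls, "
            f"touched {row['files_modified']} files, and "
            f"opened {row['network_connections']} network connections.")

class RansomwareTextDataset(Dataset):
    def __init__(self, rows, labels, tokenizer):
        texts = [row_to_sentence(r) for r in rows]
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=128)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# rows/labels would come from the structured ransomware dataset; one toy record here.
rows = [{"api_calls": 412, "files_modified": 310, "network_connections": 7}]
labels = [1]
train_ds = RansomwareTextDataset(rows, labels, tokenizer)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train_ds,
)
trainer.train()
```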
Conference proceeding
On the Design and Visualization of Connected Vehicle Security Metrics
Published 04/16/2025
Proceedings of the Third International Conference on Advances in Computing Research (ACR’25), 1346, 358 - 374
International Conference on Advances in Computing Research (ACR’25)
The rapid advancement of connected and autonomous vehicles has created new challenges for security and safety professionals. The sophistication of vehicle communication systems, both external and internal, adds further complexity to the issue. In security parlance, this is an expansion of the attack surface of vehicles. These challenges have prompted government, industry, and trade organizations to enhance existing safety and security standards and to develop new ones. These initiatives clearly underscore the need to examine the state of connected vehicle security and to develop effective security metrics. As a major component of continuous improvement, quantitative and qualitative measures must be devised to fully assess the process. This paper builds upon previous research on connected vehicle security metrics, offers new metrics, and proposes visualization systems to enhance their utilization.
Preprint
Adaptive Additive Parameter Updates of Vision Transformers for Few-Shot Continual Learning
Posted to a preprint site 04/11/2025
Integrating new class information without losing previously acquired knowledge remains a central challenge in artificial intelligence, often referred to as catastrophic forgetting. Few-shot class incremental learning (FSCIL) addresses this by first training a model on a robust dataset of base classes and then incrementally adapting it in successive sessions using only a few labeled examples per novel class. However, this approach is prone to overfitting on the limited new data, which can compromise overall performance and exacerbate forgetting. In this work, we propose a simple yet effective FSCIL framework that leverages a frozen Vision Transformer (ViT) backbone augmented with parameter-efficient additive updates. Our approach freezes the pre-trained ViT parameters and selectively injects trainable weights into the self-attention modules via an additive update mechanism. This design updates only a small subset of parameters to accommodate new classes without sacrificing the representations learned during the base session. By fine-tuning a limited number of parameters, our method preserves the generalizable features in the frozen ViT while reducing the risk of overfitting. Furthermore, because most parameters remain fixed, the model avoids overwriting previously learned knowledge when small batches of novel data are introduced. Extensive experiments on benchmark datasets demonstrate that our approach yields state-of-the-art performance compared to baseline FSCIL methods.
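The following PyTorch sketch shows one way the additive update mechanism described above could look: a frozen projection layer is wrapped with a trainable delta weight, so only the delta (and any new-class classifier) is optimized in an incremental session. The module names and sizes are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of additive parameter updates on a frozen backbone (PyTorch).
import torch
import torch.nn as nn

class AdditiveLinear(nn.Module):
    """Frozen linear layer plus a small trainable additive update."""
    def __init__(self, frozen: nn.Linear):
        super().__init__()
        self.frozen = frozen
        for p in self.frozen.parameters():
            p.requires_grad = False          # keep pre-trained weights intact
        self.delta = nn.Parameter(torch.zeros_like(frozen.weight))  # trainable update

    def forward(self, x):
        w = self.frozen.weight + self.delta  # effective weight = frozen + delta
        return nn.functional.linear(x, w, self.frozen.bias)

# Example: inject the additive update into a qkv-style projection of an attention block.
qkv = nn.Linear(768, 3 * 768)                # stands in for a pre-trained ViT projection
adaptive_qkv = AdditiveLinear(qkv)

x = torch.randn(4, 197, 768)                 # (batch, tokens, dim) for a ViT-B/16-like model
out = adaptive_qkv(x)                        # same shape behavior as the original projection

# During a few-shot session, only parameters with requires_grad=True are optimized.
trainable = [p for p in adaptive_qkv.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=1e-3)
```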
Book chapter
Towards the Generation of Learning Objects with Generative Artificial Intelligence
Published 03/30/2025
Applied Cognitive Computing and Artificial Intelligence, 2251, 343 - 355
This conference paper was published in the proceedings for CSCE 2024.
This paper describes ongoing research on the use of Generative Artificial Intelligence (GenAI) in generating learning objects. Learning objects are digital or non-digital artifacts that can be used, re-used, or referenced to augment or enhance the learning process. Examples include presentation slides, images, text, surveys, quizzes, and hands-on exercises. The unprecedented availability and capability of GenAI tools in recent years lead us to consider how their technical capacities and abilities can bring about effective and useful learning objects. We first explore the published literature to survey reported work on applying GenAI to generate learning objects. Next, we review their technical features and look closely at the distinctive features of the tools used in various GenAI models. The focus of this research is to develop a method of utilizing freely available GenAI tools to expedite the generation of learning objects and to evaluate their effectiveness. Specifically, we seek to optimize the utilization of these AI-generated learning objects for active-learning applications and learning best practices.
Journal article
Published 2025
IEEE Access, 13, 1
As networks continue to expand and become more interconnected, the need for novel malware detection methods becomes more pronounced. Traditional security measures are increasingly inadequate against the sophistication of modern cyber attacks. Deep Packet Inspection (DPI) has been pivotal in enhancing network security, offering an in-depth analysis of network traffic that surpasses conventional monitoring techniques. DPI not only examines the metadata of network packets, but also dives into the actual content carried within the packet payloads, providing a comprehensive view of the data flowing through networks. While the integration of advanced deep learning techniques with DPI has introduced modern methodologies into malware detection and network traffic classification, state-of-the-art supervised learning approaches are limited by their reliance on large amounts of annotated data and their inability to generalize to novel, unseen malware threats. To address these limitations, this paper leverages recent advancements in self-supervised learning (SSL) and few-shot learning (FSL). Our proposed self-supervised approach trains a transformer via SSL to learn embeddings of packet content, including payload, from vast amounts of unlabeled data by masking portions of packets, leading to a learned representation that generalizes to various downstream tasks. Once these representations are extracted from the packets, they are used to train a malware detection algorithm. The representation obtained from the transformer is then used to adapt the malware detector to novel types of attacks using few-shot learning approaches. Our experimental results demonstrate that our method achieves classification accuracies of up to 94.76% on the UNSW-NB15 dataset and 83.25% on the CIC-IoT23 dataset.
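A minimal sketch of the masked-byte self-supervised objective described above is shown below: a transformer encoder is trained to reconstruct randomly masked payload bytes, and the resulting embeddings could then feed a downstream (few-shot) malware detector. The model sizes, sequence length, and masking ratio are illustrative assumptions.

```python
# Minimal sketch of masked-byte self-supervised pretraining over packet payloads (PyTorch).
import torch
import torch.nn as nn

VOCAB = 257          # 256 byte values + 1 mask token
MASK_ID = 256
SEQ_LEN = 128

class PacketEncoder(nn.Module):
    def __init__(self, d_model=128, nhead=4, num_layers=4):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model)
        self.pos = nn.Parameter(torch.zeros(1, SEQ_LEN, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 256)    # predict the original byte value

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens) + self.pos)
        return self.head(h), h                 # logits for masked bytes, learned embeddings

model = PacketEncoder()
payload = torch.randint(0, 256, (8, SEQ_LEN))          # batch of raw payload bytes
mask = torch.rand(payload.shape) < 0.15                # mask ~15% of positions
inputs = payload.masked_fill(mask, MASK_ID)

logits, embeddings = model(inputs)
loss = nn.functional.cross_entropy(logits[mask], payload[mask])  # reconstruct masked bytes
loss.backward()
```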
Preprint
Posted to a preprint site 12/11/2024
Backdoor attacks pose a critical threat by embedding hidden triggers into inputs, causing models to misclassify them into target labels. While extensive research has focused on mitigating these attacks in object recognition models through weight fine-tuning, much less attention has been given to detecting backdoored samples directly. Given the vast datasets used in training, manual inspection for backdoor triggers is impractical, and even state-of-the-art defense mechanisms fail to fully neutralize their impact. To address this gap, we introduce a groundbreaking method to detect unseen backdoored images during both training and inference. Leveraging the transformative success of prompt tuning in Vision Language Models (VLMs), our approach trains learnable text prompts to differentiate clean images from those with hidden backdoor triggers. Experiments demonstrate the exceptional efficacy of this method, achieving an impressive average accuracy of 86% across two renowned datasets for detecting unseen backdoor triggers, establishing a new standard in backdoor defense.
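The sketch below is a simplified stand-in for the prompt-tuning idea described above, using a frozen CLIP image encoder from Hugging Face transformers. Instead of tuning token embeddings inside the text encoder as full prompt tuning would, it learns two class vectors (clean vs. backdoored) directly in CLIP's joint embedding space; the random tensors stand in for real preprocessed images, and none of this reflects the paper's exact training setup.

```python
# Simplified sketch: learnable "prompt" vectors against a frozen CLIP image encoder.
import torch
import torch.nn as nn
from transformers import CLIPModel

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
for p in clip.parameters():
    p.requires_grad = False                        # keep the VLM frozen

embed_dim = clip.config.projection_dim             # 512 for ViT-B/32
prompts = nn.Parameter(torch.randn(2, embed_dim) * 0.02)   # learnable class vectors
optimizer = torch.optim.Adam([prompts], lr=1e-3)

def logits_for(pixel_values):
    # Cosine similarity between frozen image features and the learnable prompts.
    img = clip.get_image_features(pixel_values=pixel_values)
    img = img / img.norm(dim=-1, keepdim=True)
    txt = prompts / prompts.norm(dim=-1, keepdim=True)
    return img @ txt.t()                            # (batch, 2): clean vs. backdoored

# One illustrative training step with random tensors standing in for real images.
pixel_values = torch.randn(4, 3, 224, 224)          # preprocessed image batch
labels = torch.tensor([0, 1, 0, 1])                 # 0 = clean, 1 = backdoored
loss = nn.functional.cross_entropy(logits_for(pixel_values), labels)
loss.backward()
optimizer.step()
```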
Conference proceeding
Towards Novel Malicious Packet Recognition: A Few-Shot Learning Approach
Published 10/28/2024
MILCOM IEEE Military Communications Conference, 847 - 852
MILCOM 2024: IEEE Military Communications Conference, 10/28/2024–11/01/2024, Washington, District of Columbia (DC), USA
As the complexity and connectivity of networks increase, the need for novel malware detection approaches becomes imperative. Traditional security defenses are becoming less effective against the advanced tactics of today's cyberattacks. Deep Packet Inspection (DPI) has emerged as a key technology in strengthening network security, offering detailed analysis of network traffic that goes beyond simple metadata analysis. DPI examines not only the packet headers but also the payload content within, offering a thorough insight into the data traversing the network. This study proposes a novel approach that leverages a large language model (LLM) and few-shot learning to accurately recognize novel, unseen malware types from only a few labeled samples. Our proposed approach uses an LLM pretrained on known malware types to extract embeddings from packets. The embeddings are then used alongside a few labeled samples of an unseen malware type. This technique is designed to acclimate the model to different malware representations, further enabling it to generate robust embeddings for both trained and unseen classes. Following the extraction of embeddings from the LLM, few-shot learning is utilized to enhance performance with minimal labeled data. Our evaluation, which utilized two renowned datasets, focused on identifying malware types within network traffic and Internet of Things (IoT) environments. Our approach shows promising results, with an average accuracy of 86.35% and F1-score of 86.40% across different malware types in the two datasets.
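The sketch below illustrates the kind of few-shot step that could sit on top of pre-extracted packet embeddings: a prototypical-network-style classifier, which is one common few-shot choice rather than necessarily the paper's exact method. The random tensors are placeholders for the LLM-derived packet representations.

```python
# Minimal sketch of a prototypical-network-style few-shot classifier over packet embeddings.
import torch

def prototypes(support_emb, support_labels, num_classes):
    # Average the few labeled support embeddings of each class into a prototype.
    return torch.stack([support_emb[support_labels == c].mean(dim=0)
                        for c in range(num_classes)])

def classify(query_emb, protos):
    # Assign each query packet to the nearest class prototype (Euclidean distance).
    dists = torch.cdist(query_emb, protos)
    return dists.argmin(dim=1)

num_classes, shots, dim = 3, 5, 768
support_emb = torch.randn(num_classes * shots, dim)            # few labeled packets per class
support_labels = torch.arange(num_classes).repeat_interleave(shots)
query_emb = torch.randn(10, dim)                                # unseen packets to classify

protos = prototypes(support_emb, support_labels, num_classes)
predictions = classify(query_emb, protos)
print(predictions)
```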
Conference proceeding
A Transformer-Based Framework for Payload Malware Detection and Classification
Published 05/29/2024
2024 IEEE World AI IoT Congress (AIIoT), 105 - 111
IEEE World AI IoT Congress (AIIoT), 05/29/2024–05/31/2024, Seattle, Washington, USA
As malicious cyber threats become more sophisticated in breaching computer networks, the need for effective intrusion detection systems (IDSs) becomes crucial. Techniques such as Deep Packet Inspection (DPI) have been introduced to allow IDSs to analyze the content of network packets, providing more context for identifying potential threats. IDSs traditionally rely on anomaly-based and signature-based detection techniques to detect unrecognized and suspicious activity. Deep learning techniques have shown great potential in DPI for IDSs due to their efficiency in learning intricate patterns from the packet content transmitted through the network. In this paper, we propose an accurate DPI algorithm based on transformers, adapted for detecting malicious traffic with a classifier head. Transformers learn the complex content of sequence data and generalize well to similar scenarios thanks to their self-attention mechanism. Our proposed method uses the raw payload bytes that represent the packet contents and is deployed as a man-in-the-middle. The payload bytes are used to detect malicious packets and classify their types. Experimental results on the UNSW-NB15 and CIC-IoT23 datasets demonstrate that our transformer-based model is effective in distinguishing malicious from benign traffic in the test dataset, attaining an average accuracy of 79% in binary classification and 72% in multi-class classification, both using solely payload bytes.
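Below is a minimal PyTorch sketch of a byte-level transformer with a classification head over raw payload bytes, the general architecture the abstract describes. The hyperparameters, sequence length, and use of a CLS token are illustrative choices, not the paper's exact configuration.

```python
# Minimal sketch of a byte-level transformer classifier for packet payloads (PyTorch).
import torch
import torch.nn as nn

class PayloadClassifier(nn.Module):
    def __init__(self, num_classes, seq_len=64, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(257, d_model)               # 256 byte values + CLS token
        self.pos = nn.Parameter(torch.zeros(1, seq_len + 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.cls_head = nn.Linear(d_model, num_classes)

    def forward(self, payload_bytes):
        cls = torch.full((payload_bytes.size(0), 1), 256, dtype=torch.long,
                         device=payload_bytes.device)          # prepend CLS token id
        tokens = torch.cat([cls, payload_bytes], dim=1)
        h = self.encoder(self.embed(tokens) + self.pos[:, :tokens.size(1)])
        return self.cls_head(h[:, 0])                          # classify from CLS position

model = PayloadClassifier(num_classes=2)                        # benign vs. malicious
batch = torch.randint(0, 256, (8, 64))                          # raw payload bytes
logits = model(batch)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (8,)))
```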
Conference proceeding
Open Platform Infrastructure for Industrial Control Systems Security
Published 03/2024
Proceedings of the Second International Conference on Advances in Computing Research (ACR’24), 233 - 243
International Conference on Advances in Computing Research (ACR’24), 06/03/2024–06/05/2024, IE University, Madrid, Spain
The introduction of Docker containers ushered in the emergence of microservices, facilitating efficient ways to deploy and manage containerized applications. Digital twins in Industrial Control Systems (ICS) have enabled advances in the test and evaluation of those systems in a low-cost and non-disruptive manner. In this paper, we present our work on advancing the security of Industrial Control Systems through a four-pronged approach: i) provide a safe training infrastructure for ICS security; ii) present an effective avenue for ICS security testing without operational disruption; iii) implement ICS digital twins to enable ICS security training; and iv) facilitate the design, implementation, and evaluation of ICS security tools. To realize these objectives, we propose the utilization of Open Platform Infrastructure (OPI) with Docker technologies to deploy virtualized Programmable Logic Controllers (PLCs), also known as softPLCs, and Human Machine Interfaces (HMIs) that can emulate or act as digital twins of ICS. Further, we describe several Docker containers instantiated from Dockerfiles to emulate typical Information Technology (IT) and Operational Technology (OT) networks, illustrating the viability and affordability of such implementations for teaching, learning, and testing of ICS security.
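A minimal sketch of how such an emulated OT segment could be stood up programmatically with the Docker SDK for Python is shown below. The image names, network name, ports, and environment variable are hypothetical placeholders; the paper builds its containers from project-specific Dockerfiles.

```python
# Minimal sketch: spin up a softPLC and an HMI container on an isolated bridge network.
import docker

client = docker.from_env()

# Isolated bridge network standing in for the OT segment.
ot_net = client.networks.create("ot-segment", driver="bridge")

# A softPLC container (image name is a placeholder for a locally built image).
plc = client.containers.run(
    "example/softplc:latest",
    name="plc-1",
    network="ot-segment",
    ports={"502/tcp": 5020},        # expose Modbus/TCP on the host for testing
    detach=True,
)

# An HMI container pointed at the softPLC (environment variable is illustrative).
hmi = client.containers.run(
    "example/hmi:latest",
    name="hmi-1",
    network="ot-segment",
    environment={"PLC_HOST": "plc-1"},
    ports={"8080/tcp": 8080},
    detach=True,
)

print(plc.status, hmi.status)
```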