List of works
Conference proceeding
Towards Interpretable Adversarial Examples via Sparse Adversarial Attack
Published 2026
Machine Learning and Knowledge Discovery in Databases. Research Track, 92 - 110
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2025), 09/15/2025–09/19/2025, Porto, Portugal
Sparse attacks optimize the magnitude of adversarial perturbations that fool deep neural networks (DNNs) while perturbing only a few pixels (i.e., under the l0 constraint), making them well suited to interpreting the vulnerability of DNNs. However, existing solutions fail to yield interpretable adversarial examples due to their poor sparsity. Worse still, they often struggle with heavy computational overhead, poor transferability, and weak attack strength. In this paper, we aim to develop a sparse attack for understanding the vulnerability of DNNs by minimizing the magnitude of initial perturbations under the l0 constraint, overcoming these drawbacks while achieving a fast, transferable, and strong attack on DNNs. In particular, a novel and theoretically sound parameterization technique is introduced to approximate the NP-hard l0 optimization problem, making direct optimization of sparse perturbations computationally feasible. Besides, a novel loss function is designed to augment initial perturbations by simultaneously maximizing the adversarial property and minimizing the number of perturbed pixels. Extensive experiments demonstrate that our approach, with theoretical performance guarantees, outperforms state-of-the-art sparse attacks in terms of computational overhead, transferability, and attack strength, and is expected to serve as a benchmark for evaluating the robustness of DNNs. In addition, theoretical and empirical results validate that our approach yields sparser adversarial examples, empowering us to discover two categories of noise, i.e., “obscuring noise” and “leading noise”, which help interpret how adversarial perturbations mislead classifiers into incorrect predictions. Our code is available at https://github.com/fudong03/SparseAttack.
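The abstract above describes relaxing the NP-hard l0 constraint so that sparse perturbations can be optimized directly. The following is a minimal PyTorch sketch of one common relaxation, not the paper's actual parameterization: a sigmoid-gated mask selects which pixels are perturbed, and a penalty on the mask stands in for the l0 count. All names, weights, and the sign-based update are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def sparse_attack_step(model, x, y, delta, mask_logits, lr=0.01, sparsity_weight=1.0):
    """One step of a hypothetical sparse attack (illustrative, not the paper's method).

    delta       -- dense perturbation, same shape as the input x
    mask_logits -- per-pixel logits; sigmoid(mask_logits) softly selects perturbed pixels,
                   acting as a smooth stand-in for the NP-hard l0 constraint
    """
    delta = delta.detach().requires_grad_(True)
    mask_logits = mask_logits.detach().requires_grad_(True)

    mask = torch.sigmoid(mask_logits)             # soft selection of perturbed pixels
    x_adv = torch.clamp(x + mask * delta, 0, 1)   # apply only the gated perturbation

    logits = model(x_adv)
    adv_loss = -F.cross_entropy(logits, y)        # maximize misclassification (untargeted)
    sparsity_loss = mask.mean()                   # proxy pushing the mask toward zero
    loss = adv_loss + sparsity_weight * sparsity_loss

    loss.backward()
    with torch.no_grad():
        delta -= lr * delta.grad.sign()
        mask_logits -= lr * mask_logits.grad.sign()
    return delta.detach(), mask_logits.detach()
```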
Conference proceeding
Constrained Edge AI Deployment: Fine-Tuning vs. Distillation for LLM Compression
Published 10/2025
MILCOM 2025 - 2025 IEEE Military Communications Conference (MILCOM), 1500 - 1505
IEEE Military Communications Conference (MILCOM), 10/06/2025–10/10/2025, Los Angeles, California, USA
Modern foundational models are often compressed via a combination of structured pruning and re-training to meet the strict compute, memory, and connectivity constraints of edge deployments. While state-of-the-art (SoTA) pruning schemes target the entire Transformer, we adopt a simple, layer-wise L2-norm pruning on only the multi-layer perceptron (MLP) blocks as a fixed baseline. Our focus is not on achieving maximal compression, but on isolating the impact of the re-training loss function: (i) L2-norm Pruning with Cross-Entropy Fine-Tuning (L2PFT), which relies on labeled data, versus (ii) L2-norm Pruning with KL-Divergence Self-Distillation (L2PSD), which utilizes only teacher logits without requiring labeled data. We evaluate both pipelines on the OLMo2-7B-SFT model for CommonsenseQA, a setting suitable for the intermittent or denied connectivity typical of edge networks. Under identical pruning schedules, L2PSD achieves comparable or superior test accuracy to L2PFT, indicating that the choice of loss function has a significant impact on compressed model recovery in resource-constrained environments.
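As a rough illustration of the two pipelines compared above, the sketch below prunes the lowest-L2-norm rows of an MLP weight matrix and then selects between a cross-entropy objective (L2PFT) and a KL-divergence self-distillation objective (L2PSD) for recovery. The function names, keep ratio, and temperature are assumptions for illustration, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def l2_prune_linear(weight: torch.Tensor, keep_ratio: float = 0.7) -> torch.Tensor:
    """Zero out the output rows of an MLP linear layer with the smallest L2 norms."""
    norms = weight.norm(p=2, dim=1)                 # one norm per output neuron
    k = int(keep_ratio * weight.size(0))
    keep = torch.topk(norms, k).indices
    mask = torch.zeros_like(norms, dtype=torch.bool)
    mask[keep] = True
    return weight * mask.unsqueeze(1)

def recovery_loss(student_logits, teacher_logits, labels, mode="L2PSD", temperature=2.0):
    """Two re-training objectives named in the abstract (hyperparameters are illustrative).

    L2PFT -- cross-entropy against ground-truth labels (needs labeled data)
    L2PSD -- KL divergence against frozen teacher logits (label-free self-distillation)
    """
    if mode == "L2PFT":
        return F.cross_entropy(student_logits, labels)
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```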
Conference proceeding
Neurosymbolic AI Transfer Learning Improves Network Intrusion Detection
Published 10/2025
MILCOM IEEE Military Communications Conference, 496 - 501
IEEE Military Communications Conference (MILCOM), 10/06/2025–10/10/2025, Los Angeles, California, USA
Transfer learning is commonly utilized in various fields such as computer vision, natural language processing, and medical imaging due to its impressive capability to address sub-tasks and work with different datasets. However, its application in cybersecurity has not been thoroughly explored. In this paper, we present an innovative neurosymbolic AI framework designed for network intrusion detection systems, which play a crucial role in combating malicious activities in cybersecurity. Our framework leverages transfer learning and uncertainty quantification. The findings indicate that transfer learning models, trained on large and well-structured datasets, outperform neural-based models that rely on smaller datasets, paving the way for a new era in cybersecurity solutions.
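A minimal sketch of the transfer-learning pattern the abstract describes: reuse a backbone pretrained on a large, well-structured dataset and fine-tune only a small classification head on the smaller intrusion-detection dataset. The module names and hyperparameters are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class NIDSClassifier(nn.Module):
    """Illustrative transfer-learning setup (not the paper's exact framework)."""
    def __init__(self, backbone: nn.Module, feat_dim: int, n_classes: int):
        super().__init__()
        self.backbone = backbone                      # pretrained on a large source dataset
        self.head = nn.Linear(feat_dim, n_classes)    # new head for the smaller target dataset

    def forward(self, x):
        return self.head(self.backbone(x))

def prepare_for_transfer(model: NIDSClassifier, freeze_backbone: bool = True):
    """Freeze the pretrained layers and optimize only the task-specific head."""
    for p in model.backbone.parameters():
        p.requires_grad = not freeze_backbone
    return torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-3)
```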
Conference proceeding
Neuro-Symbolic Integration for Open Set Recognition in Network Intrusion Detection
Published 01/01/2025
AIxIA 2024 – Advances in Artificial Intelligence, 15450, 50 - 63
International Conference of the Italian Association for Artificial Intelligence, 11/25/2024–11/28/2024, Bolzano, Italy
Open Set Recognition (OSR) addresses the challenge of classifying inputs into known and unknown categories, a crucial task where labeling is often prohibitively expensive or incomplete. This is particularly vital in applications like Network Intrusion Detection Systems (NIDS), where OSR is used to identify novel, previously unknown attacks. We propose a neuro-symbolic integration approach that combines deep learning and symbolic methods, enhancing deep embedding for clustering with custom loss functions and leveraging XGBoost's decision tree algorithms. Our methodology not only robustly addresses the identification of previously unknown attacks in NIDS but also effectively manages scenarios involving covariance shift. We demonstrate the efficacy of our approach through extensive experimentation, achieving an AUROC of 0.99 in both contexts. This paper presents a significant step forward in OSR for network intrusion detection by integrating deep and symbolic learning to handle unforeseen challenges in dynamic environments.
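The sketch below illustrates the general shape of the pipeline described above: a neural encoder produces embeddings, XGBoost classifies them, and samples with low predictive confidence are flagged as unknown. The encoder architecture, confidence threshold, and unknown-detection rule are assumptions for illustration; the paper's custom clustering losses are not reproduced here.

```python
import torch
import torch.nn as nn
from xgboost import XGBClassifier

class Encoder(nn.Module):
    """Toy embedding network standing in for the paper's deep-clustering encoder."""
    def __init__(self, in_dim: int, emb_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim))
    def forward(self, x):
        return self.net(x)

def fit_osr_pipeline(encoder, X_train, y_train):
    """Train XGBoost on learned embeddings of the known classes."""
    with torch.no_grad():
        emb = encoder(torch.as_tensor(X_train, dtype=torch.float32)).numpy()
    clf = XGBClassifier(n_estimators=200, max_depth=6)
    clf.fit(emb, y_train)
    return clf

def predict_with_unknowns(encoder, clf, X, threshold=0.6):
    """Flag samples whose top class probability is below a threshold as 'unknown'."""
    with torch.no_grad():
        emb = encoder(torch.as_tensor(X, dtype=torch.float32)).numpy()
    proba = clf.predict_proba(emb)
    preds = proba.argmax(axis=1)
    preds[proba.max(axis=1) < threshold] = -1     # -1 marks a previously unseen category
    return preds
```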
Conference proceeding
Mitigating Large Vision-Language Model Hallucination at Post-hoc via Multi-agent System
Published 11/08/2024
Proceedings of the AAAI Symposium Series, 4: AI Trustworthiness and Risk Assessment for Challenging Contexts (ATRACC) - Short Papers, 110 - 113
The Association for the Advancement of Artificial Intelligence’s 2024 Fall Symposium, 11/07/2024–11/09/2024, Arlington, Virginia, USA
This paper addresses the critical issue of hallucination in Large Vision-Language Models (LVLMs) by proposing a novel multi-agent framework. We integrate three post-hoc correction techniques: self-correction, external feedback, and agent debate, to enhance LVLM trustworthiness. Our approach tackles key challenges in LVLM hallucination, including weak visual encoders, parametric knowledge bias, and loss of visual attention during inference. The framework employs a plug-in LVLM as the base model whose hallucinations are to be reduced, a Large Language Model (LLM) for guided refinement, external toolbox models for factual grounding, and an agent debate system for consensus-building. While promising, we also discuss potential limitations and technical challenges in implementing such a complex system. This work contributes to the ongoing effort to create more reliable and trustworthy multimodal multi-agent systems.
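A schematic sketch of how the three post-hoc correction stages named above might be orchestrated: a base LVLM drafts an answer, external tools supply factual grounding, an LLM refines the draft, and debate agents iterate toward consensus. All callables and prompts here are placeholders; the paper's agents and prompting strategy may differ.

```python
def posthoc_correct(image, question, lvlm, refiner_llm, tools, debaters, rounds=2):
    """Hedged sketch of a post-hoc multi-agent correction loop (all names illustrative)."""
    answer = lvlm(image, question)                        # 1) base LVLM draft answer

    evidence = [tool(image) for tool in tools]            # 2) external factual grounding
    answer = refiner_llm(
        f"Question: {question}\nDraft: {answer}\nEvidence: {evidence}\n"
        "Revise the draft so it is consistent with the evidence."
    )

    for _ in range(rounds):                               # 3) agent debate toward consensus
        critiques = [agent(question, answer, evidence) for agent in debaters]
        answer = refiner_llm(
            f"Draft: {answer}\nCritiques: {critiques}\nProduce a consensus answer."
        )
    return answer
```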
Conference proceeding
Uncertainty-Quantified Neurosymbolic AI for Open Set Recognition in Network Intrusion Detection
Published 10/28/2024
MILCOM IEEE Military Communications Conference, 13 - 18
MILCOM 2024, 10/28/2024–11/01/2024, Washington, DC, USA
Network Intrusion Detection Systems (NIDS) are crucial for safeguarding networks by detecting and classifying malicious traffic in real time. This paper presents a novel neurosymbolic artificial intelligence (NSAI) approach for enhancing NIDS, called ODXU, which combines neural networks with symbolic reasoning to improve classification accuracy, particularly for challenging classes. Additionally, we introduce uncertainty quantification techniques (Confidence Scoring, Shannon Entropy, and post-hoc Uncertainty Metamodeling) to enhance the reliability of the NIDS. Our experimental results demonstrate that our NSAI model, coupled with post-hoc Uncertainty Metamodeling, outperforms traditional methods, providing superior detection accuracy and robust uncertainty estimates.
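Of the three uncertainty quantification techniques listed above, confidence scoring and Shannon entropy can be computed directly from the classifier's predictive distribution, as in this small NumPy sketch (the post-hoc uncertainty metamodel is not shown, and the example values are illustrative).

```python
import numpy as np

def confidence_score(probs: np.ndarray) -> np.ndarray:
    """Confidence score: probability assigned to the predicted class."""
    return probs.max(axis=-1)

def shannon_entropy(probs: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Shannon entropy of the predictive distribution; higher means more uncertain."""
    return -(probs * np.log(probs + eps)).sum(axis=-1)

# Example: one confident prediction and one ambiguous prediction
p = np.array([[0.97, 0.02, 0.01],
              [0.40, 0.35, 0.25]])
print(confidence_score(p))   # [0.97 0.40]
print(shannon_entropy(p))    # low entropy for the first row, higher for the second
```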
Conference proceeding
A Neuro-Symbolic Artificial Intelligence Network Intrusion Detection System
Published 07/29/2024
ICCCN 2024: 2024 33rd International Conference on Computer Communications and Networks (ICCCN): July 29 – July 31, 2024: Final Program
International Conference on Computer Communications and Networks (ICCCN), 07/29/2024–07/31/2024, Kailua-Kona, Hawaii, USA
Ever-changing cyber threats require strong and flexible network security solutions. This paper suggests a new method to improve the performance of detecting both known and unknown attacks using a neuro-symbolic artificial intelligence (NSAI) network intrusion detection system (NIDS). Deep neural networks (DNN) learn complex network data patterns, which create a detailed overview of cyber-attack characteristics. Integrating symbolic logic into the DNN guides model training by applying penalties when the DNN fails to differentiate between malicious and benign network traffic. This improves our model's adaptability to new attacks and overcomes traditional signature-based NIDS limitations. By testing our NSAI NIDS on a large cyber dataset that includes novel attack scenarios, we show that it detects attacks more accurately than traditional DNN methods. While our system maintains its high accuracy in recognizing known attacks, it outperforms conventional NIDS in discovering unknown attacks. This work improves cybersecurity by introducing a new way to detect both known and unknown network intrusions by combining DNNs with symbolic logic.
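The abstract describes penalizing the DNN when it fails to separate malicious from benign traffic. Below is a hedged sketch of one way such a symbolic penalty can be attached to the training loss; the rule encoding, class indices, and weighting are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def nsai_loss(logits, labels, rule_violation_weight=0.5, malicious_classes=(1,)):
    """Cross-entropy plus a symbolic penalty term (illustrative sketch only).

    The penalty fires when the network assigns probability mass to the benign class
    for samples that the labels (or symbolic rules) mark as malicious.
    """
    ce = F.cross_entropy(logits, labels)
    probs = F.softmax(logits, dim=-1)
    is_malicious = torch.isin(labels, torch.as_tensor(malicious_classes, device=labels.device))
    benign_prob = probs[:, 0]                    # assumes class index 0 is benign traffic
    penalty = (benign_prob * is_malicious.float()).mean()
    return ce + rule_violation_weight * penalty
```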
Conference proceeding
Neurosymbolic AI in Cybersecurity: Bridging Pattern Recognition and Symbolic Reasoning
Published 11/2023
MILCOM IEEE Military Communications Conference, 268 - 273
IEEE Military Communications Conference (MILCOM): MILCOM 2023, 10/30/2023–11/03/2023, Boston, Massachusetts, USA
In the face of escalating cyber threats, traditional security measures often fall short, prompting the need for more advanced and interpretable solutions. Neurosymbolic AI, which synergistically combines the pattern recognition capabilities of neural networks with the explicit reasoning of symbolic systems, presents a promising avenue in this realm. This paper offers a comprehensive exploration of Neurosymbolic AI and its potential in enhancing Intrusion Detection Systems (IDS). By integrating data-driven learning with structured reasoning, Neurosymbolic AI promises more robust, adaptive, and transparent cybersecurity solutions. However, challenges such as model interpretability, data requirements, and adaptability in dynamic threat landscapes persist. This paper provides an overview of these challenges, emphasizing the transformative potential of Neurosymbolic AI in fortifying the cybersecurity domain.
Conference proceeding
Published 08/2023
2023 IEEE Conference on Artificial Intelligence (CAI), 36 - 37
IEEE Conference on Artificial Intelligence (CAI), 06/05/2023–06/06/2023, Santa Clara, California, USA
Robustness against distribution shifts is crucial for object detection models in real-world applications. In this study, we evaluate the performance of four state-of-the-art models against natural perturbations, retrain them with synthetic perturbations using the AugLy augmentation package, and assess their improved performance against natural perturbations. Our empirical ablation study focuses on the brightness perturbation modality using the COCO 2017 and ExDARK datasets. Our findings suggest that synthetic perturbations can effectively improve model robustness against real-world distribution shifts, providing valuable insights for deploying robust object detection models in real-world scenarios.
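The sketch below shows how brightness perturbations of the kind described above can be generated with the AugLy package for retraining; the specific brightness factors are illustrative, and the exact AugLy calls used in the paper may differ.

```python
from PIL import Image
import augly.image as imaugs

def make_brightness_variants(path, factors=(0.3, 0.6, 1.4)):
    """Generate synthetically darkened/brightened copies of a training image.

    Factors below 1.0 darken the image (loosely mimicking ExDark-style low-light
    conditions); factors above 1.0 brighten it. Values here are illustrative.
    """
    image = Image.open(path)
    return [imaugs.brightness(image, factor=f) for f in factors]
```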
Conference proceeding
Published 06/2023
CVIPPR '23: Proceedings of the 2023 Asia Conference on Computer Vision, Image Processing and Pattern Recognition, 7
CVIPPR 2023: 2023 Asia Conference on Computer Vision, Image Processing and Pattern Recognition, 04/28/2023–04/30/2023, Phuket, Thailand
Robustness against real-world distribution shifts is crucial for the successful deployment of object detection models in practical applications. In this paper, we address the problem of assessing and enhancing the robustness of object detection models against natural perturbations, such as varying lighting conditions, blur, and brightness. We analyze four state-of-the-art deep neural network models, Detr-ResNet-101, Detr-ResNet-50, YOLOv4, and YOLOv4-tiny, using the COCO 2017 dataset and ExDark dataset. By simulating synthetic perturbations with the AugLy package, we systematically explore the optimal level of synthetic perturbation required to improve the models’ robustness through data augmentation techniques. Our comprehensive ablation study meticulously evaluates the impact of synthetic perturbations on object detection models’ performance against real-world distribution shifts, establishing a tangible connection between synthetic augmentation and real-world robustness. Our findings not only substantiate the effectiveness of synthetic perturbations in improving model robustness, but also provide valuable insights for researchers and practitioners in developing more robust and reliable object detection models tailored for real-world applications.
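To illustrate the ablation over synthetic perturbation levels described above, the sketch below sweeps a set of brightness factors, retrains at each level, and keeps the level that scores best on naturally perturbed validation data. The retraining and evaluation callables are placeholders for the detector-specific COCO/ExDark training and mAP code, which is not reproduced here.

```python
import augly.image as imaugs

def sweep_augmentation_levels(train_images, levels, retrain_fn, eval_fn):
    """Hedged sketch of the perturbation-level ablation (all callables are placeholders).

    retrain_fn(images) -> model and eval_fn(model) -> score stand in for the
    detector-specific training loop and evaluation on naturally perturbed data.
    """
    results = {}
    for level in levels:
        augmented = [imaugs.brightness(img, factor=level) for img in train_images]
        model = retrain_fn(train_images + augmented)   # augment at this level, then retrain
        results[level] = eval_fn(model)                # e.g. mAP on ExDark validation images
    best = max(results, key=results.get)
    return best, results
```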