List of works
Journal article
Neurosymbolic AI for network intrusion detection systems: A survey
First online publication 08/26/2025
Journal of information security and applications, 94, 104205
Current data-driven AI approaches in Network Intrusion Detection System (NIDS) face challenges related to high resource consumption, high computational demands, and limited interpretability. Moreover, they often struggle to detect unknown and rapidly evolving cyber threats. This survey explores the integration of Neurosymbolic AI (NeSy AI) into NIDS, combining the data-driven capabilities of Deep Learning (DL) with the structured reasoning of symbolic AI to address emerging cybersecurity threats. The integration of NeSy AI into NIDS demonstrates significant improvements in both the detection and interpretation of complex network threats by exploiting the advanced pattern recognition typical of neural processing and the interpretive capabilities of symbolic reasoning. In this survey, we categorize the analyzed NeSy AI approaches applied to NIDS into logic-based and graph-based representations. Logic-based approaches emphasize symbolic reasoning and rule-based inference. On the other hand, graph-based representations capture the relational and structural aspects of network traffic. We examine various NeSy systems applied to NIDS, highlighting their potential and main challenges. Furthermore, we discuss the most relevant issues in the field of NIDS and the contributions NeSy can offer. We present a comparison between the main XAI techniques applied to NIDS in the literature and the increased explainability offered by NeSy systems.
Journal article
Decentralized Bayesian learning with Metropolis-adjusted Hamiltonian Monte Carlo
Published 08/2023
Machine learning, 112, 8, 2791 - 2819
Federated learning performed by a decentralized network of agents is becoming increasingly important with the prevalence of embedded software on autonomous devices. Bayesian approaches to learning benefit from offering more information as to the uncertainty of a random quantity, and Langevin and Hamiltonian methods are effective at realizing sampling from an uncertain distribution with large parameter dimensions. Such methods have only recently appeared in the decentralized setting. Existing approaches either rely exclusively on stochastic gradient Langevin and Hamiltonian Monte Carlo, which requires a diminishing stepsize to asymptotically sample from the posterior and is known in practice to characterize uncertainty less faithfully than constant-stepsize methods with a Metropolis adjustment, or they assume strong convexity properties of the potential function. We present the first approach to incorporating constant-stepsize Metropolis-adjusted HMC in the decentralized sampling framework, show theoretical guarantees for consensus and probability distance to the posterior stationary distribution, and demonstrate its effectiveness numerically on standard real-world problems, including decentralized learning of neural networks, which is known to be highly non-convex.
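The Metropolis adjustment at the heart of this line of work can be illustrated in a plain single-chain setting: a leapfrog proposal is accepted or rejected based on the change in total Hamiltonian energy, which is what permits a constant stepsize. The sketch below samples a standard normal; it is a minimal illustration of Metropolis-adjusted HMC, not the paper's decentralized algorithm, and the stepsize and trajectory length are arbitrary choices.

```python
import numpy as np

def hmc_step(x, U, grad_U, eps=0.2, L=10, rng=None):
    """One Metropolis-adjusted HMC step (illustrative single-chain version)."""
    if rng is None:
        rng = np.random.default_rng()
    p = rng.standard_normal(x.shape)          # resample momentum
    x_new, p_new = x.copy(), p.copy()
    # Leapfrog integration of the Hamiltonian dynamics.
    p_new -= 0.5 * eps * grad_U(x_new)
    for _ in range(L - 1):
        x_new += eps * p_new
        p_new -= eps * grad_U(x_new)
    x_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(x_new)
    # Metropolis correction: accept with probability exp(H_old - H_new).
    h_old = U(x) + 0.5 * (p @ p)
    h_new = U(x_new) + 0.5 * (p_new @ p_new)
    return x_new if rng.random() < np.exp(h_old - h_new) else x

# Sample from a standard normal: potential U(x) = x^2 / 2, gradient x.
rng = np.random.default_rng(0)
x, samples = np.zeros(1), []
for i in range(3000):
    x = hmc_step(x, lambda z: 0.5 * (z @ z), lambda z: z, rng=rng)
    if i >= 500:                              # discard burn-in
        samples.append(x[0])
samples = np.array(samples)
print(round(samples.mean(), 2), round(samples.var(), 2))
```

The accept/reject step exactly corrects the leapfrog discretization error, which is why a constant stepsize suffices, in contrast to stochastic gradient samplers.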
Journal article
Reducing classifier overconfidence against adversaries through graph algorithms
Published 07/01/2023
Machine learning, 112, 7, 2619 - 2651
In this work we show that deep learning classifiers tend to become overconfident in their answers under adversarial attacks, even when the classifier is optimized to survive such attacks. Our work draws upon stochastic geometry and graph algorithms to propose a general framework that replaces the last fully connected layer and softmax output. This framework (a) can be applied to any classifier and (b) significantly reduces the classifier's overconfidence in its output without much impact on its accuracy compared to the original adversarially trained classifiers. Its relative effectiveness increases as the attacker becomes more powerful. Our use of graph algorithms in adversarial learning is new and of independent interest. Finally, we show the advantages of this last-layer softmax replacement on image tasks under common adversarial attacks.
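The general idea of deriving confidence from neighborhood structure rather than a softmax can be given a toy flavor with a k-nearest-neighbor vote, where reported confidence is capped by label agreement among neighbors. This is only a hypothetical nearest-neighbor sketch on synthetic data, not the paper's stochastic-geometry framework.

```python
import numpy as np

# Toy neighborhood-based confidence: score a test point by label agreement
# among its k nearest training points instead of a softmax. Synthetic
# two-cluster data; purely illustrative.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(4.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def knn_confidence(query, X, y, k=7):
    idx = np.argsort(np.linalg.norm(X - query, axis=1))[:k]
    votes = np.bincount(y[idx], minlength=2)
    label = int(votes.argmax())
    return label, votes[label] / k  # confidence capped by neighborhood agreement

print(knn_confidence(np.array([-1.0, -1.0]), X, y))  # deep inside class 0
print(knn_confidence(np.array([2.0, 2.0]), X, y))    # between the clusters
```

Unlike a softmax, this score cannot exceed the empirical agreement in the local neighborhood, which is one intuition for why structure-based outputs resist overconfidence.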
Journal article
Enhancing Resilience in Mobile Edge Computing under Processing Uncertainty
Published 03/2023
IEEE journal on selected areas in communications, 41, 3, 659 - 674
Task offloading is a powerful tool in Mobile Edge Computing (MEC). However, in many practical scenarios, the number of required processing cycles of a task is unknown beforehand and revealed only upon its completion. This poses a serious challenge in making offloading decisions, as the number of processing cycles is a key parameter in determining whether a task's deadline can be met. To cope with such processing uncertainty, we formulate a Chance-Constrained Program (CCP) that offers probabilistic guarantees on task deadlines. The goal is to minimize energy consumption for the users while meeting the probabilistic task deadlines. We assume that only the means and variances of the random processing cycles are available, without any knowledge of distribution functions. We employ a powerful tool called Exact Conic Reformulation (ECR) that reformulates probabilistic deadline constraints into deterministic ones. Subsequently, we design an online solution called EPD (Energy-minimized solution with Probabilistic Deadline guarantee) for periodic scheduling and schedule updates during run-time. We show that EPD can address the processing uncertainty with probabilistic deadline guarantees while minimizing the users' energy consumption.
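When only the mean and variance of the processing cycles are known, a distribution-free deterministic sufficient condition for a probabilistic deadline can be obtained from Cantelli's one-sided inequality: P(cycles > d) <= eps holds whenever mu + sqrt((1 - eps) / eps) * sigma <= d. The sketch below illustrates this conservative style of reformulation; it is not the paper's Exact Conic Reformulation, which is exact rather than conservative, and the Gaussian cycle model is a hypothetical choice for the sanity check.

```python
import math
import random

def cantelli_deadline_ok(mu, var, deadline, eps):
    """Distribution-free sufficient condition for P(cycles > deadline) <= eps,
    using only the mean and variance (Cantelli's inequality). Conservative
    illustration only -- not the paper's Exact Conic Reformulation."""
    return mu + math.sqrt((1 - eps) / eps * var) <= deadline

# Monte Carlo sanity check with hypothetical Gaussian processing cycles.
random.seed(1)
mu, sigma, eps, deadline = 100.0, 10.0, 0.05, 145.0
ok = cantelli_deadline_ok(mu, sigma ** 2, deadline, eps)
draws = [random.gauss(mu, sigma) for _ in range(100_000)]
violation_rate = sum(d > deadline for d in draws) / len(draws)
print(ok, violation_rate)  # condition holds; empirical violations stay below eps
```

Because Cantelli holds for every distribution with the given moments, the deterministic constraint is safe but loose; the appeal of an exact reformulation is precisely that it removes this conservatism.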
Journal article
Maximizing Energy Efficiency with Channel Uncertainty under Mutual Interference
Published 10/2022
IEEE transactions on wireless communications, 21, 10, 8476 - 8488
We study the problem of channel uncertainty on wireless transmissions from different users with mutual interference. Specifically, the channel gains from the transmitters to the receivers are available only through their mean and covariance rather than complete distributions. Our goal is to maximize the energy efficiency among all transmitter-receiver pairs while guaranteeing their capacity requirements. For this problem, we employ chance-constrained programming (CCP), which allows occasional violation of the target capacity threshold as long as the probability of such violation is below a small tolerable constant (risk level). We propose a solution based on a novel reformulation technique that converts the original CCP into a deterministic optimization problem without relaxation errors. The deterministic optimization problem is then approximated as a Geometric Program (GP) using tight polynomial approximations, which can be solved optimally. We prove that our proposed solution achieves near-optimal performance with polynomial time complexity.
Journal article
Minimizing AoI in a 5G-Based IoT Network under Varying Channel Conditions
Published 10/2021
IEEE internet of things journal, 8, 19, 14543 - 14558
The Age of Information (AoI) is a key metric to measure the freshness of information for IoT applications. Most of the existing analytical models for AoI are overly idealistic and do not capture state-of-the-art transmission technologies such as 5G, nor channel dynamics in both frequency and time domains. In this article, we present Kronos, a real-time 5G-compliant scheduler that minimizes AoI for IoT data collection. Kronos is designed to cope with highly dynamic channel conditions. Its main function is to perform Resource Block (RB) allocation and to select the modulation and coding scheme for each source node based on channel conditions, with the objective of minimizing long-term AoI. To meet the stringent real-time requirement for 5G, we develop a GPU-based implementation of Kronos on commercial off-the-shelf Nvidia GPUs. Through extensive experimentation, we show that Kronos can find near-optimal solutions on a submillisecond time scale. To the best of our knowledge, this is the first real-time AoI scheduler that is 5G compliant.
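The AoI bookkeeping that such a scheduler optimizes is simple to state in discrete time: a source's age resets on a slot with a successful fresh delivery and grows by one slot otherwise. A minimal illustration of this accounting (not Kronos itself, which jointly selects RBs and modulation/coding schemes):

```python
# Discrete-time AoI accounting: age resets to 1 on a successful fresh
# delivery and otherwise grows by one per slot. Illustrative only.
def average_aoi(deliveries):
    """deliveries: one boolean per slot (True = fresh update delivered)."""
    age, total = 0, 0
    for ok in deliveries:
        age = 1 if ok else age + 1
        total += age
    return total / len(deliveries)

print(average_aoi([True, False, False, True, False]))  # ages 1,2,3,1,2 -> 1.8
```

Even this toy model shows why AoI differs from throughput: spacing deliveries evenly matters, since age accumulates quadratically during long gaps between updates.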
Journal article
Maximize Spectrum Efficiency in Underlay Coexistence With Channel Uncertainty
Published 04/2021
IEEE/ACM transactions on networking, 29, 2, 764 - 778
We consider an underlay coexistence scenario where secondary users (SUs) must keep their interference to the primary users (PUs) under control. However, the channel gains from the PUs to the SUs are uncertain due to a lack of cooperation between the PUs and the SUs. Under this circumstance, it is preferable to allow the interference threshold of each PU to be violated occasionally as long as such violation stays below a small tolerable probability. In this article, we employ Chance-Constrained Programming (CCP) to exploit this idea of occasional interference threshold violation. We assume the uncertain channel gains are known only by their mean and covariance. These quantities are slow-changing and easy to estimate. Our main contribution is to introduce a novel and powerful mathematical tool called Exact Conic Reformulation (ECR), which reformulates the intractable chance constraints into tractable convex constraints. Further, ECR guarantees an equivalent reformulation from linear chance constraints into deterministic conic constraints without the limitations associated with Bernstein Approximation, on which our research community has been fixated for years. Through extensive simulations, we show that our proposed solution offers a significant improvement over existing approaches in terms of performance and ability to handle channel correlations (where Bernstein Approximation is no longer applicable).
Journal article
Uncertain Context: Uncertainty Quantification in Machine Learning
Published 12/22/2019
The AI magazine, 40, 4, 40 - 48
Machine learning and artificial intelligence will be deeply embedded in the intelligent systems humans use to automate tasking, optimize planning, and support decision-making. However, many of these methods can be challenged by dynamic computational contexts, resulting in uncertainty in prediction errors and overall system outputs. Therefore, it will be increasingly important for uncertainties in underlying learning-related computer models to be quantified and communicated. The goal of this article is to provide an accessible overview of computational context and its relationship to uncertainty quantification for machine learning, as well as to provide general suggestions on how to implement uncertainty quantification when doing statistical learning. Specifically, we will discuss the challenge of quantifying uncertainty in predictions using popular machine learning models. We present several sources of uncertainty and their implications on statistical models and subsequent machine learning predictions.
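One common, accessible way to implement the uncertainty quantification the article advocates is a bootstrap ensemble: refit the model on resampled data and report the spread of the resulting predictions. A minimal sketch with a linear model on synthetic data (illustrative only; the article surveys sources of uncertainty well beyond this single technique):

```python
import numpy as np

# Epistemic uncertainty from a bootstrap ensemble of linear fits --
# a minimal illustration of quantifying predictive uncertainty.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 50)
y = 2 * x + 0.1 * rng.standard_normal(50)   # true slope 2, small noise

preds = []
for _ in range(200):
    idx = rng.integers(0, len(x), len(x))   # bootstrap resample with replacement
    slope, intercept = np.polyfit(x[idx], y[idx], 1)
    preds.append(slope * 0.5 + intercept)   # predict at x = 0.5
preds = np.array(preds)

# Predictive mean plus an uncertainty band from the ensemble spread.
print(round(preds.mean(), 2), round(preds.std(), 3))
```

Reporting the ensemble standard deviation alongside the point prediction is exactly the kind of communicated uncertainty the article argues intelligent systems should expose to their users.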
Journal article
Reaping the Benefits of Dynamic TDD in Massive MIMO
Published 03/2019
IEEE systems journal, 13, 1, 117 - 124
Recent advances in massive multiple-input multiple-output (MIMO) communication show that equipping base stations (BSs) with large antenna arrays can significantly improve the performance of cellular networks. Massive MIMO has the potential to mitigate the interference in the network and enhance the average throughput per user. On the other hand, dynamic time-division duplexing (TDD), which allows neighboring cells to operate with different uplink (UL) and downlink (DL) subframe configurations, is a promising enhancement for the conventional static TDD. Compared with static TDD, dynamic TDD can offer more flexibility to accommodate various UL and DL traffic patterns across different cells, but may result in additional interference among cells transmitting in different directions. Based on the unique characteristics and properties of massive MIMO and dynamic TDD, we propose a marriage of these two techniques, i.e., to have massive MIMO address the limitation of dynamic TDD in macrocell (MC) networks. Specifically, we advocate that the benefits of dynamic TDD can be fully extracted in MC networks equipped with massive MIMO, i.e., the BS-to-BS interference can be effectively removed by increasing the number of BS antennas. We provide detailed analysis using random matrix theory to show that the effect of the BS-to-BS interference on UL transmissions vanishes as the number of BS antennas per user grows infinitely large. Last but not least, we validate our analysis by numerical simulations.
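The vanishing BS-to-BS interference rests on favorable propagation: independent Rayleigh channel vectors become nearly orthogonal as the number of antennas grows, so matched filtering suppresses cross-links. A quick numerical check of this general effect (an illustration only, not the paper's random matrix analysis):

```python
import numpy as np

# Favorable propagation: the normalized inner product between independent
# complex Gaussian (Rayleigh) channel vectors shrinks as the antenna count
# M grows, which is why BS-to-BS interference washes out under matched
# filtering. Illustrative Monte Carlo estimate.
rng = np.random.default_rng(0)

def avg_cross_corr(M, trials=500):
    vals = []
    for _ in range(trials):
        h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
        g = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
        vals.append(abs(np.vdot(h, g)) / (np.linalg.norm(h) * np.linalg.norm(g)))
    return sum(vals) / trials

print(round(avg_cross_corr(8), 3), round(avg_cross_corr(512), 3))
```

The average normalized correlation decays on the order of 1/sqrt(M), consistent with the claim that the interference contribution vanishes as the number of BS antennas per user grows large.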
Journal article
On the integration of SIC and MIMO DoF for interference cancellation in wireless networks
Published 10/2018
Wireless networks, 24, 7, 2357 - 2374
Recent advances in MIMO degree-of-freedom (DoF) models have allowed MIMO research to penetrate the networking community. Independent from MIMO, successive interference cancellation (SIC) is a powerful physical layer technique used in multi-user detection. Based on an understanding of the strengths and weaknesses of MIMO DoF and SIC, we propose to have DoF-based interference cancellation (IC) and SIC help each other so that (i) precious DoF resources can be conserved through the use of SIC and (ii) the stringent SINR threshold criteria can be met through the use of DoF-based IC. In this paper, we develop the necessary mathematical models to realize these two ideas in a multi-hop wireless network. Together with scheduling and routing constraints, we develop a cross-layer optimization framework with joint DoF IC and SIC. By applying the framework to a throughput maximization problem, we find that SIC and DoF IC can indeed work in harmony and achieve the two ideas that we propose.