List of works
Book chapter
Adversarial Machine Learning: A New Threat Paradigm for Next-generation Wireless Communications
Published 2023
AI, Machine Learning and Deep Learning, 35 - 52
The application of machine learning (ML) in the wireless domain, commonly referred to as radio frequency machine learning (RFML), has grown rapidly in recent years to solve various problems in wireless communications, networking, and signal processing. Machine learning has found applications in wireless security, including user equipment (UE) authentication, intrusion detection, and the detection of conventional attacks such as jamming and eavesdropping. On the other hand, wireless systems are vulnerable to machine-learning-based security and privacy attack vectors of the kind recently studied in other modalities such as computer vision and natural language processing (NLP). Adversarial machine learning has emerged as a major threat to wireless systems, given its potential to disrupt the machine learning process. Because the wireless medium is open and shared, it provides new means for adversaries to manipulate both the training and testing processes of machine learning algorithms. To that end, novel techniques are needed to safeguard the wireless attack surface against adversarial machine learning.
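The kind of test-time manipulation described above can be illustrated with a minimal fast-gradient-sign (FGSM) evasion sketch against a toy modulation classifier operating on raw IQ samples. The model architecture, input shapes, and perturbation budget below are illustrative assumptions, not material from the chapter.

```python
# Hedged sketch: FGSM-style perturbation of received IQ samples to fool a
# hypothetical RFML modulation classifier. Everything here is illustrative.
import torch
import torch.nn as nn

class TinyModClassifier(nn.Module):
    """Toy stand-in for a modulation classifier over a 2 x N block of IQ samples."""
    def __init__(self, n_classes=4, n_samples=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * n_samples, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def fgsm_perturb(model, iq, label, epsilon=0.01):
    """Craft a small L-infinity-bounded perturbation that raises the classifier's loss."""
    iq = iq.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(iq), label)
    loss.backward()
    return (iq + epsilon * iq.grad.sign()).detach()

model = TinyModClassifier()
iq = torch.randn(1, 2, 128)   # one received frame of IQ samples (synthetic)
label = torch.tensor([2])     # assumed true modulation class
adv_iq = fgsm_perturb(model, iq, label)
print(model(iq).argmax(1).item(), model(adv_iq).argmax(1).item())
```

In the wireless setting such a perturbation would have to be delivered over the air, so a small epsilon corresponds to a low-power interfering signal; this is one way the open, shared medium becomes an attack surface.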
Book chapter
Toward Safe Decision-Making via Uncertainty Quantification in Machine Learning
Published 11/02/2021
Systems Engineering and Artificial Intelligence, 379 - 399
The automation of safety-critical systems is becoming increasingly prevalent as machine learning approaches become more sophisticated and capable. However, approaches that are safe to use in critical systems must account for uncertainty, whereas most real-world applications currently use deterministic machine learning techniques that cannot incorporate it. In order to place systems in critical infrastructure, we must be able to understand and interpret how machines make decisions, both so that they can support human decision-making and so that they have the potential to operate autonomously. As such, we highlight the importance of incorporating uncertainty into the decision-making process and present the advantages of Bayesian decision theory. We showcase an example of classifying vehicles from their acoustic recordings, where certain classes have significantly higher threat levels. We show how carefully adopting the Bayesian paradigm not only leads to safer decisions, but also provides a clear distinction between the roles of the machine learning expert and the domain expert.
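The core of the Bayesian decision-theoretic recipe is a small expected-loss calculation, sketched below. The class names, posterior probabilities, and loss values are illustrative assumptions, not figures from the chapter.

```python
# Hedged sketch: Bayes-risk-minimizing decision with an asymmetric loss matrix,
# in the spirit of the acoustic vehicle-classification example. All numbers
# are illustrative.
import numpy as np

classes = ["civilian car", "truck", "high-threat vehicle"]

# Posterior p(class | acoustic recording), e.g. from a Bayesian classifier.
posterior = np.array([0.70, 0.20, 0.10])

# loss[i, j] = cost of declaring class j when the true class is i.
# Missing the high-threat class (last row) is penalized far more heavily
# than raising a false alarm.
loss = np.array([
    [0.0,  1.0,  1.0],
    [1.0,  0.0,  1.0],
    [50.0, 50.0, 0.0],
])

expected_loss = posterior @ loss          # risk of each possible declaration
decision = int(np.argmin(expected_loss))  # Bayes-optimal action

print(dict(zip(classes, expected_loss.round(2))))
print("decision:", classes[decision])
```

Even though the high-threat class has the lowest posterior probability in this toy setup, the asymmetric loss makes declaring it the risk-minimizing action. The calculation also mirrors the division of roles the chapter emphasizes, presumably with the machine learning expert responsible for the posterior and the domain expert for the loss matrix.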
Book chapter
Re-orienting toward the Science of the Artificial: Engineering AI Systems
Published 11/02/2021
Systems Engineering and Artificial Intelligence, 149 - 174
AI-enabled systems are becoming more pervasive, yet systems engineering techniques still face limitations in how AI systems are being deployed. This chapter discusses the implications of hierarchical component composition and the importance of data in bounding AI system performance and stability. Issues of interoperability and uncertainty are introduced, and how they can impact the emergent behaviors of AI systems is illustrated through the presentation of a natural language processing (NLP) system used to provide similarity comparisons of organizational corpora. Within the bounds of this discussion, we examine how concepts from design science can introduce additional rigor to the engineering of complex AI systems.
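As a concrete reference point for the corpus-similarity illustration, the sketch below compares document collections with TF-IDF vectors and cosine similarity. The sample texts and the choice of TF-IDF are assumptions made here for illustration; the chapter does not specify its NLP system at this level of detail.

```python
# Hedged sketch: similarity comparison of small organizational corpora using
# TF-IDF and cosine similarity. Texts and method choice are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Quarterly engineering review of sensor integration efforts.",    # corpus A
    "Engineering review covering integration of new sensor suites.",  # corpus B
    "Minutes of the annual holiday party planning committee.",        # corpus C
]

tfidf = TfidfVectorizer().fit_transform(docs)
print(cosine_similarity(tfidf[0], tfidf[1]))  # related corpora: higher score
print(cosine_similarity(tfidf[0], tfidf[2]))  # unrelated corpora: lower score
```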
Book chapter
Trinity: Trust, Resilience and Interpretability of Machine Learning Models
Published 09/22/2021
Game Theory and Machine Learning for Cyber Security, 317 - 333
Despite the remarkable strides over the last decade in the performance of machine learning techniques, their applications are typically limited to non-adversarial, benign environments. The use of deep learning in applications such as biometric recognition and intrusion detection requires these models to operate in adversarial environments. However, overwhelming empirical studies and theoretical results have shown that these methods are extremely fragile and susceptible to adversarial attacks. The rationale for why these methods make the decisions they do is also notoriously difficult to interpret; understanding such rationale may be crucial for the aforementioned applications. In this chapter, we discuss the connections between these related challenges and describe a novel integrated approach, Trinity (Trust, Resilience and INterpretabilITY), for analyzing these models.
Book chapter
Computational Intelligence in Uncertainty Quantification for Learning Control and Differential Games
Published 2021
Handbook of Reinforcement Learning and Control, 385 - 418
Multi-dimensional uncertainties often modulate modern system dynamics in a complicated fashion. They pose challenges for real-time control, given the significant computational load needed to evaluate them in real-time decision processes. This chapter describes the use of computationally efficient uncertainty evaluation methods for adaptive optimal control, including learning control and differential games. Two uncertainty evaluation methods are described: the multivariate probabilistic collocation method (MPCM) and its extension, the MPCM-OFFD, which integrates the MPCM with the orthogonal fractional factorial design (OFFD) to break the curse of dimensionality. These scalable uncertainty evaluation methods are then developed for reinforcement learning (RL)-based adaptive optimal control. Stochastic differential games, including two-player zero-sum and multi-player nonzero-sum games, are formulated and investigated, and their Nash equilibrium solutions are found in real time using MPCM-based on-policy and off-policy RL methods. Real-world applications to broadband, long-distance aerial networking and strategic air traffic management demonstrate the practical use of MPCM- and MPCM-OFFD-based learning control for uncertain systems.
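The idea behind collocation-based uncertainty evaluation can be seen in a small example: estimate the mean of a system output under random parameters from only a handful of carefully chosen evaluation points. The sketch below uses a plain tensor grid of Gauss-Hermite collocation points for two independent standard-normal parameters; the output function and the grid construction are illustrative assumptions, not the authors' MPCM-OFFD design.

```python
# Hedged sketch: collocation-style mean estimation vs. Monte Carlo.
# f is an illustrative stand-in for a system output modulated by two
# uncertain parameters; this is a plain Gauss-Hermite tensor grid,
# not the MPCM-OFFD reduction described in the chapter.
import numpy as np

def f(x1, x2):
    return np.sin(x1) + 0.5 * x1 * x2 + x2 ** 2

# 3 Gauss-Hermite nodes per dimension -> only 9 model evaluations.
nodes, weights = np.polynomial.hermite.hermgauss(3)
x = np.sqrt(2.0) * nodes        # rescale nodes for standard-normal inputs
w = weights / np.sqrt(np.pi)    # normalized quadrature weights

mean_colloc = sum(w[i] * w[j] * f(x[i], x[j])
                  for i in range(3) for j in range(3))

# Brute-force Monte Carlo reference with 200,000 model evaluations.
rng = np.random.default_rng(0)
s = rng.standard_normal((200_000, 2))
mean_mc = f(s[:, 0], s[:, 1]).mean()

print(mean_colloc, mean_mc)
```

The point of MPCM-style methods is exactly this trade: a small, structured set of simulation runs replaces a large sampling budget, which is what makes uncertainty evaluation feasible inside a real-time learning-control loop, and the OFFD extension further limits the number of required points as the dimension grows.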
Book chapter
Homology as an Adversarial Attack Indicator
Published 2021
Adversary-Aware Learning Techniques and Trends in Cybersecurity, 167 - 196
In this chapter, we show how the extraction of classical topological information can be automated and combined with machine learning to lessen the threat of an adversarial attack. The chapter is a proof of concept that lays the groundwork for future research in this area.
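One very coarse topological indicator of this kind can be sketched as follows: track Betti-0 (the number of connected components of an epsilon-neighborhood graph over feature points) and flag inputs whose features break the expected connectivity. The feature source, threshold, and decision rule below are illustrative assumptions; the chapter's actual homology-based pipeline is not reproduced here.

```python
# Hedged sketch: Betti-0 of an epsilon-neighborhood graph as a crude
# adversarial-input indicator. All data and thresholds are synthetic.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def betti_0(points, eps):
    """Number of connected components of the eps-neighborhood graph."""
    dists = squareform(pdist(points))
    adjacency = csr_matrix(dists < eps)
    n_components, _ = connected_components(adjacency, directed=False)
    return n_components

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 0.1, size=(50, 8))      # tight cluster of clean features
outliers = rng.uniform(2.0, 6.0, size=(5, 8))   # scattered, off-manifold features
suspect = np.vstack([clean, outliers])

print(betti_0(clean, eps=0.6), betti_0(suspect, eps=0.6))
# A jump in Betti-0 relative to the clean baseline flags the suspect batch.
```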