Towards Interpretable Adversarial Examples via Sparse Adversarial Attack
Fudong Lin, Jiadong Lou, Hao Wang, Brian Jalaian and Xu Yuan
Machine Learning and Knowledge Discovery in Databases. Research Track, pp. 92–110
Lecture Notes in Computer Science
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2025) (Porto, Portugal, 09/15/2025–09/19/2025)
Sparse attacks optimize the magnitude of adversarial perturbations for fooling deep neural networks (DNNs) while perturbing only a few pixels (i.e., under the $l_0$ constraint), making them well suited for interpreting the vulnerability of DNNs. However, existing solutions fail to yield interpretable adversarial examples due to their poor sparsity. Worse still, they often suffer from heavy computational overhead, poor transferability, and weak attack strength. In this paper, we develop a sparse attack for understanding the vulnerability of DNNs by minimizing the magnitude of initial perturbations under the $l_0$ constraint, overcoming these drawbacks while achieving a fast, transferable, and strong attack on DNNs. In particular, a novel and theoretically sound parameterization technique is introduced to approximate the NP-hard $l_0$ optimization problem, making it computationally feasible to optimize sparse perturbations directly. In addition, a novel loss function is designed to augment the initial perturbations by simultaneously maximizing the adversarial property and minimizing the number of perturbed pixels. Extensive experiments demonstrate that our approach, with theoretical performance guarantees, outperforms state-of-the-art sparse attacks in terms of computational overhead, transferability, and attack strength, and is expected to serve as a benchmark for evaluating the robustness of DNNs. Moreover, theoretical and empirical results validate that our approach yields sparser adversarial examples, enabling us to identify two categories of noise, i.e., “obscuring noise” and “leading noise”, which help interpret how adversarial perturbations mislead classifiers into incorrect predictions. Our code is available at https://github.com/fudong03/SparseAttack.
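For readers unfamiliar with $l_0$-constrained attacks, the sketch below illustrates one common way to make such an attack differentiable: a learnable per-pixel gate (sigmoid mask) is optimized jointly with the perturbation values, under a loss that combines an adversarial term with a sparsity penalty. This is only a generic, hedged illustration; it is not the paper's parameterization or loss function, and the model, hyperparameters, and the `attack_sparse` helper are hypothetical. The authors' actual implementation is in the linked repository.

```python
# Illustrative sketch of a sparse (l0-style) adversarial attack via a soft,
# learnable pixel mask. NOT the paper's method; all names and hyperparameters
# here are assumptions for demonstration purposes only.
import torch
import torch.nn.functional as F

def attack_sparse(model, x, y, steps=200, lr=0.1, lam=1e-2, eps=8 / 255):
    """Craft sparse perturbations for a batch of images x with true labels y."""
    model.eval()
    # Perturbation values (one per input element) and per-pixel gate logits
    # (shared across channels), both optimized directly.
    delta = torch.zeros_like(x, requires_grad=True)
    gate_logits = torch.full_like(x[:, :1], -3.0, requires_grad=True)
    opt = torch.optim.Adam([delta, gate_logits], lr=lr)

    for _ in range(steps):
        mask = torch.sigmoid(gate_logits)                 # soft relaxation of the l0 mask
        x_adv = torch.clamp(x + mask * eps * torch.tanh(delta), 0.0, 1.0)
        logits = model(x_adv)
        adv_loss = -F.cross_entropy(logits, y)            # untargeted: push away from y
        sparsity = mask.mean()                            # proxy for number of perturbed pixels
        loss = adv_loss + lam * sparsity
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Harden the mask: keep only strongly gated pixels (approximate l0 projection).
    with torch.no_grad():
        hard_mask = (torch.sigmoid(gate_logits) > 0.5).float()
        x_adv = torch.clamp(x + hard_mask * eps * torch.tanh(delta), 0.0, 1.0)
    return x_adv
```

In this kind of relaxation, the sparsity weight (here `lam`) trades off attack strength against the number of perturbed pixels; the paper instead proposes a dedicated parameterization and loss with theoretical guarantees for this trade-off.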
Files and links
PDF: Towards Interpretable Adversarial Examples via Sparse Adversarial Attack (10.17 MB)
Supplemental: Official GitHub repository of "Towards Interpretable Adversarial Examples via Sparse Adversarial Attack", accepted at ECML PKDD 2025
Details
Resource Type
Conference proceeding