In Virtual Reality (VR), adversarial attacks remain a significant security threat. Most deep learning-based methods for physical and digital adversarial attacks focus on maximizing attack performance, crafting adversarial examples with large printable distortions that are easy for human observers to identify. Because attackers rarely constrain the naturalness or visual comfort of the generated attack image, the resulting attacks are noticeable and unnatural. To address this challenge, we propose a framework that incorporates style transfer to craft adversarial inputs in natural styles, exhibiting minimal detectability and maximal natural appearance while maintaining superior attack capability.
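The abstract gives no implementation details, but the underlying idea of optimizing an adversarial example under a style constraint can be sketched roughly. The following is a minimal, hypothetical illustration in PyTorch, assuming a pretrained ImageNet classifier as the victim model and VGG-based Gram-matrix features for the style term; the model choices, loss weighting, and the craft function are assumptions for illustration, not the authors' actual diffusion-based method.

    # Hypothetical sketch: adversarial optimization with a style-transfer constraint.
    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet50, vgg19

    def gram_matrix(feat):
        # Style representation: normalized channel-wise feature correlations.
        b, c, h, w = feat.shape
        f = feat.view(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    classifier = resnet50(weights="IMAGENET1K_V2").eval()   # assumed victim model
    style_net = vgg19(weights="IMAGENET1K_V1").features[:21].eval()  # style features

    def craft(image, style_ref, target, steps=200, lr=0.01, w_style=1e4):
        # image, style_ref: (1, 3, H, W) tensors in [0, 1]; target: label tensor.
        adv = image.clone().requires_grad_(True)
        opt = torch.optim.Adam([adv], lr=lr)
        target_gram = gram_matrix(style_net(style_ref)).detach()
        for _ in range(steps):
            opt.zero_grad()
            # Adversarial term: push the classifier toward the target label.
            adv_loss = F.cross_entropy(classifier(adv), target)
            # Style term: keep the image close to the reference's natural style.
            style_loss = F.mse_loss(gram_matrix(style_net(adv)), target_gram)
            (adv_loss + w_style * style_loss).backward()
            opt.step()
            adv.data.clamp_(0, 1)  # keep a valid image
        return adv.detach()

The style term plays the role of the naturalness constraint described in the abstract: instead of bounding a pixel-norm perturbation, the optimization steers the adversarial image toward a natural reference style, which is what keeps the result inconspicuous while the adversarial term preserves attack strength.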
Details
Title
Diffusion Attack
Publication Details
2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 975–976
Resource Type
Conference proceeding
Conference
IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Orlando, Florida, USA, March 16–21, 2024
Publisher
Institute of Electrical and Electronics Engineers (IEEE)