Delving deep into adversarial perturbations initialization on adversarial examples generation
Cong Hu, Peng Wan, Xiao-Jun Wu, He-Feng Yin
Abstract

Though deep neural networks (DNNs) have achieved great success in the computer vision and pattern recognition community, studies show that they are vulnerable to adversarial examples. Adversarial perturbations, usually imperceptible to humans, can be added to benign images to form adversarial examples. Many gradient-based methods have been proposed to compute adversarial perturbations; however, these methods compute the perturbations without any initialization. A proper initialization of the perturbations is critical to the robustness of adversarial examples. To this end, we propose several adversarial perturbation initialization (API) methods for generating robust adversarial examples. Our work comprehensively analyzes the effect of adversarial perturbation initialization on several white-box attack methods. We conduct experiments on three benchmark datasets: MNIST, CIFAR-10, and ImageNet. Experimental results show that API improves the attack success rates of adversarial examples, reducing the average recognition accuracy of the target model by about 3.4%.
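The abstract does not spell out the proposed API schemes, but the general idea of seeding a gradient-based attack with a non-zero perturbation can be illustrated with a short sketch. Below is a minimal PyTorch example built around a hypothetical `pgd_with_init` helper: it prepends one common initialization choice (uniform random noise inside the L-infinity ball) to a standard projected gradient descent (PGD) attack. The model, epsilon, and step sizes are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: the paper's specific API methods are not given in the
# abstract, so uniform random initialization is shown as one plausible
# instance of initializing the perturbation before a gradient-based attack.
import torch
import torch.nn as nn

def pgd_with_init(model, x, y, eps=8/255, alpha=2/255, steps=10, random_init=True):
    """L-inf PGD attack; random_init seeds the perturbation in [-eps, eps]."""
    delta = torch.zeros_like(x)
    if random_init:
        # Perturbation initialization: start inside the L-inf ball
        # instead of at zero (an API-style choice, assumed for illustration).
        delta.uniform_(-eps, eps)
    delta.requires_grad_(True)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        loss = loss_fn(model(torch.clamp(x + delta, 0.0, 1.0)), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascend the classification loss
            delta.clamp_(-eps, eps)             # project back into the L-inf ball
            delta.grad.zero_()
    return torch.clamp(x + delta, 0.0, 1.0).detach()

# Toy usage with an MNIST-sized linear model (illustrative only).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)       # batch of "images" in [0, 1]
y = torch.randint(0, 10, (4,))     # arbitrary labels
x_adv = pgd_with_init(model, x, y)
print((x_adv - x).abs().max())     # perturbation stays within eps
```

Setting `random_init=False` recovers the zero-initialized attack that the abstract describes as the common default for gradient-based methods.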

© 2022 SPIE and IS&T
Cong Hu, Peng Wan, Xiao-Jun Wu, and He-Feng Yin "Delving deep into adversarial perturbations initialization on adversarial examples generation," Journal of Electronic Imaging 31(6), 063004 (4 November 2022). https://doi.org/10.1117/1.JEI.31.6.063004
Received: 25 May 2022; Accepted: 14 October 2022; Published: 4 November 2022