Although deep neural networks (DNNs) have achieved great success in computer vision and pattern recognition, studies show that they are vulnerable to adversarial examples. Adversarial perturbations, usually imperceptible to humans, can be added to benign images to form adversarial examples. Many gradient-based methods have been proposed to compute adversarial perturbations; however, these methods compute the perturbations without initialization, and a proper initialization is critical to the robustness of the resulting adversarial examples. To this end, we propose several adversarial perturbation initialization (API) methods for generating robust adversarial examples. Our work comprehensively analyzes the effect of adversarial perturbation initialization on several white-box attack methods. We conduct experiments on three benchmark datasets: MNIST, CIFAR-10, and ImageNet. Experimental results show that API improves the attack success rates of adversarial examples: the average recognition accuracy of the target model is reduced by about 3.4% when API is used to generate adversarial examples.
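The abstract does not specify the API methods themselves; as a minimal illustration of what initializing the perturbation before a gradient-based attack means, the sketch below runs a PGD-style iterative attack on a toy linear classifier, once from a zero start and once from a random start inside the perturbation budget. The model, step sizes, and all names are hypothetical and are not taken from the paper.

```python
import numpy as np

# Toy 10-class linear classifier: logits = W @ x. The cross-entropy
# gradient w.r.t. the input is analytic, so no autodiff is needed.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))        # hypothetical model weights
x = rng.uniform(0.0, 1.0, size=784)   # a benign "image" in [0, 1]
y = 3                                 # its true label
eps = 0.1                             # L_inf perturbation budget

def grad_loss(x_adv):
    """Gradient of the cross-entropy loss w.r.t. the input."""
    logits = W @ x_adv
    p = np.exp(logits - logits.max())
    p /= p.sum()
    p[y] -= 1.0                        # dL/dlogits for cross-entropy
    return W.T @ p

def pgd(x, init, steps=10, alpha=0.02):
    """Projected gradient ascent on the loss, starting from x + init."""
    delta = init.copy()
    for _ in range(steps):
        delta += alpha * np.sign(grad_loss(x + delta))
        delta = np.clip(delta, -eps, eps)          # project to the L_inf ball
        delta = np.clip(x + delta, 0.0, 1.0) - x   # keep the image valid
    return x + delta

# Zero initialization (the usual default) vs. a random initialization
# drawn uniformly from the budget, as one plausible API-style choice.
adv_zero = pgd(x, np.zeros_like(x))
adv_rand = pgd(x, rng.uniform(-eps, eps, size=x.shape))
```

Both runs return adversarial examples that stay within the epsilon-ball around the benign input; the paper's claim is that the choice of starting point affects how often such attacks succeed.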