Sparse adversarial attack based on ℓq-norm for fooling the face anti-spoofing neural networks
Linxi Yang, Jiezhi Yang, Mingjie Peng, Jiatian Pi, Zhiyou Wu, Xunyi Zhou, Jueyou Li
Abstract

Neural networks are vulnerable to various adversarial perturbations added to their inputs. Highly sparse adversarial perturbations are difficult to detect, which makes them especially dangerous to network security. Previous research has shown that perturbations constrained by the ℓ0-norm are highly sparse, but the resulting optimization problem is difficult to solve. We use the ℓq-norm to approximate the ℓ0-norm and propose a new white-box algorithm that generates adversarial examples by minimizing the ℓq distance to the original image. We also extend the adversarial attack to the face anti-spoofing task in the field of face recognition security, which enables us to generate sparse and imperceptible perturbations of facial images. To increase the diversity of the data set, we build a new data set of real and spoofed facial images produced by a variety of recent spoofing methods. Extensive experiments show that our proposed method effectively generates sparse perturbations and successfully misleads classifiers in both multi-classification and face anti-spoofing tasks.
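The abstract describes the approach only at a high level. As a rough illustration of the general idea, the sketch below penalizes a smoothed ℓq quasi-norm of the perturbation to encourage sparsity while a classification term pushes the prediction away from the true label. It assumes a PyTorch classifier with inputs in [0, 1]; the function name, loss weighting, and step sizes are hypothetical placeholders, and this is not the authors' published algorithm.

```python
# Illustrative sketch only: a generic white-box attack that encourages sparse
# perturbations by penalizing an l_q quasi-norm of the perturbation.
# Model, loss weights, and step sizes are hypothetical placeholders.
import torch
import torch.nn.functional as F

def lq_sparse_attack(model, x, true_label, q=0.5, lam=1e-2,
                     steps=200, lr=0.01, eps=1e-8):
    """Search for an additive perturbation delta such that model(x + delta)
    no longer predicts true_label, while keeping the l_q quasi-norm of delta
    small (which promotes sparsity for q < 1)."""
    delta = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        logits = model(torch.clamp(x + delta, 0.0, 1.0))
        # Maximizing the loss on the true label pushes the prediction away from it.
        cls_loss = -F.cross_entropy(logits, true_label)
        # Smoothed l_q penalty: sum_i |delta_i|^q (eps avoids a non-differentiable point at zero).
        lq_penalty = (delta.abs() + eps).pow(q).sum()
        loss = cls_loss + lam * lq_penalty
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return torch.clamp(x + delta.detach(), 0.0, 1.0)
```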

© 2021 SPIE and IS&T 1017-9909/2021/$28.00
Linxi Yang, Jiezhi Yang, Mingjie Peng, Jiatian Pi, Zhiyou Wu, Xunyi Zhou, and Jueyou Li, "Sparse adversarial attack based on ℓq-norm for fooling the face anti-spoofing neural networks," Journal of Electronic Imaging 30(2), 023023 (21 April 2021). https://doi.org/10.1117/1.JEI.30.2.023023
Received: 26 August 2020; Accepted: 18 March 2021; Published: 21 April 2021
KEYWORDS: Neural networks, Computer security, Facial recognition systems, Detection and tracking algorithms, RGB color model, Data modeling, Eye
