The emergence of adversarial examples has exposed significant vulnerabilities in deep learning models: by introducing subtle perturbations, an attacker can cause a drastic decline in, or even complete failure of, the performance of well-trained models. Recent studies indicate that these perturbations pose not only theoretical threats but also substantial risks in real-world settings. This study focuses on physical adversarial attacks against object detection models and provides a clear, precise definition of the concept. Across several object detection scenarios, including faces, pedestrians, vehicles, and traffic signs, we survey and summarize recent physical adversarial attack methods against object detection networks and their characteristics. Finally, we discuss the severe challenges facing physical adversarial attacks, particularly the limitations of adversarial training and its shortcomings in practical applications. Based on current research progress, we outline possible future directions and application prospects in this field, aiming to provide useful references and insights for enhancing the security and robustness of deep learning models.
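As a concrete illustration of how a subtle perturbation can degrade a model's prediction, the sketch below implements the fast gradient sign method (FGSM), a standard digital-domain attack; it is not one of the physical attacks surveyed in this study, and the `model`, `image`, and `label` names are assumed placeholders for a trained PyTorch classifier and a single labeled input.

```python
# Minimal FGSM sketch (illustrative only; FGSM is a standard digital
# perturbation method, not a physical attack from this survey).
# Assumes `model` is a trained classifier and `image`/`label` are
# torch tensors with pixel values in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=8 / 255):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image.unsqueeze(0)), label.unsqueeze(0))
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon,
    # so the change stays visually subtle.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()
```

Even with a small epsilon, such a perturbation is often enough to change the model's output, which motivates the physical-world variants (patches, printed textures, wearable objects) examined in this survey.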