Title
Transfer-based Adversarial Attack using Camera Viewpoint Transformation
Authors
김희선(Hee-Seon Kim); 권명준(Myung-Joon Kwon); 변준영(Junyoung Byun); 김창익(Changick Kim)
DOI
https://doi.org/10.5573/ieie.2022.59.7.53
Keywords
Adversarial attack; Transferability; Transfer-based adversarial attack; Adversarial example; Camera viewpoint transformation
Abstract
An adversarial attack is a technique that misleads convolutional neural networks into wrong predictions by adding human-imperceptible noise to clean images. When access to the target model is prohibited, the attacker must rely on the transferability of adversarial examples. Transferability refers to the characteristic of neural networks whereby an adversarial example crafted on one network is likely to fool other, structurally different networks as well. In this setting, the adversary generates adversarial examples using a local source model and expects these images to remain adversarial for an unknown target model. This paper proposes a Camera Viewpoint Transformation to improve the transferability of adversarial examples, based on the assumption that an adversarial example should remain adversarial from any point of view. Applying the Camera Viewpoint Transformation during the attack prevents the generated images from overfitting to the source model and hence yields stronger adversarial images. Consequently, the Camera Viewpoint Transformation method achieved a 5.4%p higher attack success rate than similar existing methods. Finally, we analyze in detail the reasons for this performance improvement in terms of the number of image transformations and their randomness.
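The attack family described in the abstract can be illustrated with a minimal sketch. This is not the authors' exact method: it assumes a toy linear softmax "source model", uses a random rotation-plus-scale warp as a crude stand-in for a full camera viewpoint (perspective) transformation, and follows a generic iterative FGSM-style update that averages input gradients over several random viewpoints per step — the averaging over transformed views is what discourages overfitting to the source model.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_viewpoint(max_angle=0.15, max_scale=0.1):
    """Random small rotation + scale: a crude stand-in for a camera viewpoint change."""
    a = rng.uniform(-max_angle, max_angle)
    s = 1.0 + rng.uniform(-max_scale, max_scale)
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]]) * s

def warp(img, M):
    """Nearest-neighbour warp about the image centre; out-of-frame pixels become 0."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # sample source coordinates via the inverse transform
    src = np.stack([ys - h / 2, xs - w / 2], axis=-1) @ np.linalg.inv(M).T
    yy = np.rint(src[..., 0] + h / 2).astype(int)
    xx = np.rint(src[..., 1] + w / 2).astype(int)
    ok = (yy >= 0) & (yy < h) & (xx >= 0) & (xx < w)
    out = np.zeros_like(img)
    out[ok] = img[yy[ok], xx[ok]]
    return out

def grad_wrt_input(W, x, label):
    """Input gradient of softmax cross-entropy for a linear 'source model' W."""
    z = W @ x.ravel()
    p = np.exp(z - z.max()); p /= p.sum()
    p[label] -= 1.0                      # d(loss)/d(logits)
    return (W.T @ p).reshape(x.shape)

def viewpoint_attack(W, x, label, eps=0.1, steps=10, n_views=5):
    """Iterative FGSM-style attack, averaging gradients over random viewpoints."""
    alpha = eps / steps
    adv = x.copy()
    for _ in range(steps):
        g = np.zeros_like(x)
        for _ in range(n_views):
            M = random_viewpoint()
            gt = grad_wrt_input(W, warp(adv, M), label)
            # the adjoint of a nearest-neighbour warp is (approximately) the inverse warp
            g += warp(gt, np.linalg.inv(M))
        adv = np.clip(adv + alpha * np.sign(g), x - eps, x + eps)  # L-inf constraint
        adv = np.clip(adv, 0.0, 1.0)                               # valid pixel range
    return adv
```

In a real transfer attack, `grad_wrt_input` would be the backward pass of a pretrained CNN and the warp would be a differentiable perspective transform, but the control flow is the same: transform, take the gradient, aggregate over views, then take a sign-gradient step within the perturbation budget.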