Paper | 4 January 2021
DOME-T: adversarial computer vision attack on deep learning models based on Tchebichef image moments
Proceedings Volume 11605, Thirteenth International Conference on Machine Vision; 116050D (2021) https://doi.org/10.1117/12.2587268
Event: Thirteenth International Conference on Machine Vision, 2020, Rome, Italy
Abstract
In this paper, a novel black-box adversarial computer vision attack is proposed. The introduced attack is based on removing from images certain components described by their Tchebichef discrete orthogonal moments, rather than perturbing them. The contribution of this work is the addition of one more clue supporting the critical hypothesis that computer vision systems fail because they base their decisions not only on robust features but also on non-robust ones. In this context, non-robust image features described in terms of Tchebichef moments are excluded from the original images, and the approximate reconstructed versions are used as adversarial examples to attack some popular deep learning models. The experiments justify the effectiveness of the proposed adversarial attack in terms of imperceptibility and the recognition error rate of the deep learning classifiers. It is worth noting that the top-1 accuracy of the attacked models was degraded by a factor between 9.48% and 70.89% for adversarial images with PSNR values from 65 dB to 57 dB. The corresponding degradation of the models' top-5 accuracy was between 6.9% and 55.14% for images of the same quality. Moreover, the proposed attack appears to be stronger in most cases than the traditionally applied Fast Gradient Sign Method (FGSM). These results reveal that the proposed attack is able to exploit the vulnerability of deep learning models and degrade their generalization abilities.
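The abstract does not specify which moment components are treated as non-robust and removed. As a rough, non-authoritative sketch of the mechanism only (the helper names `tchebichef_basis` and `dome_t_like_attack`, and the cutoff rule `n + m < keep`, are illustrative assumptions, not the authors' method), the following Python code builds the orthonormal discrete Tchebichef basis via its standard three-term recurrence, transforms a square grayscale image into the moment domain, zeroes moments above a chosen order, and reconstructs the image to serve as the adversarial example:

```python
import numpy as np

def tchebichef_basis(N):
    """Orthonormal discrete Tchebichef polynomials t_n(x), n, x = 0..N-1,
    built with the standard three-term recurrence; row n holds t_n."""
    x = np.arange(N, dtype=float)
    P = np.zeros((N, N))
    P[0] = 1.0 / np.sqrt(N)
    P[1] = (2.0 * x + 1.0 - N) * np.sqrt(3.0 / (N * (N**2 - 1.0)))
    for n in range(2, N):
        a = (2.0 / n) * np.sqrt((4.0 * n**2 - 1.0) / (N**2 - n**2))
        b = ((1.0 - N) / n) * np.sqrt((4.0 * n**2 - 1.0) / (N**2 - n**2))
        c = ((n - 1.0) / n) * np.sqrt((2.0 * n + 1.0) / (2.0 * n - 3.0)
                                      * (N**2 - (n - 1.0)**2) / (N**2 - n**2))
        P[n] = (a * x + b) * P[n - 1] - c * P[n - 2]
    return P  # orthogonal: P @ P.T ~= I (recurrence loses accuracy for large N)

def dome_t_like_attack(img, keep):
    """Remove high-order moment components from a square grayscale image:
    forward 2D Tchebichef transform, keep only moments with n + m < keep
    (an assumed stand-in for the paper's unspecified selection of
    non-robust components), then inverse-transform to the pixel domain."""
    N = img.shape[0]
    P = tchebichef_basis(N)
    M = P @ img @ P.T                                    # 2D moment matrix
    mask = np.add.outer(np.arange(N), np.arange(N)) < keep
    return P.T @ (M * mask) @ P                          # reconstruction

# Toy usage: attack a random "image" and measure distortion as PSNR,
# the quality metric the paper reports (57-65 dB for its examples).
img = np.random.default_rng(0).uniform(0.0, 255.0, size=(64, 64))
adv = dome_t_like_attack(img, keep=110)
psnr = 10.0 * np.log10(255.0**2 / np.mean((img - adv) ** 2))
print(f"PSNR of reconstruction: {psnr:.1f} dB")
```

Because the basis is orthonormal, setting `keep` to the maximum order reproduces the input exactly; lowering it trades PSNR against how much moment content is stripped, mirroring the quality/attack-strength trade-off reported in the abstract.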
© (2021) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
T. Maliamanis and G. A. Papakostas "DOME-T: adversarial computer vision attack on deep learning models based on Tchebichef image moments", Proc. SPIE 11605, Thirteenth International Conference on Machine Vision, 116050D (4 January 2021); https://doi.org/10.1117/12.2587268