Utilizing Deep Convolutional Auto-encoder for Adversarial Attack Mitigation on Digital Images


Putu Widiarsa Kurniawan S., Yosi Kristian, Joan Santoso

Abstract

Adversarial attacks on digital images pose a serious threat to the use of machine learning in real-world applications. The Fast Gradient Sign Method (FGSM) has proven effective at attacking machine learning models, including classifiers trained on images from the ImageNet dataset. This research addresses the issue by applying a Deep Convolutional Auto-encoder (AE) as a mitigation method for adversarial attacks on digital images. The results show that FGSM attacks succeed on the majority of images, although some images are more resilient. Furthermore, the AE mitigation technique proves effective at reducing the impact of adversarial attacks on most images. The measured classification accuracies under attack and after mitigation are 14.58% and 91.67%, respectively.
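The FGSM attack described in the abstract perturbs an input in the direction of the sign of the loss gradient: x_adv = x + ε·sign(∇ₓJ(θ, x, y)). The sketch below illustrates this on a toy logistic-regression "model" with hand-picked weights; all names and values are illustrative, not from the paper, which uses ImageNet images and a deep classifier.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM step: x_adv = x + eps * sign(grad_x loss), clipped to valid pixel range."""
    p = sigmoid(np.dot(w, x) + b)        # model's predicted probability
    grad_x = (p - y) * w                 # gradient of cross-entropy loss w.r.t. input x
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Toy "image" of 4 pixels and a toy linear model (illustrative values only)
w = np.array([2.0, -1.0, 0.5, -2.0])
b = 0.0
x = np.array([0.8, 0.2, 0.6, 0.1])
y = 1.0                                  # true label

x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
# Each pixel moves by at most eps, yet the model's confidence in the
# true label drops after the perturbation.
```

The perturbation is bounded by ε per pixel (an L∞ constraint), which is why adversarial images look unchanged to humans while still flipping or weakening the classifier's prediction; the AE defense studied in the paper attempts to project such perturbed images back toward the clean data manifold before classification.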


Section
Articles
References
Bank, Dor, Noam Koenigstein, and Raja Giryes. 2020. “Autoencoders.”
Bhagoji, Arjun Nitin, Daniel Cullina, Chawin Sitawarin, and Prateek Mittal. 2017. “Enhancing Robustness of Machine Learning Systems via Data Transformations.”
Carlini, Nicholas, and David Wagner. 2016. "Towards Evaluating the Robustness of Neural Networks." Proceedings - IEEE Symposium on Security and Privacy, 39–57. doi: 10.48550/arxiv.1608.04644.
Chakraborty, Anirban, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, and Debdeep Mukhopadhyay. 2018. “Adversarial Attacks and Defences: A Survey.”
Chrabaszcz, Patryk, Ilya Loshchilov, and Frank Hutter. 2017. “A Downsampled Variant of ImageNet as an Alternative to the CIFAR Datasets.”
Masruri, Dyas Irvan, Sugeng Widodo, and Febry Eka Purwiantono. 2021. "Implementation of k-Means for Information Systems for the Spread of Epidemic Diseases in Kota Malang." J-Intech: Journal of Information and Technology 9(2). https://doi.org/10.32664/j-intech.v9i02.638
Goodfellow, Ian J., Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. “Generative Adversarial Networks.”
Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. 2014. “Explaining and Harnessing Adversarial Examples.”
Gu, Shixiang, and Luca Rigazio. 2014. “Towards Deep Neural Network Architectures Robust to Adversarial Examples.”
Howard, Andrew G., Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017. “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications.”
Marzi, Zhinus, Soorya Gopalakrishnan, Upamanyu Madhow, and Ramtin Pedarsani. 2018. “Sparsity-Based Defense against Adversarial Attacks on Linear Classifiers.”
Papernot, Nicolas, Fartash Faghri, Nicholas Carlini, Ian Goodfellow, Reuben Feinman, Alexey Kurakin, Cihang Xie, Yash Sharma, Tom Brown, Aurko Roy, Alexander Matyasko, Vahid Behzadan, Karen Hambardzumyan, Zhishuai Zhang, Yi-Lin Juang, Zhi Li, Ryan Sheatsley, Abhibhav Garg, Jonathan Uesato, Willi Gierke, Yinpeng Dong, David Berthelot, Paul Hendricks, Jonas Rauber, Rujun Long, and Patrick McDaniel. 2016. “Technical Report on the CleverHans v2.1.0 Adversarial Examples Library.”
Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. 2014. “ImageNet Large Scale Visual Recognition Challenge.”
Sahay, Rajeev, Rehana Mahfuz, and Aly El Gamal. 2018. “Combatting Adversarial Attacks through Denoising and Dimensionality Reduction: A Cascaded Autoencoder Approach.”
Sandler, Mark, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. 2018. “MobileNetV2: Inverted Residuals and Linear Bottlenecks.”