CYBER 2020, The Fifth International Conference on Cyber-Technologies and Cyber-Systems


Exploiting Vulnerabilities in Deep Neural Networks: Adversarial and Fault-Injection Attacks

Authors:
Faiq Khalid
Muhammad Abdullah Hanif
Muhammad Shafique

Keywords: Deep Neural Networks, Adversarial Attacks, Machine Learning Security, Fault-injection Attacks

Abstract:
From tiny pacemaker chips to aircraft collision-avoidance systems, state-of-the-art Cyber-Physical Systems (CPS) increasingly rely on Deep Neural Networks (DNNs). However, as various studies have concluded, DNNs are highly susceptible to security threats, including adversarial attacks. In this paper, we first discuss the different vulnerabilities that can be exploited to mount security attacks against neural network-based systems. We then provide an overview of existing adversarial and fault-injection-based attacks on DNNs. We also present a brief analysis highlighting the challenges in practically implementing adversarial attacks. Finally, we discuss prospective ways to develop robust DNN-based systems that are resilient to adversarial and fault-injection attacks.
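To make the adversarial-attack notion in the abstract concrete, the following is a minimal sketch of a gradient-sign perturbation in the style of FGSM (Goodfellow et al., 2015) on a toy logistic model. The weights, input, and epsilon below are assumptions for illustration only; the paper itself surveys such attacks at a higher level and does not prescribe this model.

```python
import math

# Toy "trained" logistic classifier (all values assumed for illustration)
w = [0.9, -1.3, 0.4, 0.7]   # fixed model weights
x = [0.2, 0.5, -0.1, 0.8]   # clean input sample
y = 1.0                     # true label

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(xv):
    # Binary cross-entropy of the model's prediction on input xv
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, xv)))
    return -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))

# Gradient of the loss w.r.t. the input: dL/dx_i = (p - y) * w_i
p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
grad = [(p - y) * wi for wi in w]

# FGSM-style step: move each input feature by eps in the gradient's sign direction
eps = 0.25
x_adv = [xi + eps * math.copysign(1.0, g) for xi, g in zip(x, grad)]

print(loss(x), loss(x_adv))  # the perturbed input incurs a higher loss
```

Because the logistic loss is convex in the input here, the sign-following step is guaranteed to increase the loss; against a deep network the same step only follows a first-order approximation, which is why attacks in the literature iterate it or constrain it within a norm ball.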

Pages: 24 to 29

Copyright: Copyright (c) IARIA, 2020

Publication date: October 25, 2020

Published in: CYBER 2020 conference proceedings

ISSN: 2519-8599

ISBN: 978-1-61208-818-1

Location: Nice, France

Dates: October 25–29, 2020