Adversarial robustness of deep learning enabled industry 4.0 prognostics
Format
Thesis
Abstract
The advent of Industry 4.0 in automation and data exchange is driving a constant evolution of smart manufacturing environments, including extensive use of the Internet of Things (IoT) and Deep Learning (DL). In particular, state-of-the-art Prognostics and Health Management (PHM) has delivered a competitive edge in Industry 4.0 by reducing maintenance cost and downtime and increasing productivity through data-driven, informed decisions. These PHM systems combine IoT sensor data with DL algorithms to predict Remaining Useful Life (RUL). Unfortunately, both IoT sensors and DL algorithms are prone to cyber-attacks; in particular, deep learning algorithms are known to be susceptible to adversarial examples. Such adversarial attacks have been studied extensively in the computer vision domain, yet surprisingly their impact on the PHM domain has not been explored. This vulnerability poses a significant threat to safety- and cost-critical applications of modern data-driven intelligent PHM systems. In this thesis, we therefore propose a methodology for designing adversarially robust PHM systems by analyzing the effect of different types of adversarial attacks on several DL-enabled PHM models. More specifically, we craft adversarial attacks using the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM) and evaluate their impact on Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Convolutional Neural Network (CNN), Bi-directional LSTM, and Multi-Layer Perceptron (MLP) based PHM models using the proposed methodology. Results obtained on NASA's turbofan engine dataset and a well-known battery PHM dataset show that these systems are vulnerable to adversarial attacks, which can cause serious errors in RUL prediction. We also use the proposed methodology to analyze the impact of adversarial training on the adversarial robustness of the PHM systems.
The obtained results show that adversarial training significantly improves the robustness of these PHM models.
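The FGSM and BIM attacks named in the abstract can be sketched on a toy linear RUL regressor. This is a minimal illustration only: the weights, inputs, and squared-error loss below are hypothetical, not the LSTM/GRU/CNN models or datasets used in the thesis.

```python
import numpy as np

# Hypothetical linear "RUL model": predict(x) = w . x + b
w = np.array([0.5, -1.2, 0.8])
b = 0.1

def predict(x):
    return x @ w + b

def grad_loss_wrt_x(x, y_true):
    # Gradient of the squared-error loss 0.5*(predict(x) - y)^2
    # with respect to the input x; for a linear model this is
    # (prediction error) * w.
    return (predict(x) - y_true) * w

def fgsm(x, y_true, eps):
    # FGSM: a single step of size eps in the sign direction of the
    # input gradient, which increases the prediction error.
    return x + eps * np.sign(grad_loss_wrt_x(x, y_true))

def bim(x, y_true, eps, alpha, steps):
    # BIM: iterate small FGSM steps of size alpha, clipping the total
    # perturbation to the eps-ball around the original input.
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_loss_wrt_x(x_adv, y_true))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

x = np.array([1.0, 0.5, -0.3])   # toy sensor reading
y_true = 0.0                     # toy RUL label
x_fgsm = fgsm(x, y_true, eps=0.1)
x_bim = bim(x, y_true, eps=0.1, alpha=0.02, steps=10)
```

Adversarial training, as studied in the thesis, amounts to generating such perturbed inputs during training and fitting the model on them alongside the clean data, so that small gradient-directed perturbations no longer produce large RUL errors.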
Degree
M.S.
Rights
OpenAccess.
License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License. Copyright held by author.
