Abstract
In recent years there has been a significant increase in the integration of Information Technology (IT) with Operational Technology (OT). As a result, there are now more vulnerabilities available to attackers than ever before within the Industrial Internet of Things (IIoT), and many of these vulnerabilities can be found at the critical infrastructure level. Unfortunately, most of the current methods of detecting cyberattacks such as rule-based Intrusion Detection Systems (IDS) and the use of manually monitored systems, are unable to quickly identify new types of attacks or provide immediate information on the scope of damage done during those attacks. This is why this research is developing and evaluating an Artificial Intelligence (AI) driven Risk Quantification Model for Modbus-based IIoT systems using supervised machine learning. In order to evaluate the performance of the Risk Quantification Model, a dataset called ToN_IoT Modbus was used, which is a dataset of network behaviors and device telemetry provided by the University of New South Wales. Raw data from both Network Behavior Data and Device Telemetry Data was analyzed to determine when abnormal behavior had occurred in the IIoT System.
Once the data was transformed into a high-fidelity format, it was normalized and timestamped for analysis. Stratified random sampling was then performed to train and validate a Random Forest classifier that categorizes observations as either normal or abnormal. The classifier achieved a detection accuracy of 94.5%, an F1 score of 0.94, and an AUC-ROC of 0.975, indicating significantly higher accuracy in detecting abnormal behavior in the IIoT system than traditional rule-based Intrusion Detection Systems (IDS). Statistical testing of the results using t-tests and bootstrapping confirmed the validity and reliability of the findings.
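The evaluation pipeline described above (stratified sampling, a Random Forest classifier, accuracy/F1/AUC-ROC scoring, and bootstrap validation) can be sketched as follows. This is a minimal illustration only: the ToN_IoT Modbus dataset is not bundled here, so synthetic data stands in for the merged network-behavior and telemetry features, and all hyperparameters, proportions, and the bootstrap replicate count are assumptions rather than the study's actual configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the normalized, timestamped feature matrix;
# an 80/20 class imbalance loosely mimics normal vs. abnormal traffic.
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.8, 0.2], random_state=42)

# Stratified random sampling preserves the normal/abnormal ratio
# in both the training and validation splits.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
y_prob = clf.predict_proba(X_test)[:, 1]

# The three headline metrics reported in the abstract.
acc = accuracy_score(y_test, y_pred)
f1 = f1_score(y_test, y_pred)
auc = roc_auc_score(y_test, y_prob)
print(f"accuracy={acc:.3f}  F1={f1:.3f}  AUC-ROC={auc:.3f}")

# Bootstrap a 95% confidence interval for AUC-ROC by resampling
# the validation set with replacement (illustrative replicate count).
rng = np.random.default_rng(42)
n = len(y_test)
boot_aucs = [roc_auc_score(y_test[i], y_prob[i])
             for i in (rng.integers(0, n, n) for _ in range(200))]
ci_lo, ci_hi = np.percentile(boot_aucs, [2.5, 97.5])
print(f"AUC-ROC 95% CI: [{ci_lo:.3f}, {ci_hi:.3f}]")
```

On real data, the same skeleton applies after the feature-engineering steps described above; the confidence interval makes the bootstrap-based reliability check concrete.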
The results support all three research hypotheses: machine learning based risk quantification improves detection accuracy, reduces Mean Time to Respond (MTTR) by more than 50%, and improves classification performance of the IIoT system through the combined use of network data and telemetry data. These empirical results suggest there is potential for future integration of AI-based models into industrial cybersecurity monitoring systems to provide an additional layer of defense against the growing number of IIoT cyber threats.