Enhancing Industrial IoT Cybersecurity with Explainable AI: A SHAP and LIME-Based Intrusion Detection Methodology
Abstract
Although the proliferation of Industrial Internet of Things (IIoT) systems has transformed industrial operations, it has also introduced significant cybersecurity challenges. Ensuring IIoT network security requires robust, interpretable models capable of detecting and mitigating threats. This study integrates the Explainable Artificial Intelligence (XAI) techniques SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations) to enhance the interpretability of machine learning-based intrusion detection in IIoT. Using the WUSTL-IIoT-2021 dataset, we evaluated Conditional Variational Autoencoder (CVAE), Decision Tree (DT), and Random Forest (RF) models, analyzing their transparency and performance. SHAP and LIME identify critical features such as DstJitter, Dport, and SAppBytes, improving explainability. RF achieves near-perfect accuracy (99.99%), while optimized feature subsets maintain high accuracy at lower computational cost. The results highlight XAI's role in balancing accuracy, interpretability, and efficiency in IIoT cybersecurity, paving the way for more trustworthy intrusion detection systems.
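The pipeline the abstract describes (train a tree-based classifier on IIoT flow features, then rank features by a model-agnostic attribution method) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the data here is synthetic (the WUSTL-IIoT-2021 dataset is not reproduced), only the three feature names mentioned in the abstract are used, and scikit-learn's permutation importance stands in for SHAP/LIME, which require the third-party `shap` and `lime` packages.

```python
# Hypothetical sketch of the paper's attribution step; synthetic data
# stands in for WUSTL-IIoT-2021, and permutation importance stands in
# for SHAP/LIME as the model-agnostic feature-attribution method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Feature names taken from the abstract; the values are synthetic.
features = ["DstJitter", "Dport", "SAppBytes"]
X = rng.normal(size=(n, 3))
# Construct the attack/benign label to depend mainly on DstJitter,
# so the attribution method should rank it first.
y = (X[:, 0] + 0.2 * rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_tr, y_tr)

# Model-agnostic attribution: shuffle each feature on held-out data
# and measure the accuracy drop.
result = permutation_importance(rf, X_te, y_te, n_repeats=10,
                                random_state=0)
ranking = sorted(zip(features, result.importances_mean),
                 key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name}: {score:.3f}")
```

With the real dataset, the same ranking step would instead use `shap.TreeExplainer` (for the tree models) or a LIME tabular explainer, which additionally provide per-sample explanations rather than only a global ranking.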









