Enhancing Industrial IoT Cybersecurity with Explainable AI: A SHAP and LIME-Based Intrusion Detection Methodology


Date

2025

Journal Title

Journal ISSN

Volume Title

Publisher

IEEE

Access Rights

info:eu-repo/semantics/closedAccess

Abstract

Although the proliferation of Industrial Internet of Things (IIoT) systems has transformed industrial operations, it has also introduced significant cybersecurity challenges. Ensuring IIoT network security requires robust, interpretable models capable of detecting and mitigating threats. This study integrates the Explainable Artificial Intelligence (XAI) techniques SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations) to enhance the interpretability of machine learning-based intrusion detection in IIoT. Using the WUSTL-IIoT-2021 dataset, we evaluated Conditional Variational Autoencoder (CVAE), Decision Tree (DT), and Random Forest (RF) models, analyzing their transparency and performance. SHAP and LIME identify critical features such as DstJitter, Dport, and SAppBytes, contributing to improved explainability. RF achieves near-perfect accuracy (99.99%), while optimized feature subsets maintain high accuracy at lower computational cost. The results highlight XAI's role in balancing accuracy, interpretability, and efficiency in IIoT cybersecurity, paving the way for more trustworthy intrusion detection systems.
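The pipeline the abstract describes (train a tree-ensemble detector, rank the features driving its decisions, then keep a reduced feature subset) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the WUSTL-IIoT-2021 data and the shap/lime packages are not reproduced here, so synthetic data stands in for the network-flow features, and the forest's built-in impurity-based importances stand in for SHAP/LIME attributions.

```python
# Minimal sketch of a Random Forest intrusion-detection pipeline with
# feature ranking. Synthetic data replaces WUSTL-IIoT-2021; impurity-based
# importances replace SHAP/LIME values (both are assumptions for illustration).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-ins for flow features such as DstJitter, Dport, and SAppBytes.
feature_names = [f"feat_{i}" for i in range(10)]
X, y = make_classification(n_samples=2000, n_features=10, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
accuracy = rf.score(X_te, y_te)

# Rank features; retraining on a small top-k subset mirrors the paper's
# finding that reduced subsets retain accuracy at lower computational cost.
ranking = sorted(zip(feature_names, rf.feature_importances_),
                 key=lambda p: p[1], reverse=True)
top_features = [name for name, _ in ranking[:3]]
print(f"accuracy={accuracy:.3f}, top features: {top_features}")
```

In the paper's actual setup, the ranking step would instead use `shap.TreeExplainer` for global attributions and LIME's tabular explainer for per-sample explanations.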

Description

7th International Congress on Human-Computer Interaction, Optimization and Robotic Applications (ICHORA)

Keywords

IIoT, SHAP, LIME, cybersecurity, intrusion detection, decision tree, random forest, conditional variational autoencoder

Source

2025 7th International Congress on Human-Computer Interaction, Optimization and Robotic Applications (ICHORA)

WoS Q Value

Scopus Q Value

Volume

Issue

Citation