
Abstract

Malware continues to pose a critical threat to computing systems, with modern techniques often bypassing traditional signature-based defenses. Ensemble-boosting classifiers, including gradient boosting (GBC), XGBoost, AdaBoost, LightGBM, and CatBoost, have shown strong predictive performance for malware detection, yet their “black-box” nature limits transparency, interpretability, and trust, all of which are essential for deployment in high-stakes cybersecurity environments. This paper proposes a unified explainable AI (XAI) framework to address these challenges by improving the interpretability, fairness, transparency, and efficiency of ensemble-boosting models in malware and intrusion detection tasks. The framework integrates SHAP for global feature importance and complex interaction analysis; LIME for local, instance-level explanations; and DALEX for fairness auditing across sensitive attributes, ensuring that predictions remain both equitable and meaningful across diverse user populations. We rigorously evaluate the framework on a large-scale, balanced dataset derived from Microsoft Windows Defender telemetry, covering various types of malware. Experimental results demonstrate that the unified XAI approach not only achieves high malware detection accuracy but also uncovers complex feature interactions, such as the combined effects of system configuration and security states. To establish generalization, we further validate the framework on the CICIDS-2017 intrusion detection dataset, where it successfully adapts to different network threat patterns, highlighting its robustness across distinct cybersecurity domains. Comparative experiments against state-of-the-art XAI tools, including AnchorTabular (rule-based explanations) and Fairlearn (fairness-focused analysis), reveal that the proposed framework consistently delivers deeper insights into model behavior, achieves better fairness metrics, and reduces explanation overhead. By combining global and local interpretability, fairness assurance, and computational optimizations, this unified XAI framework offers a scalable, human-understandable, and trustworthy solution for deploying ensemble-boosting models in real-world malware detection and intrusion prevention systems.
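
As a rough illustration of the pipeline the abstract describes, the sketch below trains an XGBoost classifier on synthetic tabular data and runs the three explanation layers named above: SHAP for global attributions, LIME for a single-instance explanation, and DALEX for a fairness check. The data, feature names, and protected attribute (derived here from an arbitrary feature) are placeholders, not the Microsoft Defender or CICIDS-2017 datasets, and the code is a minimal sketch under those assumptions rather than the authors' implementation.

```python
# Hedged sketch of the unified XAI pipeline: boosted classifier + SHAP (global)
# + LIME (local) + DALEX (fairness). Synthetic data; placeholder protected group.
import numpy as np
import pandas as pd
import shap
import dalex as dx
import xgboost as xgb
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for telemetry-style tabular features (assumption, not the
# datasets used in the paper).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
cols = [f"f{i}" for i in range(10)]
X = pd.DataFrame(X, columns=cols)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)

# Global interpretability: mean |SHAP| attribution per feature over the test set.
shap_values = shap.TreeExplainer(model).shap_values(X_test)
global_importance = pd.Series(np.abs(shap_values).mean(axis=0), index=cols)
print(global_importance.sort_values(ascending=False).head())

# Local interpretability: LIME explanation for a single prediction.
predict_fn = lambda a: model.predict_proba(pd.DataFrame(a, columns=cols))
lime_explainer = LimeTabularExplainer(
    X_train.values, feature_names=cols,
    class_names=["benign", "malware"], mode="classification",
)
print(lime_explainer.explain_instance(
    X_test.iloc[0].values, predict_fn, num_features=5).as_list())

# Fairness auditing: DALEX fairness check on a hypothetical protected attribute.
protected = np.where(X_test["f0"] > 0, "group_a", "group_b")
dx_exp = dx.Explainer(model, X_test, y_test, verbose=False)
dx_exp.model_fairness(protected=protected, privileged="group_a").fairness_check()
```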

Full text


© The Author(s) 2025. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the “License”).