The increasing complexity of machine learning models necessitates robust methods for interpretability, particularly in clustering applications, where understanding group characteristics is critical. To this end, this paper introduces a novel framework that combines cooperative game theory and explainable artificial intelligence (XAI) to enhance the interpretability of black-box clustering models. The framework integrates approximated Shapley values with multi-level clustering to reveal hierarchical feature interactions, enabling both local and global interpretability. We validate the framework through extensive empirical evaluations on two datasets, the Portuguese wine quality benchmark and the Beijing Multi-Site Air Quality dataset, demonstrating improved clustering quality and interpretability: features such as density and total sulfur dioxide emerge as dominant predictors in the wine analysis, while pollutants such as PM2.5 and NO2 strongly influence air quality clustering. Key contributions include a multi-level clustering approach that reveals hierarchical feature attribution, interactive visualizations produced with Altair, and a unified interpretability framework validated against state-of-the-art baselines. The framework thus forms a strong basis for interpretable clustering in critical fields such as healthcare, finance, and environmental monitoring, underscoring its generalizability across domains. The results highlight the need for interpretability in machine learning, providing actionable insights for stakeholders across a variety of fields.
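To make the core idea concrete, the following is a minimal sketch of approximated Shapley values computed over a multi-level (nested) KMeans clustering, assuming the shap and scikit-learn libraries. The cluster_score function, the two-level loop, and the use of scikit-learn's built-in wine dataset as a stand-in for the Portuguese benchmark are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch: approximated Shapley values for cluster membership, at two levels.
import numpy as np
import shap
from sklearn.cluster import KMeans
from sklearn.datasets import load_wine
from sklearn.preprocessing import StandardScaler

X = StandardScaler().fit_transform(load_wine().data)

def explain_clustering(data, n_clusters, nsamples=100):
    """Fit KMeans, then approximate per-feature Shapley values for a
    soft membership score (negative distance to the nearest centroid)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(data)

    def cluster_score(rows):
        # Higher score = stronger membership in the assigned cluster.
        return -km.transform(rows).min(axis=1)

    background = shap.sample(data, 50)  # background set for the kernel approximation
    explainer = shap.KernelExplainer(cluster_score, background)
    sv = explainer.shap_values(data[:20], nsamples=nsamples)  # subset keeps it cheap
    return km, np.abs(sv).mean(axis=0)  # global importance = mean |Shapley value|

# Level 1: coarse clusters over the full dataset.
km1, imp1 = explain_clustering(X, n_clusters=3)
print("Top-level feature importance:", np.round(imp1, 3))

# Level 2: re-cluster and re-explain within one top-level cluster,
# exposing hierarchical (multi-level) feature attributions.
sub = X[km1.labels_ == 0]
_, imp2 = explain_clustering(sub, n_clusters=2)
print("Sub-cluster feature importance:", np.round(imp2, 3))
```

Comparing the two importance vectors shows how dominant features at the global level can differ from those that separate sub-clusters, which is the hierarchical attribution the framework is designed to surface.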