Abstract

Predictions obtained by, e.g., artificial neural networks have a high accuracy, but humans often perceive the models as black boxes: insights into the decision making are mostly opaque to humans. Understanding the decision making is particularly important in highly sensitive areas such as healthcare and finance. The decision making behind these black boxes needs to become more transparent, accountable, and understandable for humans. This survey paper provides essential definitions and an overview of the different principles and methodologies of explainable Supervised Machine Learning (SML). We conduct a state-of-the-art survey that reviews past and recent explainable SML approaches and classifies them according to the introduced definitions. Finally, we illustrate the principles by means of an explanatory case study and discuss important future directions.

Details

Title
A Survey on the Explainability of Supervised Machine Learning
Author
Burkart, Nadia; Huber, Marco F.
Pages
245-317
Section
Articles
Publication year
2021
Publication date
2021
Publisher
AI Access Foundation
ISSN
1076-9757
e-ISSN
1943-5037
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2553249126
Copyright
© 2021. Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the associated terms available at https://www.jair.org/index.php/jair/about