
Abstract

We have entered a new era of machine learning (ML), where the most accurate algorithm with superior predictive power may not even be deployable unless it is admissible under the regulatory constraints. This has led to great interest in developing fair, transparent, and trustworthy ML methods. The purpose of this article is to introduce a new information-theoretic learning framework (admissible machine learning) and algorithmic risk-management tools (InfoGram, L-features, ALFA-testing) that can guide an analyst to redesign off-the-shelf ML methods to be regulatory compliant, while maintaining good prediction accuracy. We illustrate our approach using several real-data examples from the financial sector, biomedical research, marketing campaigns, and the criminal justice system.
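The abstract's core idea can be illustrated with a toy sketch. Roughly, an infogram scores each feature on two axes: relevance (how much it tells you about the outcome) and safety (how much of that information survives after conditioning on protected attributes); features that score high on relevance but low on safety fall into an "L-shaped" region and are candidate inadmissible L-features. The sketch below is an assumption-laden simplification using plain empirical mutual information on discrete data, not the paper's normalized estimators; the function names (`toy_infogram`, etc.) and the fixed `threshold` are hypothetical. The paper's actual method is implemented in the open-source H2O-3 library.

```python
import numpy as np

def mutual_info(x, y):
    """Empirical mutual information I(X;Y) in bits for discrete 1-D arrays."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                px, py = np.mean(x == xv), np.mean(y == yv)
                mi += pxy * np.log2(pxy / (px * py))
    return mi

def cond_mutual_info(x, y, z):
    """I(X;Y|Z): average of within-group MI, weighted by p(Z=z)."""
    return sum(np.mean(z == zv) * mutual_info(x[z == zv], y[z == zv])
               for zv in np.unique(z))

def toy_infogram(X, y, protected, threshold=0.1):
    """For each column of X, return (index, relevance, safety, admissible).

    Relevance = I(feature; y); safety = I(feature; y | protected).
    A feature is kept (admissible) only if both scores clear the threshold;
    a relevant feature with near-zero safety is a candidate L-feature,
    since its predictive power flows entirely through the protected attribute.
    """
    out = []
    for j in range(X.shape[1]):
        rel = mutual_info(X[:, j], y)
        saf = cond_mutual_info(X[:, j], y, protected)
        out.append((j, rel, saf, bool(rel >= threshold and saf >= threshold)))
    return out

# Toy demo: feature 0 is an exact proxy for the protected attribute,
# feature 1 carries independent signal. With y = protected AND feature1,
# both features are relevant, but feature 0 has zero safety.
rng = np.random.default_rng(0)
protected = rng.integers(0, 2, 2000)
signal = rng.integers(0, 2, 2000)
y = protected & signal
X = np.column_stack([protected, signal])
scores = toy_infogram(X, y, protected)
```

In this demo, `scores[0]` (the protected-attribute proxy) is flagged inadmissible because its conditional mutual information given the protected attribute is exactly zero, while `scores[1]` passes both axes. The real infogram refines this picture with standardized indices and visual diagnostics rather than a single hard threshold.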

Details

Title
InfoGram and admissible machine learning
Author
Mukhopadhyay, Subhadeep

United Analytics and Computational Intelligence Inc and H2O.ai, Mountain View, USA
Pages
205-242
Publication year
2022
Publication date
Jan 2022
Publisher
Springer Nature B.V.
ISSN
0885-6125
e-ISSN
1573-0565
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2625128873
Copyright
© The Author(s), under exclusive licence to Springer Science+Business Media LLC, part of Springer Nature 2022.