Abstract
While the literature on putting a “human in the loop” in artificial intelligence (AI) and machine learning (ML) has grown significantly, limited attention has been paid to how human expertise ought to be combined with AI/ML judgments. This design question arises because of the ubiquity and quantity of algorithmic decisions being made today in the face of widespread public reluctance to forgo human expert judgment. To resolve this conflict, we propose that human expert judges be included via appeals processes for review of algorithmic decisions. Thus, the human intervenes only in a limited number of cases and only after an initial AI/ML judgment has been made. Based on an analogy with appellate processes in judiciary decision-making, we argue that this is, in many respects, a more efficient way to divide the labor between a human and a machine. Human reviewers can add more nuanced clinical, moral, or legal reasoning, and they can consider case-specific information that is not easily quantified and, as such, not available to the AI/ML at an initial stage. In doing so, the human can serve as a crucial error correction check on the AI/ML, while retaining much of the efficiency of AI/ML’s use in the decision-making process. In this paper, we develop these widely applicable arguments while focusing primarily on examples from the use of AI/ML in medicine, including organ allocation, fertility care, and hospital readmission.
Affiliations
1 The Project on Precision Medicine, Artificial Intelligence, and the Law (PMAIL), The Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, Cambridge, USA (GRID:grid.429309.5); Harvard Law School, Cambridge, USA (GRID:grid.38142.3c) (ISNI:000000041936754X)
2 University of Toronto, Toronto, Canada (GRID:grid.17063.33) (ISNI:0000 0001 2157 2938)
3 The Project on Precision Medicine, Artificial Intelligence, and the Law (PMAIL), The Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, Cambridge, USA (GRID:grid.429309.5); Penn State Dickinson Law, Carlisle, USA (GRID:grid.29857.31) (ISNI:0000 0001 2097 4281)
4 INSEAD, Fontainebleau, France (GRID:grid.424837.e) (ISNI:0000 0004 1791 3287); INSEAD, Singapore, Singapore (GRID:grid.469459.3)
5 INSEAD, Fontainebleau, France (GRID:grid.424837.e) (ISNI:0000 0004 1791 3287)