
© The Author(s) 2023. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.

Abstract

Background

Surgeons who receive reliable feedback on their performance quickly master the skills necessary for surgery. Such performance-based feedback can be provided by a recently developed artificial intelligence (AI) system that assesses a surgeon’s skills based on a surgical video while simultaneously highlighting the aspects of the video most pertinent to the assessment. However, it remains an open question whether these highlights, or explanations, are equally reliable for all surgeons.

Methods

Here, we systematically quantify the reliability of AI-based explanations on surgical videos from three hospitals across two continents by comparing them to explanations generated by human experts. To improve the reliability of AI-based explanations, we propose training with explanations (TWIX), which uses human explanations as supervision to explicitly teach an AI system to highlight important video frames.
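The core idea of explanation supervision can be illustrated with a minimal sketch. The abstract does not specify the loss function, so the formulation below is an assumption: a task loss on the skill assessment combined with a per-frame loss that pushes the model's frame-importance scores toward the frames highlighted by human experts. All function and parameter names (`twix_loss`, `alpha`, etc.) are illustrative, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    """Numerically plain logistic function."""
    return 1.0 / (1.0 + np.exp(-x))

def twix_loss(skill_logit, skill_label, frame_scores, human_highlights, alpha=1.0):
    """Hypothetical TWIX-style objective.

    skill_logit      : scalar logit for the binary skill assessment
    skill_label      : 0/1 ground-truth skill label
    frame_scores     : array of per-frame importance logits
    human_highlights : 0/1 mask of frames highlighted by human experts
    alpha            : weight of the explanation-supervision term
    """
    # Task term: binary cross-entropy on the skill assessment
    p = sigmoid(skill_logit)
    task_loss = -(skill_label * np.log(p) + (1 - skill_label) * np.log(1 - p))

    # Explanation term: per-frame binary cross-entropy that rewards
    # agreement between model attention and human highlights
    q = sigmoid(np.asarray(frame_scores, dtype=float))
    h = np.asarray(human_highlights, dtype=float)
    expl_loss = -np.mean(h * np.log(q) + (1 - h) * np.log(1 - q))

    return task_loss + alpha * expl_loss
```

Under this formulation, a model whose frame scores agree with the human highlights incurs a smaller total loss than one that attends to the wrong frames, which is the mechanism by which the supervision "teaches" the system to produce human-aligned explanations.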

Results

We show that while AI-based explanations often align with human explanations, they are not equally reliable for different sub-cohorts of surgeons (e.g., novices vs. experts), a phenomenon we refer to as an explanation bias. We also show that TWIX enhances the reliability of AI-based explanations, mitigates the explanation bias, and improves the performance of AI systems across hospitals. These findings extend to a training environment where medical students can be provided with feedback today.

Conclusions

Our study informs the impending implementation of AI-augmented surgical training and surgeon credentialing programs, and contributes to the safe and fair democratization of surgery.

Plain language summary

Surgeons aim to master the skills necessary for surgery. One such skill is suturing, which involves connecting objects together through a series of stitches. Mastering surgical skills can be accelerated by providing surgeons with feedback on the quality of their performance. However, such feedback is often absent from surgical practice. Although performance-based feedback can, in theory, be provided by recently developed artificial intelligence (AI) systems that use a computational model to assess a surgeon’s skill, the reliability of this feedback remains unknown. Here, we compare AI-based feedback to that provided by human experts and demonstrate that the two often overlap. We also show that explicitly teaching an AI system to align with human feedback further improves the reliability of AI-based feedback on new videos of surgery. Our findings outline the potential of AI systems to support the training of surgeons by providing reliable, skill-specific feedback, and to guide surgeon credentialing programs by complementing skill assessments with explanations that increase the trustworthiness of those assessments.

Details

Title
A multi-institutional study using artificial intelligence to provide reliable and fair feedback to surgeons
Author
Kiyasseh, Dani 1; Laca, Jasper 2; Haque, Taseen F. 2; Miles, Brian J. 3; Wagner, Christian 4; Donoho, Daniel A. 5; Anandkumar, Animashree 1; Hung, Andrew J. 2

1 California Institute of Technology, Department of Computing and Mathematical Sciences, Pasadena, USA (GRID:grid.20861.3d) (ISNI:0000000107068890)
2 University of Southern California, Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, Los Angeles, USA (GRID:grid.42505.36) (ISNI:0000 0001 2156 6853)
3 Houston Methodist Hospital, Department of Urology, Houston, USA (GRID:grid.63368.38) (ISNI:0000 0004 0445 0041)
4 Prostate Center Northwest, St. Antonius-Hospital, Department of Urology, Pediatric Urology and Uro-Oncology, Gronau, Germany (GRID:grid.490549.5) (ISNI:0000 0004 6102 8007)
5 Center for Neuroscience, Children’s National Hospital, Division of Neurosurgery, Washington DC, USA (GRID:grid.239560.b) (ISNI:0000 0004 0482 1586)
Pages
42
Publication year
2023
Publication date
Dec 2023
Publisher
Springer Nature B.V.
e-ISSN
2730-664X
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2792812877