Abstract

Peer review plays a crucial role in accreditation and credentialing processes, as it can identify outliers and foster a peer learning approach that facilitates error analysis and knowledge sharing. However, traditional peer review methods may fall short in effectively addressing the interpretive variability among reviewing and primary reading radiologists, hindering scalability and effectiveness. Reducing this variability is key to enhancing the reliability of results and instilling confidence in the review process. In this paper, we propose a novel statistical approach called “Bayesian Inter-Reviewer Agreement Rate” (BIRAR) that integrates radiologist variability. By doing so, BIRAR aims to enhance the accuracy and consistency of peer review assessments, providing physicians involved in quality improvement and peer learning programs with valuable and reliable insights. A computer simulation was designed to assign predefined interpretive error rates to hypothetical interpreting and peer-reviewing radiologists. A Monte Carlo simulation then sampled (100 samples per experiment) the data that would be generated by peer reviews. The performance of BIRAR and four other peer review methods for measuring interpretive error rates, including a method that uses a gold-standard diagnosis, was then evaluated. Accuracy was defined as the median difference, across Monte Carlo samples, between measured and predefined “actual” interpretive error rates; variability was defined as the 95% CI around that median difference. Applying the BIRAR method resulted in 93% and 79% higher relative accuracy, and 43% and 66% lower relative variability, compared to the “Single/Standard” and “Majority Panel” peer review methods, respectively. BIRAR is a practical and scalable peer review method that produces more accurate and less variable assessments of interpretive quality by accounting for variability within the group’s radiologists, implicitly applying a standard derived from the level of consensus within the group across various types of interpretive findings.
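The following is a minimal, hypothetical Python sketch (not the authors’ code; the BIRAR model itself requires the hierarchical Bayesian formulation given in the paper and is not reproduced here) of the kind of Monte Carlo experiment the abstract describes: hypothetical radiologists receive predefined error rates, peer-review data are sampled, and each measurement method’s estimate is compared with the known “actual” rate. The case counts, error rates, and panel size below are illustrative assumptions.

```python
# Sketch of the Monte Carlo evaluation described in the abstract.
# All parameter values are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

N_CASES = 500          # cases read by the interpreting radiologist
N_SIM = 100            # Monte Carlo samples per experiment (as in the paper)
READER_ERR = 0.10      # predefined "actual" interpretive error rate
REVIEWER_ERR = 0.15    # reviewers' own (unknown in practice) error rate

def simulate_once(rng):
    # Gold-standard finding for each case (binary for simplicity).
    truth = rng.integers(0, 2, N_CASES)
    flip = lambda p: rng.random(N_CASES) < p
    # Primary reads and three independent peer reviewers, each with errors.
    reader = np.where(flip(READER_ERR), 1 - truth, truth)
    panel = [np.where(flip(REVIEWER_ERR), 1 - truth, truth) for _ in range(3)]

    gold = np.mean(reader != truth)                 # gold-standard method
    single = np.mean(reader != panel[0])            # "Single/Standard" review
    majority = (np.sum(panel, axis=0) >= 2).astype(int)
    maj = np.mean(reader != majority)               # "Majority Panel" review
    return gold, single, maj

est = np.array([simulate_once(rng) for _ in range(N_SIM)])
for name, col in zip(["Gold standard", "Single/Standard", "Majority Panel"], est.T):
    diff = col - READER_ERR                         # measured minus actual rate
    lo_ci, hi_ci = np.percentile(diff, [2.5, 97.5])
    # Accuracy = median difference; variability = 95% CI, per the abstract.
    print(f"{name:16s} median diff {np.median(diff):+.3f}  "
          f"95% CI [{lo_ci:+.3f}, {hi_ci:+.3f}]")
```

Under these illustrative parameters, the “Single/Standard” disagreement rate is expected to be about 0.22 (0.10 × 0.85 + 0.90 × 0.15) against an actual rate of 0.10, showing how reviewer variability inflates a naive error-rate estimate; this is the kind of bias BIRAR is designed to absorb.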

Details

Title
Improving the Reliability of Peer Review Without a Gold Standard
Author
Äijö, Tarmo 1; Elgort, Daniel 2; Becker, Murray 3; Herzog, Richard 4; Brown, Richard K. J. 5; Odry, Benjamin L. 6; Vianu, Ron 6

1 Covera Health, New York, USA
2 Covera Health, New York, USA; Present Address: Aster Insights, Tampa, USA
3 Covera Health, New York, USA; Rutgers Robert Wood Johnson Medical School, New Brunswick, USA (GRID:grid.430387.b) (ISNI:0000 0004 1936 8796)
4 Covera Health, New York, USA
5 University of Michigan (Michigan Medicine), Department of Radiology, Ann Arbor, USA (GRID:grid.214458.e) (ISNI:0000 0004 1936 7347)
6 Covera Health, New York, USA
Publication title
Journal of Imaging Informatics in Medicine
Volume
37
Issue
2
Pages
489-503
Publication year
2024
Publication date
Apr 2024
Publisher
Springer Nature B.V.
Place of publication
New York
Country of publication
Netherlands
ISSN
0897-1889
e-ISSN
1618-727X
Source type
Scholarly Journal
Language of publication
English
Document type
Journal Article
Publication history
Online publication date
2024-02-05
Milestone dates
2023-08-04 (Received); 2023-10-29 (Revision received); 2023-11-27 (Accepted); 2024-01-19 (Registration)
First posting date
05 Feb 2024
ProQuest document ID
3041683191
Document URL
https://www.proquest.com/scholarly-journals/improving-reliability-peer-review-without-gold/docview/3041683191/se-2?accountid=208611
Copyright
© The Author(s) 2024. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Last updated
2024-07-18
Database
ProQuest One Academic