Abstract
Out-of-distribution (OOD) generalization, especially in medical settings, is a key challenge in modern machine learning that has only recently received much attention. We investigate how different convolutional pre-trained models perform on OOD test data, that is, data from domains not seen during training, using histopathology repositories attributed to different trial sites. We examine the effects of trial-site repositories, pre-training strategies, and image transformations on OOD performance. We also compare models trained entirely from scratch (i.e., without pre-training) against already pre-trained models. For pre-training on natural images, we study (1) a vanilla ImageNet pre-trained model, (2) a semi-supervised learning (SSL) model, and (3) a semi-weakly supervised learning (SWSL) model pre-trained on IG-1B-Targeted. In addition, we study a histopathology model (i.e., KimiaNet) trained on the most comprehensive histopathology dataset, TCGA. Although SSL and SWSL pre-training leads to better OOD performance than vanilla ImageNet pre-training, the histopathology pre-trained model is still the best overall. In terms of top-1 accuracy, we demonstrate that diversifying the training images through reasonable image transformations is effective in avoiding shortcut learning when the distribution shift is significant. In addition, explainable AI (XAI) techniques, which aim to produce high-quality, human-understandable explanations of AI decisions, are leveraged for further investigation.
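The evaluation protocol implied by "data from domains not seen during training" can be sketched as a leave-one-site-out split over trial sites: the model is trained on slides from all sites except one and tested on the held-out site. The sketch below illustrates this splitting logic only; the site names are hypothetical and this is not the paper's actual code.

```python
def leave_one_site_out_splits(site_ids):
    """Yield (train_sites, test_site) pairs for OOD evaluation.

    Each trial site is treated as one domain: the held-out site is
    never seen during training, so testing on it measures OOD
    generalization rather than in-distribution accuracy.
    """
    for held_out in site_ids:
        train_sites = [s for s in site_ids if s != held_out]
        yield train_sites, held_out


# Hypothetical trial-site identifiers for illustration.
sites = ["site_A", "site_B", "site_C"]
splits = list(leave_one_site_out_splits(sites))
# Produces one split per site, each with the test site excluded
# from its training set.
```

In practice each split would drive one full train/evaluate cycle, and the reported OOD metric (e.g., top-1 accuracy) is computed on the held-out site's data.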
Details
1 University of Waterloo, Kimia Lab, Waterloo, Canada (GRID:grid.46078.3d) (ISNI:0000 0000 8644 1405)
2 University of Waterloo, Kimia Lab, Waterloo, Canada (GRID:grid.46078.3d) (ISNI:0000 0000 8644 1405); Mayo Clinic, Department of Laboratory Medicine and Pathology, Rochester, USA (GRID:grid.66875.3a) (ISNI:0000 0004 0459 167X)
3 University of Waterloo, Kimia Lab, Waterloo, Canada (GRID:grid.46078.3d) (ISNI:0000 0000 8644 1405); Brock University, Engineering Department, St. Catharines, Canada (GRID:grid.411793.9) (ISNI:0000 0004 1936 9318)
4 University of Waterloo, Kimia Lab, Waterloo, Canada (GRID:grid.46078.3d) (ISNI:0000 0000 8644 1405); Mayo Clinic, Department of Laboratory Medicine and Pathology, Rochester, USA (GRID:grid.66875.3a) (ISNI:0000 0004 0459 167X); Mayo Clinic, Rhazes Lab, Department of Artificial Intelligence and Informatics, Rochester, USA (GRID:grid.66875.3a) (ISNI:0000 0004 0459 167X)