Abstract
Medical image segmentation is a critical component of clinical practice, facilitating accurate diagnosis, treatment planning, and disease monitoring. However, existing methods, often tailored to specific modalities or disease types, lack generalizability across the diverse spectrum of medical image segmentation tasks. Here we present MedSAM, a foundation model designed to bridge this gap by enabling universal medical image segmentation. The model is developed on a large-scale medical image dataset of 1,570,263 image-mask pairs, covering 10 imaging modalities and over 30 cancer types. We conduct a comprehensive evaluation on 86 internal validation tasks and 60 external validation tasks, demonstrating better accuracy and robustness than modality-wise specialist models. By delivering accurate and efficient segmentation across a wide spectrum of tasks, MedSAM holds significant potential to expedite the evolution of diagnostic tools and the personalization of treatment plans.
Segmentation is a fundamental task in medical image analysis. Here the authors present a deep learning model for efficient and accurate segmentation across a wide range of medical imaging modalities and anatomies.
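As context for readers who want to try the model, below is a minimal inference sketch. It assumes the released MedSAM checkpoint loads into the open-source `segment-anything` ViT-B architecture via `sam_model_registry` and is prompted with a bounding box through `SamPredictor`; the checkpoint path, image file, and box coordinates are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch, assuming MedSAM weights are compatible with the
# segment-anything SamPredictor API; paths and coordinates are hypothetical.
import numpy as np
from skimage import io
from segment_anything import sam_model_registry, SamPredictor

# Load MedSAM weights into SAM's ViT-B architecture (hypothetical path).
sam = sam_model_registry["vit_b"](checkpoint="medsam_vit_b.pth")
sam.to("cuda")
predictor = SamPredictor(sam)

# Read a 2D slice and make it 3-channel uint8, as the predictor expects.
image = io.imread("ct_slice.png")
if image.ndim == 2:
    image = np.repeat(image[..., None], 3, axis=-1)
predictor.set_image(image)

# Prompt with a bounding box around the target structure (x0, y0, x1, y1).
box = np.array([120, 80, 260, 210])
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
binary_mask = masks[0]  # boolean mask with the input slice's height/width
```

The bounding-box prompt lets one model serve many targets: the same weights segment whatever structure the box encloses, which is how a single foundation model can cover heterogeneous modalities and anatomies.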
Affiliations
1 University Health Network, Peter Munk Cardiac Centre, Toronto, Canada; University of Toronto, Department of Laboratory Medicine and Pathobiology, Toronto, Canada; Vector Institute, Toronto, Canada
2 Western University, Department of Computer Science, London, Canada
3 University Health Network, Peter Munk Cardiac Centre, Toronto, Canada
4 New York University, Tandon School of Engineering, New York, USA
5 Yale University, Department of Electrical Engineering, New Haven, USA
6 University Health Network, Peter Munk Cardiac Centre, Toronto, Canada; University of Toronto, Department of Laboratory Medicine and Pathobiology, Toronto, Canada; Vector Institute, Toronto, Canada; University of Toronto, Department of Computer Science, Toronto, Canada; UHN AI Hub, Toronto, Canada