Abstract

Medical image segmentation is a critical component of clinical practice, facilitating accurate diagnosis, treatment planning, and disease monitoring. However, existing methods, often tailored to specific modalities or disease types, lack generalizability across the diverse spectrum of medical image segmentation tasks. Here we present MedSAM, a foundation model designed to bridge this gap by enabling universal medical image segmentation. The model is developed on a large-scale medical image dataset comprising 1,570,263 image-mask pairs, covering 10 imaging modalities and over 30 cancer types. We conduct a comprehensive evaluation on 86 internal validation tasks and 60 external validation tasks, demonstrating better accuracy and robustness than modality-wise specialist models. By delivering accurate and efficient segmentation across a wide spectrum of tasks, MedSAM holds significant potential to expedite the evolution of diagnostic tools and the personalization of treatment plans.

Segmentation is a fundamental task in medical image analysis. Here the authors present a deep learning model for efficient and accurate segmentation across a wide range of medical image modalities and anatomies.

Details

Title
Segment anything in medical images
Author
Ma, Jun 1; He, Yuting 2; Li, Feifei 3; Han, Lin 4; You, Chenyu 5; Wang, Bo 6

1 University Health Network, Peter Munk Cardiac Centre, Toronto, Canada (GRID:grid.231844.8) (ISNI:0000 0004 0474 0428); University of Toronto, Department of Laboratory Medicine and Pathobiology, Toronto, Canada (GRID:grid.17063.33) (ISNI:0000 0001 2157 2938); Vector Institute, Toronto, Canada (GRID:grid.494618.6) (ISNI:0000 0005 0272 1351)
2 Western University, Department of Computer Science, London, Canada (GRID:grid.39381.30) (ISNI:0000 0004 1936 8884)
3 University Health Network, Peter Munk Cardiac Centre, Toronto, Canada (GRID:grid.231844.8) (ISNI:0000 0004 0474 0428)
4 New York University, Tandon School of Engineering, New York, USA (GRID:grid.137628.9) (ISNI:0000 0004 1936 8753)
5 Yale University, Department of Electrical Engineering, New Haven, USA (GRID:grid.47100.32) (ISNI:0000 0004 1936 8710)
6 University Health Network, Peter Munk Cardiac Centre, Toronto, Canada (GRID:grid.231844.8) (ISNI:0000 0004 0474 0428); University of Toronto, Department of Laboratory Medicine and Pathobiology, Toronto, Canada (GRID:grid.17063.33) (ISNI:0000 0001 2157 2938); Vector Institute, Toronto, Canada (GRID:grid.494618.6) (ISNI:0000 0005 0272 1351); University of Toronto, Department of Computer Science, Toronto, Canada (GRID:grid.17063.33) (ISNI:0000 0001 2157 2938); UHN AI Hub, Toronto, Canada (GRID:grid.231844.8) (ISNI:0000 0004 0474 0428)
Article number
654
Publication year
2024
Publication date
2024
Publisher
Nature Publishing Group
e-ISSN
2041-1723
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2917422361
Copyright
© The Author(s) 2024. This work is published under the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/, the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.