Abstract

Multi-panel images play an essential role in medical diagnostics and account for approximately 50% of the figures in the medical literature. These images serve as important tools for physicians, allowing various medical data for a patient (e.g., X-rays, MRIs, CT scans) to be aligned into a single consolidated image. This consolidated multi-panel image, together with its component sub-images, provides a comprehensive view of the patient's case during diagnosis. However, extracting sub-images from multi-panel images poses significant challenges for medical image retrieval systems, particularly when dealing with both regular and irregular panel layouts. To address these challenges, this paper presents a novel hybrid framework that significantly improves sub-image retrieval. The framework classifies medical images by type, applies computer vision and image processing techniques, including image projection profiles and morphological operations, and efficiently segments multi-panel images of various kinds, including those with regular and irregular layouts. The hybrid approach ensures accurate indexing and fast retrieval of sub-images by medical image retrieval systems. To validate the proposed framework, experiments were conducted on medical images from publicly available datasets, including ImageCLEFmed 2013 through ImageCLEFmed 2016. The results show improved performance over existing methods, with an accuracy of 90.50% in image type identification and accuracies of 91% and 92% in segmenting regular and irregular multi-panel images, respectively. By achieving accurate and efficient segmentation across diverse multi-panel image types, the framework demonstrates significant potential to improve the performance of medical image retrieval systems.
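
The abstract does not include implementation details, but the projection-profile idea it names can be illustrated with a minimal Python/OpenCV sketch. The sketch below splits a regular multi-panel figure along bright gutter bands; the function name `split_panels` and the parameters `gutter_thresh` and `min_gutter` are hypothetical choices for illustration, and the authors' actual pipeline (including the morphological operations and the handling of irregular layouts) is not reproduced here.

```python
import cv2
import numpy as np

def split_panels(image_path, gutter_thresh=0.95, min_gutter=5):
    """Split a regular multi-panel figure along bright gutter bands
    using horizontal and vertical projection profiles.

    Illustrative sketch only: assumes panels are separated by
    near-white gutters, as is common in published figures.
    """
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    # Normalize so white background pixels are close to 1.0.
    norm = img.astype(np.float32) / 255.0

    def gutter_centers(profile):
        # A gutter is a run of rows/columns whose mean intensity
        # stays above the threshold for at least `min_gutter` pixels;
        # the cut is placed at the center of each such run.
        is_gutter = profile > gutter_thresh
        centers, start = [], None
        for i, g in enumerate(is_gutter):
            if g and start is None:
                start = i
            elif not g and start is not None:
                if i - start >= min_gutter:
                    centers.append((start + i) // 2)
                start = None
        return centers

    row_profile = norm.mean(axis=1)   # one mean value per image row
    col_profile = norm.mean(axis=0)   # one mean value per image column

    h, w = img.shape
    row_cuts = [0] + gutter_centers(row_profile) + [h]
    col_cuts = [0] + gutter_centers(col_profile) + [w]

    # Crop each cell of the resulting grid as a candidate sub-image.
    panels = []
    for r0, r1 in zip(row_cuts, row_cuts[1:]):
        for c0, c1 in zip(col_cuts, col_cuts[1:]):
            panels.append(img[r0:r1, c0:c1])
    return panels
```

For irregular layouts, a simple global grid like this fails, which is presumably why the framework first classifies the image type and then applies morphological operations; the sketch covers only the regular-layout case.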

Full text

© 2025 Gul et al. This is an open access article distributed under the terms of the Creative Commons Attribution License: http://creativecommons.org/licenses/by/4.0/ (the "License"), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.