
Abstract

This study introduces the Pixel-Level Interpretability (PLI) model, a novel framework designed to address critical limitations in medical imaging diagnostics by enhancing model transparency and diagnostic accuracy. The primary objective is to evaluate PLI’s performance against Gradient-Weighted Class Activation Mapping (Grad-CAM) and to demonstrate fine-grained interpretability and improved localization precision. The methodology leverages the VGG19 convolutional neural network architecture and three publicly available COVID-19 chest radiograph datasets comprising over 1000 labeled images, which were preprocessed through resizing, normalization, and augmentation to ensure robustness and generalizability. The experiments focused on key performance metrics, including interpretability, structural similarity (SSIM), diagnostic precision, mean squared error (MSE), and computational efficiency. The results demonstrate that PLI significantly outperforms Grad-CAM in all measured dimensions. PLI produced detailed pixel-level heatmaps with higher SSIM scores, lower MSE, and faster inference times, showing that it can provide granular insights into localized diagnostic features while maintaining computational efficiency. In contrast, Grad-CAM’s explanations often lacked the granularity required for clinical reliability. By integrating fuzzy logic to enhance visual and numerical explanations, PLI delivers interpretable outputs that align with clinical expectations, enabling practitioners to make informed decisions with greater confidence. This work establishes PLI as a robust tool for bridging gaps in AI model transparency and clinical usability. By addressing the challenges of interpretability and accuracy simultaneously, PLI advances the integration of AI in healthcare and lays a foundation for broader applications in other high-stakes domains.
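The abstract names SSIM and MSE as the metrics used to compare explanation heatmaps against reference annotations. The following is a minimal sketch of how such a comparison could be scored, assuming scikit-image and NumPy are available; score_heatmap and the placeholder arrays are illustrative inventions, not code from the paper.

# Hedged sketch: scoring an explanation heatmap against a reference
# annotation with SSIM and MSE, the two similarity metrics named in the
# abstract. Names here are illustrative, not from the paper's codebase.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def score_heatmap(heatmap: np.ndarray, reference: np.ndarray) -> dict:
    """Score a saliency heatmap against a reference mask.

    Both inputs are assumed to be 2-D float arrays normalized to [0, 1]
    (e.g., 224x224 to match VGG19's input resolution).
    """
    # SSIM: higher is better; data_range is required for float inputs.
    similarity = ssim(heatmap, reference, data_range=1.0)
    # MSE: lower is better.
    mse = float(np.mean((heatmap - reference) ** 2))
    return {"ssim": similarity, "mse": mse}

# Random placeholders stand in for real PLI and Grad-CAM heatmaps.
rng = np.random.default_rng(0)
pli_map = rng.random((224, 224))
grad_cam_map = rng.random((224, 224))
reference = rng.random((224, 224))
print(score_heatmap(pli_map, reference))
print(score_heatmap(grad_cam_map, reference))

Under this setup, the paper's reported result would correspond to the PLI map yielding a higher SSIM and a lower MSE than the Grad-CAM map against the same reference.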

Details

Title
Advancing AI Interpretability in Medical Imaging: A Comparative Analysis of Pixel-Level Interpretability and Grad-CAM Models
Volume
7
Issue
1
First page
12
Publication year
2025
Publication date
2025
Publisher
MDPI AG
Place of publication
Basel
Country of publication
Switzerland
e-ISSN
2504-4990
Source type
Scholarly Journal
Language of publication
English
Document type
Journal Article
Publication history
Online publication date
2025-02-06
Milestone dates
2024-10-11 (Received); 2025-01-28 (Accepted)
First posting date
2025-02-06
ProQuest document ID
3181640284
Document URL
https://www.proquest.com/scholarly-journals/advancing-ai-interpretability-medical-imaging/docview/3181640284/se-2?accountid=208611
Copyright
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Last updated
2025-11-17
Database
2 databases
  • Coronavirus Research Database
  • ProQuest One Academic