Abstract

Radiology is undergoing a paradigm shift from traditional single-function AI systems to sophisticated multi-agent networks capable of autonomous reasoning, coordinated decision-making, and adaptive workflow management. These agentic AI systems move beyond simple pattern recognition to encompass complex radiological workflows, including image analysis, report generation, clinical communication, and care coordination. While multi-agent radiological AI promises enhanced diagnostic accuracy, improved workflow efficiency, and reduced physician burden, it simultaneously amplifies the long-standing "black box" problem. Traditional explainable AI methods, which are adequate for understanding isolated diagnostic predictions, fail when applied to multi-step reasoning processes involving multiple specialized agents coordinating across imaging interpretation, clinical correlation, and treatment planning. This paper examines how agentic AI systems in radiology create "compound opacity": layers of inscrutability arising from agent interactions and distributed decision-making processes. We analyze the autonomy–transparency paradox specific to radiological practice, in which increasing AI capability directly conflicts with the interpretability requirements essential for clinical trust and regulatory oversight. Through examination of emerging multi-agent radiological workflows, we propose frameworks for responsible implementation that preserve both diagnostic innovation and the fundamental principles of medical transparency and accountability.

Full text

© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).