
Abstract

As artificial intelligence (AI) is increasingly integrated into scientific research, explainability has become a cornerstone for ensuring reliability and innovation in discovery processes. This review offers a forward-looking synthesis of explainable AI (XAI)-based research paradigms, encompassing small domain-specific models, large language models (LLMs), and agent-based large-small model collaboration. For domain-specific models, we introduce a knowledge-oriented taxonomy that categorizes methods as knowledge-agnostic, knowledge-based, knowledge-infused, or knowledge-verified, emphasizing the balance between domain knowledge and innovative insights. For LLMs, we examine three strategies for integrating domain knowledge (prompt engineering, retrieval-augmented generation, and supervised fine-tuning) along with advances in explainability, including local, global, and conversation-based explanations. We also envision future agent-based model collaborations within automated laboratories, stressing the need for context-aware explanations tailored to research goals. Additionally, we discuss the distinctive characteristics and limitations of both explainable small domain-specific models and LLMs in the realm of scientific discovery. Finally, we highlight methodological challenges, potential pitfalls, and the need for rigorous validation to ensure that XAI plays a transformative role in accelerating scientific discovery and reshaping research paradigms.

Full text


© The Author(s) 2025. This work is published under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (http://creativecommons.org/licenses/by-nc-nd/4.0/).