Abstract: This paper examines the integration of human factors engineering into Explainable Artificial Intelligence (XAI) to develop AI systems that are both human-centered and technically robust. The increasing use of AI technologies in high-stakes domains, such as healthcare, finance, and emergency response, underscores the urgent need for explainability, trust, and transparency. However, the field of XAI faces critical challenges, including the absence of standardized definitions and evaluation frameworks, which hinder the assessment and effectiveness of explainability techniques. Human factors engineering, an interdisciplinary field focused on optimizing human-system interactions, offers a comprehensive framework to address these challenges. By applying principles such as user-centered design, error management, and system adaptability, human factors engineering ensures AI systems align with human cognitive abilities and behavioral patterns. This alignment enhances usability, fosters trust, and reduces blind reliance on AI by ensuring explanations are clear, actionable, and tailored to diverse user needs. Additionally, human factors engineering emphasizes inclusivity and accessibility, promoting equitable AI systems that serve varied populations effectively. This paper explores the intersection of HFE and XAI, highlighting their complementary roles in bridging algorithmic complexity with actionable understanding. It further investigates how human factors engineering principles address sociotechnical challenges, including fairness, accountability, and inclusivity, in AI deployment. The findings demonstrate that the integration of human factors engineering and XAI advances the creation of AI systems that are not only technologically sophisticated but also ethically aligned and user-focused.
This interdisciplinary synergy is a pathway to developing equitable, effective, and trustworthy AI solutions, fostering informed decision-making and enhancing user confidence across diverse applications.
Keywords: Artificial intelligence (AI), Explainable artificial intelligence (XAI), Human-centered artificial intelligence (HCAI), Human factors, Human factors engineering (HFE)
1. Introduction
The rapid advancements in artificial intelligence (AI) have heightened concerns surrounding explainable artificial intelligence (XAI). Existing literature highlights the lack of standardized definitions and evaluation frameworks within the field of XAI, creating challenges in assessing the effectiveness of explainability techniques (Adadi & Berrada, 2018; Karimi et al., 2020; Rudin et al., 2021). This variability complicates the ability to draw meaningful conclusions about the efficacy of different approaches to explainability (Rudin, 2019). The Defense Advanced Research Projects Agency defines XAI as systems capable of providing human users with clear explanations of their decision-making processes, outlining system strengths and limitations,...




