Abstract

[...]Section 6 presents the conclusion, summarizing key insights and implications. [...]Explainability is typically unnecessary in two key scenarios: (1) when the outcomes have minimal impact and do not carry significant consequences, and (2) when the problem is well understood and the system's decisions are considered reliable, as in applications such as advertisement systems and postal code sorting (Adadi & Berrada, 2018; Doshi-Velez & Kim, 2017). [...]evaluating contexts where explanations and interpretations offer meaningful value (Adadi & Berrada, 2018). [...]explanation accuracy requires correctly representing the processes leading to the system's outputs and maintaining fidelity to the AI model's operations (Phillips et al., 2021). [...]the knowledge limits principle asserts that the system should recognize and signal when it is operating beyond its design parameters or lacks sufficient confidence in its output, safeguarding against inappropriate or unreliable responses in uncertain conditions (Phillips et al., 2021). [...]industries may favor less accurate but more interpretable models.
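As a concrete illustration of the knowledge limits principle described above, the sketch below shows one way a prediction wrapper could abstain and signal low confidence rather than return an unreliable answer. This is a minimal sketch, not the paper's implementation: the scikit-learn-style predict_proba interface, the predict_with_knowledge_limits name, and the 0.8 threshold are all illustrative assumptions.

```python
import numpy as np

# Hypothetical cutoff; in practice this would be calibrated per application.
CONFIDENCE_THRESHOLD = 0.8

def predict_with_knowledge_limits(model, x):
    """Return the model's prediction, or abstain when confidence is low.

    `model` is assumed to be any classifier exposing a scikit-learn-style
    predict_proba method; the threshold and the abstention signal are
    illustrative choices, not part of the cited framework.
    """
    probs = model.predict_proba(np.asarray(x).reshape(1, -1))[0]
    confidence = float(np.max(probs))
    if confidence < CONFIDENCE_THRESHOLD:
        # Signal that the system is operating beyond its reliable limits
        # instead of silently returning an uncertain answer.
        return {"prediction": None,
                "confidence": confidence,
                "status": "abstained: below confidence threshold"}
    return {"prediction": int(np.argmax(probs)),
            "confidence": confidence,
            "status": "ok"}
```

In a deployed system, the abstention signal would typically be routed to a human reviewer or a fallback process, which is the safeguard against inappropriate responses that the principle calls for.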

Details

Title
Human Factors Engineering in Explainable AI: Putting People First
Pages
313-322
Publication year
2025
Publication date
Mar 2025
Publisher
Academic Conferences International Limited
Place of publication
Reading
Country of publication
United Kingdom
Source type
Conference Paper
Language of publication
English
Document type
Conference Proceedings
ProQuest document ID
3202191420
Document URL
https://www.proquest.com/conference-papers-proceedings/human-factors-engineering-explainable-ai-putting/docview/3202191420/se-2?accountid=208611
Copyright
Copyright Academic Conferences International Limited 2025
Last updated
2025-05-10
Database
ProQuest One Academic