
Abstract

Automated processes are helping companies become more efficient at an accelerated pace. According to Burke et al. (2017), automation processes are dynamic, and such dynamism is data-driven. The insights that result from data processing enable real-time decision-making tasks that companies use to adjust their business strategies, keep their market share, and compete for more. However, those data-driven environments do not work in isolation; they are digital, data-driven setups in which connectivity and integration are key elements of collaboration.

As Daniel (2023) explained, a fundamental requirement for integrating systems is understanding how each connected system works. This understanding includes a comprehensive picture of each system’s functionality, architecture, protocols, data structures, and other components that inform the design of the integration. However, automated decision-making models based on artificial intelligence (AI) algorithms are often considered “black boxes” that lack transparency and interpretability.

Explainability is a concept derived from the EU General Data Protection Regulation, which came into force in May 2018. This law requires describing the logic behind automated decision-making processes that can affect data subjects’ interests. As Selbst and Powles (2017) suggested, AI solutions can defy human understanding because of the complexity of their models. Thus, knowing how a system works is difficult to accomplish when AI solutions are involved.

With the approval of the EU Artificial Intelligence Act by the European Parliament in March 2024, the explainability requirements initially applicable only to decision-making systems that processed personal data were extended to all AI algorithms that process data in systems categorized as high-risk (R. Jain, 2024). The EU AI Act lists new legal obligations for high-risk systems; among them are requirements that AI systems be adapted or produced to be transparent, explainable, and designed to allow human oversight.

Under the EU AI Act, high-risk systems are those that could negatively affect the safety or fundamental rights of an individual, a group of individuals, society, or the environment in general. Kempf and Rauer (2024) explained that malfunctioning essential systems could put people’s lives and health at risk or disrupt social and economic activities. Thus, according to the EU AI Act (2024), critical infrastructure systems such as water supply, gas, and electricity fall into this high-risk category.

Within the energy sector, as Niet et al. (2021) defined, the power grid’s ‘System Operators’ are those who plan, build, and maintain the electricity distribution or transmission network and provide a fair electricity market and network connections. Per Articles 13 and 14 of the EU AI Act, the system operator, or any deployer in charge of overseeing an AI system, should be trained and equipped with transparency and explainability elements to understand the capabilities and limitations of the AI solution so they can stop, confirm, or overwrite the recommendations made by such a model.

The present dissertation reports a qualitative study, beginning with exploratory research to explain the different concepts involved in the study and their relationships. Document analysis, grounded theory (GT), and triangulation were used as the primary qualitative research methods to comprehensively explain the challenges of the systems integration (SI) of AI solutions that require explainability (XAI) modules. As part of the data triangulation, informal conversations with subject matter experts were conducted to share the findings of this research and gather insights on the current state of XAI’s applicability in the energy sector.

Details

Title
Systems Integration Model for AI Solutions with Explainability Requirements
Number of pages
204
Publication year
2025
Degree date
2025
School code
0183
Source
DAI-A 86/12(E), Dissertation Abstracts International
ISBN
9798286406005
Committee members
Tanoos, James J.; Newton, Kathryne A.; Pistrui, David
University/institution
Purdue University
University location
United States -- Indiana
Degree
Ph.D.
Source type
Dissertation or Thesis
Language
English
Document type
Dissertation/Thesis
Dissertation/thesis number
32013750
ProQuest document ID
3224573470
Document URL
https://www.proquest.com/dissertations-theses/systems-integration-model-ai-solutions-with/docview/3224573470/se-2?accountid=208611
Copyright
Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.
Database
ProQuest One Academic