
Abstract

As artificial intelligence (AI) systems increasingly shape how humans interact with digital environments, the need for transparency, security, and robustness in intelligent decision making has become critical. This thesis explores how explainable and secure AI techniques can be integrated into modern human-computer interaction (HCI) systems to enhance trust, resilience, and alignment with human operators.

We present three related studies, each addressing a distinct challenge in the design of human-centered AI. First, we apply explainable AI (XAI) methods, specifically Local Interpretable Model-Agnostic Explanations (LIME), to deep learning (DL)-based CAPTCHA solvers. By interpreting model attention patterns, we uncover exploitable weaknesses in text-based CAPTCHA designs and propose improvements aimed at making human verification systems more transparent.
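
As an illustration of this kind of probe, the sketch below applies LIME's image explainer to a CAPTCHA classifier. The loaders (`load_captcha_model`, `load_rgb`) are hypothetical placeholders rather than artifacts from the thesis; the LIME calls themselves follow the published `lime` package API.

```python
# Minimal sketch: probing a CAPTCHA-solving CNN with LIME's image explainer.
# load_captcha_model() and load_rgb() are hypothetical placeholders, not
# taken from the thesis; the lime_image calls are the real package API.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

model = load_captcha_model("captcha_model.h5")  # hypothetical loader
image = load_rgb("captcha.png")                 # hypothetical loader, HxWx3 uint8

def predict_fn(images):
    # LIME passes a batch of perturbed copies of the image and expects
    # class probabilities back; we assume a single-character softmax head.
    return model.predict(images.astype("float32") / 255.0)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=1, hide_color=0, num_samples=1000
)

# Highlight the superpixels that most support the predicted character.
# Strokes that are consistently highlighted across samples indicate the
# design features a solver can exploit.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(img / 255.0, mask)
```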

Second, we introduce a unified framework for evaluating machine learning (ML) robustness under structured data poisoning attacks. We assess model degradation across traditional classifiers, deep neural networks, Bayesian hybrid models, and large language models (LLMs), using attacks such as label flipping, data corruption, and adversarial insertion. By incorporating LIME into our evaluation process, we move beyond aggregate accuracy scores to uncover attribution drift and internal failure patterns that are vital for building resilient AI systems.
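
A minimal, self-contained sketch of one attack from this family, label flipping, is shown below, with LIME's tabular explainer used to surface attribution drift. The dataset, flip rate, and classifier are illustrative stand-ins, not the thesis's exact experimental configuration.

```python
# Label-flipping sketch: train the same classifier on clean and poisoned
# labels, compare accuracy, then compare LIME attributions on one test point.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def flip_labels(labels, rate):
    labels = labels.copy()
    idx = rng.choice(len(labels), size=int(rate * len(labels)), replace=False)
    labels[idx] = 1 - labels[idx]   # binary task: invert the chosen labels
    return labels

clean = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
poisoned = RandomForestClassifier(random_state=0).fit(X_tr, flip_labels(y_tr, 0.2))
print("clean acc:", clean.score(X_te, y_te))
print("poisoned acc:", poisoned.score(X_te, y_te))

# Attribution drift: the same instance explained under both models. Large
# shifts in the top-weighted features flag internal damage that a raw
# accuracy number can hide.
explainer = LimeTabularExplainer(X_tr, mode="classification")
for name, m in (("clean", clean), ("poisoned", poisoned)):
    exp = explainer.explain_instance(X_te[0], m.predict_proba, num_features=5)
    print(name, dict(exp.as_list()))
```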

Third, we propose a justification generation system powered by LLMs for real-time automation. Using the Tennessee Eastman Process (TEP) dataset, we fine-tune a compact instruction-tuned model (FLAN-T5) to produce natural-language explanations from structured sensor data. The results show that lightweight LLMs can be embedded in operator dashboards to deliver interpretable reasoning, enhance traceability, and support oversight in safety-critical settings.
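
The generation step could be sketched as below with Hugging Face `transformers`; the prompt template, sensor field names, and base checkpoint (`google/flan-t5-small`) are assumptions standing in for the fine-tuned model the thesis describes.

```python
# Sketch: serialize one TEP-style sensor snapshot into an instruction prompt
# and decode a natural-language justification from a FLAN-T5 checkpoint.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

# Illustrative sensor snapshot; field names and values are assumptions,
# not taken from the thesis's preprocessing.
reading = {
    "reactor_pressure_kPa": 2705.0,
    "reactor_temp_C": 122.9,
    "separator_level_pct": 50.0,
    "detected_fault": "IDV(4)",
}
prompt = (
    "Explain the detected fault to the plant operator in plain language. "
    f"Sensor readings: {reading}"
)

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```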

Together, these studies outline a framework for building AI systems that are not only capable but also transparent, secure, and human-aligned. This work advances the field of human-centered AI by emphasizing interpretability and robustness as foundational elements of future interactive intelligent systems.

Details

Title
Advancing Human-Computer Interaction Systems Through Explainable and Secure AI Integration
Number of pages
98
Publication year
2025
Degree date
2025
School code
0156
Source
MAI 87/3(E), Masters Abstracts International
ISBN
9798293805839
Committee member
Kim, Marina E.; Hu, Wen-Chen
University/institution
The University of North Dakota
Department
Computer Science
University location
United States -- North Dakota
Degree
M.S.
Source type
Dissertation or Thesis
Language
English
Document type
Dissertation/Thesis
Dissertation/thesis number
32171113
ProQuest document ID
3246414953
Document URL
https://www.proquest.com/dissertations-theses/advancing-human-computer-interaction-systems/docview/3246414953/se-2?accountid=208611
Copyright
Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.
Database
ProQuest One Academic