Abstract

Many real-world problems are modeled as graphs that represent relationships between entities. Graph Neural Networks (GNNs) are a powerful variant of neural networks that combine vertex and edge attributes with node neighborhood structures to infer properties of graph data. Message Passing Neural Networks (MPNNs), a common type of GNN, leverage the expressiveness of the first-order Weisfeiler-Leman (1-WL) algorithm for learning representations for classification tasks. However, 1-WL has known limits in expressiveness, and these limits in turn constrain GNN performance. Separately, eXplainable Artificial Intelligence (XAI) is a sub-field of Machine Learning focused on addressing the “black-box” nature of neural networks. Several projects, such as GNNExplainer, have explored providing post-hoc explanations for GNN predictions. The present work combines XAI methods with graph mining to develop a computational framework that improves GNN performance.
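To make the 1-WL connection concrete, the following is a minimal sketch of 1-WL color refinement, the iterative procedure whose expressiveness bounds that of MPNNs. The graph, function name, and round count are illustrative choices, not taken from the dissertation.

```python
def wl_refine(adj, rounds=3):
    """1-WL color refinement. adj: dict mapping node -> list of neighbors."""
    # Start with a uniform color for every node.
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        # 1-WL update rule: a node's new color is determined by its own
        # color together with the multiset of its neighbors' colors.
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # Compress signatures into small integer color ids.
        palette = {}
        for v, sig in sorted(signatures.items()):
            if sig not in palette:
                palette[sig] = len(palette)
        colors = {v: palette[signatures[v]] for v in adj}
    return colors

# A path on 4 nodes: the two endpoints receive one color, the two
# interior nodes another, so 1-WL distinguishes the two roles.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(wl_refine(path))  # -> {0: 0, 1: 1, 2: 1, 3: 0}
```

An MPNN layer performs an analogous neighborhood aggregation with learned functions, which is why node pairs that 1-WL cannot separate are also indistinguishable to a standard MPNN.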

The following are the main themes of this work:

(1) Explanation Enhanced Graph Learning (EEGL): a new computational framework that addresses the performance limitations of GNNs by annotating the input with relevant local structural information derived from explanation artifacts and graph mining. Through experiments, we show that data annotated in this way yields higher model performance.

(2) Noise and learnability: we study four different types of noise in our synthetic data and their effects on GNN learnability. We then show that EEGL mitigates these adverse effects, improving performance even on noisy data.

(3) GNNs as logical classifiers: logical characterization provides a structured way to analyze and define the expressiveness of GNN models, e.g., “learning a query,” meaning learning a node classification problem in a unified manner across all graphs using a single logic formula. Through experiments, we examine the inductive learning characteristics of GNNs and the models’ ability to generalize the same logical rule across structurally diverse graphs.

(4) A philosophy of science perspective on Explainable AI: we briefly explore Explainable AI from the perspective of the philosophy of science and discuss a high-level framework called ExpSpec for contextualizing and defining a set of requirements for explanations.
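The annotation step in theme (1) can be illustrated with a small sketch: extend each node's feature vector with counts of local structural patterns. EEGL mines the relevant patterns from explanation artifacts; here, as stand-ins, we use two simple illustrative statistics (degree and incident triangle count), so the helper names and the example graph are assumptions, not the dissertation's actual pipeline.

```python
def annotate(adj, features):
    """Append per-node structural counts to existing feature vectors.

    adj: dict mapping node -> list of neighbors.
    features: dict mapping node -> list of feature values.
    """
    out = {}
    for v, nbrs in adj.items():
        degree = len(nbrs)
        # Count triangles through v: pairs of v's neighbors that are
        # themselves adjacent.
        tris = sum(
            1
            for i, u in enumerate(nbrs)
            for w in nbrs[i + 1:]
            if w in adj[u]
        )
        out[v] = features[v] + [degree, tris]
    return out

# Triangle 0-1-2 with a pendant node 3 attached to node 2.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
feats = {v: [1.0] for v in adj}
print(annotate(adj, feats))
# -> {0: [1.0, 2, 1], 1: [1.0, 2, 1], 2: [1.0, 3, 1], 3: [1.0, 1, 0]}
```

Because 1-WL-bounded message passing cannot by itself detect such patterns in some graphs, injecting them as input features is one way annotation of this kind can raise model performance.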

Details

Business indexing term
1010268
Title
A Framework for Enhancing Graph Neural Networks Using Explanations
Number of pages
168
Publication year
2025
Degree date
2025
School code
0799
Source
DAI-B 86/12(E), Dissertation Abstracts International
ISBN
9798280767836
Committee member
DasGupta, Bhaskar; Sun, Xiaorui; Medya, Sourav; Horváth, Tamás
University/institution
University of Illinois at Chicago
Department
Computer Science
University location
United States -- Illinois
Degree
Ph.D.
Source type
Dissertation or Thesis
Language
English
Document type
Dissertation/Thesis
Dissertation/thesis number
32154484
ProQuest document ID
3222438052
Document URL
https://www.proquest.com/dissertations-theses/framework-enhancing-graph-neural-networks-using/docview/3222438052/se-2?accountid=208611
Copyright
Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.
Database
ProQuest One Academic