Abstract

Generative and general-purpose AI systems stand poised to reshape longstanding information infrastructures and professions, ranging from search to social media to online journalism. Yet questions surrounding subtle biases, misinforming output, and system reliability and transparency (epistemic risks related to the way knowledge is encoded and disseminated) have followed these technologies since their inception. Without strategies for understanding and managing the risks they pose, general-purpose models may degrade the reliability of the information ecosystem and introduce hazards for the individuals and institutions deploying them. This dissertation introduces methods to understand epistemic risks in generative and general-purpose AI, along with approaches to responsibly deploy these systems in the presence of inevitable epistemic risk.

Concretely, this dissertation develops three approaches to epistemic risk in generative and general-purpose AI. First, I introduce computational approaches to identifying both the manifestations of epistemic risks, such as bias and misrepresentation, and their underlying causes, such as the scale of a model’s pretraining dataset and the unanticipated biases present in high-quality media data such as online newspaper articles. Second, I introduce novel design frameworks that account for epistemic risk in generative models, addressing the need for information integrity both among organizations engaged in data-driven knowledge work and among users in interpersonal communication online. Finally, I introduce transparency-maximizing approaches to mitigate the heightened epistemic risk of using generative models served over black-box APIs, including an approach that customizes small open models on consumer-grade GPUs, as well as a context-sensitive approach to the adoption of open and proprietary models that accounts for the needs of organizations engaged in human-centered data science work. Taken together, these approaches point toward a future for generative and general-purpose AI that values reliability and information integrity.

Details

Title
Approaches to Epistemic Risk in Generative and General-Purpose AI
Number of pages
306
Publication year
2025
Degree date
2025
School code
0250
Source
DAI-A 87/1(E), Dissertation Abstracts International
ISBN
9798288835780
Committee member
Mitra, Tanushree
University/institution
University of Washington
Department
Information School
University location
United States -- Washington
Degree
Ph.D.
Source type
Dissertation or Thesis
Language
English
Document type
Dissertation/Thesis
Dissertation/thesis number
31937497
ProQuest document ID
3230313384
Document URL
https://www.proquest.com/dissertations-theses/approaches-epistemic-risk-generative-general/docview/3230313384/se-2?accountid=208611
Copyright
Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.
Database
ProQuest One Academic