AI systems are increasingly used to support high-stakes decisions in domains such as education, healthcare, finance, and employment. While AI-based Decision Support Systems (DSS) promise efficiency by reducing cognitive load and analyzing complex data, their application in human-centered domains can undermine stakeholder agency, produce misaligned outcomes, and overlook broader societal goals. A key limitation is that these systems often rely on pattern recognition over observed behavior rather than explicitly modeling how experts reason through uncertainty, competing priorities, and contextual trade-offs. As a result, they struggle to support decisions whose quality depends not only on optimizing a single metric but also on aligning with stakeholder values and following principled reasoning processes.
This thesis introduces a decision-theoretic framework for designing AI-based DSS that extends data-driven AI models such as generative and statistical learning systems to explicitly model how domain experts structure problems, weigh competing values, and anticipate long-term impacts. The framework draws from decision analysis and multi-attribute utility theory (MAUT) to represent and balance competing objectives, incorporates Bayesian belief updating to reason over uncertainty in experts’ latent mental states, and uses structured decision representations to connect observed actions to unobservable preferences and principles.
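As a minimal illustration of these two ingredients (a sketch only; the exact formulation developed in the thesis may differ), an additive multi-attribute utility over competing objectives and a Bayesian update over a latent expert mental state can be written as

\[ U(a) \;=\; \sum_{i=1}^{n} w_i \, u_i\big(x_i(a)\big), \qquad \sum_{i=1}^{n} w_i = 1,\; w_i \ge 0, \]

\[ P(\theta \mid d) \;\propto\; P(d \mid \theta)\, P(\theta), \]

where \(a\) is a candidate decision, \(x_i(a)\) its outcome on attribute \(i\), \(w_i\) the weight an expert places on that attribute, \(\theta\) the expert's latent mental state (e.g., preferences or awareness), and \(d\) the observed decision behavior used as evidence.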
To demonstrate the need to model experts' mental states in decision-making, the thesis draws on two complementary studies. The first examines preparedness assessment decisions in a large graduate Data Science program, revealing that expert choices, such as applying Bloom's taxonomy or balancing evaluation rigor against student burden, reflect complex priorities that cannot be reduced to a single metric. The second, on user consent in data personalization, shows that systems designed to learn as much as possible about users for the platform's benefit can misrepresent what users actually want when mental states such as awareness and agency are overlooked.
Building on these insights, the thesis develops a decision-theoretic framework for AI-assisted decision support that integrates and models expert mental states, reasoning processes, and stakeholder trade-offs. In the large-scale context of collaborative content assessment on Wikipedia, the thesis demonstrates a scalable approach to capturing expert principles by curating high-quality decision traces and training machine learning models that reflect expert-aligned judgments. In a quantitative evaluation using simulated assessments and a qualitative study with domain experts on graduate data science preparedness assessments, the thesis shows that, compared to existing AI baselines, decision-theoretic AI-based DSS yield more informative results while better balancing stakeholder priorities such as student burden. Expert evaluations further reveal that meaningful decision support must account for broader utilities such as agency, transparency, and learning. While the framework advances the design of principled AI collaborators, it also surfaces richer expert preferences and trade-offs that point to future opportunities for refining stakeholder-aligned AI systems.