Contents
- Abstract
- The Rise of Artificial Intelligence
- Moral Outrage
- Motivation
- The Present Research
- Study 1: Perceived Prejudiced Motivation
- Method
- Participants
- Procedure
- Assessing Perceived Discrimination
- Assessing Perceptions of Objectivity
- Assessing Perceived Prejudiced Motivation
- Results
- Discussion and Replication
- Study 2: Algorithmic Outrage Deficit
- Method
- Participants
- Procedure
- Results
- Discussion and Replication
- Study 3: Outrage at the Company
- Method
- Participants
- Procedure
- Agent Introduction
- Measurement of Judgments Prediscrimination
- Outrage
- Permissibility
- Discrimination Manipulation
- Measurement of Judgments Postdiscrimination
- Outrage, Perceived Motivation
- Fairness
- Results
- Motivation, Permissibility, Fairness, and Math Ability
- Moral Outrage
- Mediation by Perceived Prejudiced Motivation
- Discussion
- Study 4: Positive and Negative Outcomes
- Method
- Participants
- Procedure
- Assessing Positive and Negative Evaluations of the Companies
- Manipulation and Attention Checks
- Results
- Manipulation Check
- Evaluation of Company
- Discussion
- Studies 5–7: Potential Moderators of the Algorithmic Outrage Deficit
- Study 5: Tech Workers
- Method
- Participants
- Procedure and Measures
- Assessing Outrage
- Additional Measures
- Knowledge About AI
- Results
- Discussion
- Study 6: Manipulating Programmers
- Method
- Participants
- Procedure
- Results
- Manipulation Check—Attribution of Prejudiced Motivation
- Moral Outrage
- Discussion
- Study 7: Anthropomorphism
- Method
- Participants
- Procedure
- Assessing Likeness as a Manipulation Check
- Assessing Perceived Prejudiced Motivation and Moral Outrage
- Results
- Manipulation Check
- Perceived Prejudiced Motivation and Moral Outrage
- Mediation
- Discussion
- Study 8: Perceived Legal Liability
- Method
- Participants
- Procedure
- Assessing Liability
- Results
- Attribution of Prejudiced Motivation
- Liability
- Mediation
- Discussion
- General Discussion
- Implications
- Limitations and Future Directions
- Concluding Remarks
Abstract
Companies and governments are using algorithms to improve decision-making for hiring, medical treatments, and parole. Algorithms hold promise for overcoming human biases in decision-making, but they frequently make discriminatory decisions. Media coverage suggests that people are morally outraged by algorithmic discrimination, but here we examine whether people are less outraged by algorithmic discrimination than by human discrimination. Eight studies test this algorithmic outrage deficit hypothesis in the context of gender discrimination in hiring practices across diverse participant groups (online samples, a quasi-representative sample, and a sample of tech workers). We find that people are less morally outraged by algorithmic (vs. human) discrimination and are less likely to hold the organization responsible. The algorithmic outrage deficit is driven by the reduced attribution of prejudiced motivation to algorithms....