Abstract

With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. Here we describe the results of this experiment. First, we summarize global moral preferences. Second, we document individual variations in preferences, based on respondents' demographics. Third, we report cross-cultural ethical variation, and uncover three major clusters of countries. Fourth, we show that these differences correlate with modern institutions and deep cultural traits. We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics. All data used in this article are publicly available.

Details

Title
The Moral Machine experiment
Author
Awad, Edmond 1; Dsouza, Sohan 1; Kim, Richard 1; Schulz, Jonathan 2; Henrich, Joseph 2; Shariff, Azim; Bonnefon, Jean-François; Rahwan, Iyad

1 The Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
2 Department of Human Evolutionary Biology, Harvard University, Cambridge, MA, USA
Pages
59-64, 64A-64K
Section
ARTICLE
Publication year
2018
Publication date
Nov 1, 2018
Publisher
Nature Publishing Group
ISSN
0028-0836
e-ISSN
1476-4687
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2136863808
Copyright
Copyright Nature Publishing Group Nov 1, 2018