KEYWORDS
Algorithms, Algorithm Aversion, Algorithm Adoption, Task Objectiveness, Human-likeness, Trust
The rise of algorithms
Algorithms - sets of steps that a computer follows to perform certain tasks - are increasingly entering consumers' everyday lives. Thanks to the rapid progress in the field of artificial intelligence, algorithms are able to understand and produce natural language and learn quickly from experience. They can accomplish an increasingly comprehensive list of tasks, from diagnosing some complex diseases to driving cars and providing legal advice. Algorithms can even perform seemingly subjective tasks such as detecting emotions in facial expressions and tones of voice. While many algorithms can outperform even expert humans, many consumers remain skeptical: Should they rely more on humans or on algorithms? According to previous findings, the default option is to rely on humans, even when doing so results in objectively worse outcomes. However, our research provides insight into when and why consumers are likely to use algorithms, and how marketers can increase their use.
Consumers' algorithm skepticism
One reason why consumers have ambivalent feelings toward algorithms is related to the kind of abilities consumers typically associate with algorithms. Consumers tend to believe that machines lack fundamentally human capabilities that are emotional or intuitive in nature. While capabilities such as logic and rationality are seen as something humans and machines have in common, machines are not perceived to be human-like when it comes to affective or emotional aspects. Therefore, consumers often assume that algorithms will be less effective at tasks which humans approach with intuition or emotions.
As beliefs about a technology's effectiveness are fundamental determinants of its adoption, consumers tend to prefer humans in such cases. Whether or not consumers trust algorithms depends on the nature of the task to be performed, and also on the way the algorithm itself is presented. According to our research, framing both the task and the algorithm in appropriate ways can foster adoption of and trust in algorithms.
Trust in algorithms depends on the characteristics of the task
Familiarity, the scope of consequences, and the perceived objectiveness of a task are important determinants of algorithm adoption by consumers. In general, consumers tend to rely more on algorithms they are already familiar with. For instance, consumers find algorithm-based movie recommendations on Netflix quite convenient, and they routinely rely on algorithms for directions on their smartphones. In general, past experience with algorithms increases trust and use.
Some tasks are much more consequential than others, such as diagnosing or treating a disease. Performing such tasks poorly has far more serious consequences than performing less critical tasks poorly, and consumers seem to be less willing to trust and rely on algorithms when the stakes are higher.
The main focus of this research was a third characteristic: the perceived objectiveness of a task, a quality that can be actively managed. The series of studies shows that consumers trust algorithms more for objective tasks that involve quantifiable and measurable facts than for subjective tasks, which are open to interpretation and based more on personal opinion or intuition. Objective tasks, typically associated with more "cognitive" abilities, are thus entrusted significantly more to algorithms than tasks perceived as subjective, which are typically associated with more "emotional" abilities. For instance, consumers perceive data analysis or giving directions as very objective - and consider algorithms superior to expert humans for performing such tasks - while the opposite is true for tasks like hiring employees or recommending romantic partners.
Importantly, this research also shows that perceived task objectiveness is malleable. Re-framing a task like recommending romantic partners as actually benefiting from quantification makes the task seem more objective. This in turn increases consumers' willingness to use algorithms for that task.
Trust also depends on how the algorithm is perceived
As mentioned earlier, consumers believe in the cognitive capabilities of algorithms, though not in the "soft skills" that humans possess, even if this belief is becoming increasingly inaccurate. With the progress in AI, algorithms are increasingly capable of performing tasks typically associated with subjectivity and emotion. Machines can, for instance, already create highly valued paintings, write compelling poetry and music, predict which songs will be hits, and even accurately identify human emotion from facial expressions and tone of voice. Even though algorithms may accomplish these tasks using very different means than humans do, the fact that they have such capabilities makes them seem less distinct from humans. Making algorithms seem more human-like when it comes to these soft skills could therefore be a means to reduce algorithm aversion and encourage use, especially for tasks that are perceived as less objective.
How to encourage trust in and use of algorithms
Given that algorithms offer enormous potential for improving outcomes for both consumers and companies, encouraging their adoption can be in both parties' best interest. Our results demonstrate that the following interventions can nudge consumers and managers toward increased reliance on algorithms and better decisions.
* Provide evidence that algorithms work
One of the most intuitive approaches for increasing consumers' willingness to use algorithms is to provide them with empirical evidence of the algorithm's superior performance relative to humans. However, when the task is perceived as subjective, this might not be convincing enough. Experiments indicate that consumers are less likely to believe in algorithmic superiority over human judgment for such tasks, even when provided with supporting evidence. In such cases, additional interventions are necessary.
* Make the task seem more objective
Given that consumers trust in the cognitive capabilities of algorithms, another way to increase trust is to demonstrate that these capabilities are relevant for the task in question. This might be particularly useful for subjective tasks. In our studies, we found that algorithmic movie recommendations and recommendations for romantic partners were perceived as being much more reliable when the task framing emphasized how helpful quantitative analysis could be relative to intuition for those tasks. The results demonstrated that the perceived objectiveness of a given task is indeed malleable. Perceived objectiveness can be increased and impacts the perceived effectiveness of algorithms as well as trust in the algorithm for that task. A practical marketing intervention can therefore be used to increase trust in and use of algorithms for tasks that are typically seen as subjective.
* Present algorithms as more human-like
The third intervention we found to be useful was making algorithms seem more human-like, specifically along the affective or emotional dimensions of humanness. Figure 2 shows that increasing awareness of algorithms' affective human-likeness by explaining that algorithms can detect and understand human emotions encourages adoption of and trust in algorithms for subjective tasks. Although actual reliance on algorithms is typically lower when the task is seen as subjective, this effect can be eliminated by providing real examples of algorithms with human-like abilities.
While the general trend is clearly toward an increased use of algorithms in many domains of our private and corporate lives, the pace at which they are adopted - as well as the areas where they will be adopted first - depends on several factors. Managers face a balancing challenge: while increasing the capabilities of algorithmic products and services in subjective domains, they must simultaneously address consumers' and decision-makers' beliefs that algorithms might be less effective than humans at those tasks. Our results suggest several ways to reduce skepticism, increase trust, and smooth the transition of algorithms into our future lives.
BOX 1
An investigation of trust in algorithms
In a series of six experiments with over 56,000 participants we investigated what makes consumers rely on algorithms. We found that consumers tended to rely on algorithms for objective, less consequential tasks and for tasks they already had experience with. Further, we found ways to encourage reliance on algorithms.
Subjective tasks are entrusted to humans more than to machines
In one experiment we found that consumers are equally likely to click on ads for algorithm-based and human-based financial advice. For dating advice, which is perceived as more subjective, click rates for the algorithm-based option were in contrast significantly lower than for human-based advice (see Figure 1).
FURTHER READING
Castelo, N.; Bos, M. W. and Lehmann, D. (2019): "Task-Dependent Algorithm Aversion", Journal of Marketing Research, Vol. 56(5), 809-825.
Logg, J.; Minson, J. and Moore, D. (2019): "Algorithm Appreciation: People Prefer Algorithmic to Human Judgment", Organizational Behavior and Human Decision Processes, Vol. 151, 90-103.
Longoni, C.; Bonezzi, A. and Morewedge, C. (2019): "Resistance to Medical Artificial Intelligence", Journal of Consumer Research, forthcoming.
© 2019. This work is published under http://creativecommons.org/licenses/by-nc-nd/3.0 (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Details
1 Professor of Marketing, University of Alberta, Edmonton, AB, Canada
2 Senior Research Scientist, Snap Inc., Santa Monica, CA, USA
3 George E. Warren Professor of Business, Columbia University, New York, NY, USA