1. Introduction
Intelligent personal assistants (IPAs) are personalized intelligent systems that understand voice-based requests and help users solve problems through interaction. They communicate in natural language to complete a wide variety of tasks, including getting weather forecasts, reading news and ordering products (Chattaraman et al., 2019). The capability to interact with users in natural language increases the attribution of human characteristics to IPAs (Moussawi and Koufaris, 2019). These agents feature anthropomorphic aspects such as communication, companionship, autonomy, intelligent search skills, learning and adaptation to change.
Despite the growth of IPAs designed to converse like people, few studies have explored how actual users' anthropomorphizing of these agents affects their perceptions of the interaction and their attitudes toward these conversational partners. Little is known about the psychological processes through which anthropomorphism affects IPA users. Because prior studies of anthropomorphism have focused on visual portrayals rather than linguistic features, the impact of voice and verbal representation on user attitudes is worth investigating.
This work aims to understand anthropomorphism among experienced users who have adopted IPAs in private settings, rather than among potential users. These IPAs have transitioned from being embedded in personal smartphones to stand-alone devices at home. They are evolving to recommend products and influence decision-making by understanding consumers' contexts and needs, and they collect and process users' sensitive data in increasingly invasive ways (Vitak, 2020). Given this influence on decisions and the accompanying privacy concerns, trust has become more crucial in recent years. This study therefore focuses on trust, which plays a vital role in the rapid development of human–agent interaction (Kim and Song, 2021), keeps users motivated and cooperative (Nothdurft et al., 2013), and shapes adoption and continuous usage intention (Nasirian et al., 2017).
Because only trusted social entities can interact with users to cocreate new knowledge and augment human intelligence, AI assistants' progression from cognitive tools to collaborators or coaches is in fact a progression of trust (Siddike et al., 2018). However, there is no consensus on whether perceived anthropomorphism affects trust (e.g. Moussawi et al., 2020; Natarajan and Gombolay, 2020). The fact that coproduction of services is inevitable, and that researchers have not yet examined how much users trust these assistants, calls for research on trust.