Purpose
The purpose of this study is to examine the effect of trust on user adoption of artificial intelligence-generated content (AIGC) based on the stimulus–organism–response (SOR) framework.
Design/methodology/approach
The authors conducted an online survey in China, which is a highly competitive AI market, and obtained 504 valid responses. Both structural equation modelling and fuzzy-set qualitative comparative analysis (fsQCA) were used to conduct data analysis.
Findings
The results indicated that perceived intelligence, perceived transparency and knowledge hallucination influence cognitive trust in platform, whereas perceived empathy influences affective trust in platform. Both cognitive trust and affective trust in platform lead to trust in AIGC. Algorithm bias negatively moderates the effect of cognitive trust in platform on trust in AIGC. The fsQCA identified three configurations leading to adoption intention.
Research limitations/implications
The main limitation is that additional factors, such as culture, were not included and need to be examined for their possible effects on trust. The implication is that generative AI platforms need to improve intelligence, transparency and empathy, and mitigate knowledge hallucination, to engender users’ trust in AIGC and facilitate their adoption.
Originality/value
Existing research has mainly used technology adoption theories such as unified theory of acceptance and use of technology to examine AIGC user behaviour and has seldom examined user trust development in the AIGC context. This research tries to fill the gap by disclosing the mechanism underlying AIGC user trust formation.
1. Introduction
Generative artificial intelligence (AI) products, such as ChatGPT, have received increasing attention around the world. Powered by large language models (LLMs) and massive data sets, ChatGPT has significantly transformed the pattern of human–machine interaction. Motivated by the success of ChatGPT, numerous generative AI products have emerged in the market, such as Google Gemini, Anthropic’s Claude, Meta’s Llama, Baidu’s ERNIE Bot and Alibaba’s Tongyi Qianwen. Built on LLMs, AI systems are becoming increasingly human-like and intelligent. They have shown powerful capabilities in natural language processing, conversation, question answering, text generation and text translation. Research has shown that generative AI exhibits tremendous potential in various fields, such as enterprise management (Talaei-Khoei et al., 2024), financial decision-making (Oehler and Horn, 2024), healthcare (Howard et al., 2023), science education (Cooper, 2023) and smart libraries (Khan et al., 2023). The widespread application of generative AI in these domains highlights its pervasive influence. In the future, generative AI is likely to penetrate gradually into every aspect of society and people’s lives.
However, despite representing a significant breakthrough in AI technology, generative AI also faces numerous challenges, such as knowledge hallucination and biases. Due to flaws in algorithms and data sources, generative AI often generates false and biased information that contradicts facts. This may undermine users’ trust in both the platform and AI-generated content (AIGC) and decrease their intention to adopt AIGC. A low adoption rate of AIGC may lead to the failure of generative AI platforms in intensely competitive environments (Lai et al., 2023). The research question, then, is how to engender user trust in AIGC. Previous research has noted the effect of trust on continuance intention of ChatGPT (Baek and Kim, 2023), user acceptance of AI virtual assistants (Xiong et al., 2024), students’ intention to use ChatGPT (Rahman et al., 2023) and tourist acceptance of ChatGPT (Xu et al., 2024). However, it has seldom explored the development mechanism underlying user trust in AIGC. This research tries to fill the gap by using the stimulus–organism–response (SOR) framework to uncover AIGC user trust formation and its effect on user adoption.
Drawing on the SOR framework, this research examined the impact of trust on user adoption of AIGC. The stimulus reflects the features of generative AI, which include perceived intelligence, perceived transparency, knowledge hallucination, perceived empathy and perceived anthropomorphism. The organism includes cognitive trust in a platform, affective trust in a platform and trust in AIGC. The response refers to the user adoption intention of AIGC. We conducted an online survey in China, a highly competitive AI market with many generative AI products, such as Baidu’s ERNIE Bot, Alibaba’s Tongyi Qianwen and Tencent’s Hunyuan. Baidu, Alibaba and Tencent are the largest search engine, e-commerce and social networking companies in China, respectively. The intense competition has been called “the battle of hundreds of LLMs”. Generative AI platforms are eager to facilitate user adoption and expand their user base. This fits well with our research purpose. The results will reveal the AIGC user trust development mechanism and its effect on user adoption. They may also help generative AI platforms adopt measures to engender user trust and promote user adoption to gain a competitive advantage.
2. Literature review
Artificial intelligence-generated content user behaviour
As an emerging form of content production, AIGC has attracted the attention of researchers. Table 1 lists a few popular LLMs around the world. Most of them have over 100 billion parameters, indicating great processing ability. They typically offer rich functions such as text-to-text, text-to-image and text-to-video generation.
Existing research has examined AIGC user behaviours, such as intermittent discontinuance, adoption and continued usage. Zhang et al. (2024) used grounded theory to find that factors such as privacy concerns and system quality influence AIGC users’ intermittent discontinuance. Mao et al. (2024) reported that information quality, user perception and technical characteristics influence AIGC user adoption intention. Pham et al. (2024) noted that perceived warmth and perceived ability influence tourists’ use of AIGC to obtain travel advice. Li (2024) drew on the unified theory of acceptance and use of technology (UTAUT) to find that perceived anxiety, perceived risk, performance expectancy and effort expectancy influence designers’ intention to use AIGC for design assistance. Similarly, Menon and Shilpa (2023) noted that performance expectancy, effort expectancy and privacy concerns affect user intention to use generative AI. Ren and Wu (2024) argued that performance expectancy, effort expectancy and individual innovativeness influence user intention to use AIGC. Ma and Huo (2023) proposed that social influence and hedonic motivation affect user adoption of generative AI. Zhao et al. (2024) integrated UTAUT2 and task-technology fit to find that performance expectancy, effort expectancy and task characteristics influence graduate students’ intention to continue using AIGC.
As evidenced by these studies, researchers have examined AIGC user behaviour based on the UTAUT and identified the effect of various factors, such as performance expectancy, effort expectancy and privacy concerns, on user adoption. However, little attention has been paid to trust development in the AIGC context. As noted earlier, issues such as knowledge hallucination and biases in AIGC may undermine user trust and adoption intention. Thus, this research will examine the effect of trust on user adoption of AIGC.
Stimulus–organism–response
The SOR framework suggests that external environmental factors (stimuli) influence an individual’s internal psychological states (organism), which in turn affect his or her behaviour (response) (Mehrabian and Russell, 1974). The SOR framework has been widely applied to examine user behaviour in information systems research. Pan and Li (2023) found that live streaming product demonstrations and advertising affect enriched psychological states, which further affect consumer behavioural intentions. Sampat and Raj (2022) argued that gratifications and personality traits affect the feeling of authenticating news, which further leads to sharing fake news on social media. Wang et al. (2023a, 2023b) found that the system quality and friendliness of online chatbots affect user satisfaction, trust and usage behaviour. Pang et al. (2024) noted that information relevance and media richness affect social network exhaustion, which in turn leads to health anxiety and COVID-19-related stress. Similarly, Yen (2024) argued that job demands and technology overload affected work stress during the COVID-19 pandemic.
Consistent with these studies, this research adopted the SOR framework to examine the effect of generative AI platform characteristics (stimuli) on user trust (organism) and adoption intention of AIGC (response). The SOR framework thus provides a useful lens to explore trust development and its effect on user adoption of AIGC. In comparison, the UTAUT focuses on the technological factors affecting user adoption of an information technology and is not appropriate for this research, which focuses on trust. In addition, this research integrated trust transference into the model and explored the transference from trust in platforms to trust in AIGC.
3. Research model and hypotheses
The research model is shown in Figure 1.
Perceived intelligence
Perceived intelligence reflects the ability of generative AI to understand natural language input from users, provide instant responses and output effective results (Moussawi et al., 2021). Previous studies have found that perceived intelligence significantly impacts users’ intention to use AI (Moussawi et al., 2023). When a generative AI platform demonstrates high intelligence during interactions with users and generates logically coherent and comprehensive information in response to user input, users may perceive the platform as professional and capable of solving problems. Consequently, users form positive evaluations towards the platform and engender cognitive trust in the platform. In contrast, lack of intelligence within a generative AI platform will affect users’ beliefs in its capabilities. Therefore, we suggest:
Perceived transparency
Perceived transparency can be defined as the extent to which a platform reveals the internal operation processes of its algorithms to users and enables them to better understand the mechanism of content generation (Diakopoulos and Koliska, 2017). Many users do not understand the operational mechanisms behind the generated contents when using generative AI platforms, which may lead to confusion and suspicion during their use. Previous studies have found that the information transparency of e-commerce platforms can reduce perceived risk (Zhou et al., 2018) and that high transparency in social networking platforms can encourage users’ self-disclosure (Pu et al., 2022). Similarly, if a generative AI platform enables users to understand more about the operation mechanism of content generation and displays more identifiable information, users will feel more in control and develop cognitive trust in the platform. Thus, we propose:
Knowledge hallucination
Knowledge hallucination means that the contents generated by the AI are untrue or unrelated to the training data set (Mo et al., 2023). The generation of hallucinated false information is mainly related to the quality of the pre-training data set and its sources. Generative AI sometimes outputs answers that seem reasonable but are incorrect or even absurd. These hallucinations are often misleading and may lead to the misuse of erroneous information when users do not examine it carefully. Such errors may lead users to doubt the quality of the generated contents and the algorithmic capabilities of the platform, which may increase their anxiety (Yang and Khan, 2023) and lower their cognitive trust in the platform. Thus, we state:
Perceived empathy
The concept of empathy originates from psychology and refers to the ability to perceive, understand and respond to the thoughts and emotions of others (Wieseke et al., 2012). Previous studies have found that customers prefer interacting with chatbots that express emotions and empathy over those that only provide product information (Liu and Sundar, 2018). This indicates that users have a strong preference for AI with empathetic capabilities. Furthermore, if generative AI shows concern for users, it will generate contents that are beneficial to them. This may engender users’ trust in the benevolence of the platform. Thus, we posit:
Perceived anthropomorphism
Perceived anthropomorphism refers to the human-like characteristics demonstrated by non-human objects (Moussawi et al., 2023). In this research, anthropomorphism means endowing generative AI with human-like identities, tones, voices and appearances. According to the social response theory, customers are likely to form emotional evaluations based on the anthropomorphic interaction cues received from robots, and these evaluations will influence customers’ social responses to robots (Nass and Moon, 2000). Previous studies have shown that users have a more favourable impression towards robots with high anthropomorphic attributes (Lee and Lee, 2023). For generative AI, anthropomorphic attributes may make users feel that they are communicating with humans, which is likely to develop a sense of closeness and increase their affective trust. Therefore, we suggest:
Cognitive trust and affective trust in platform
Trust consists of cognitive trust and affective trust (Zhang et al., 2014). In the context of generative AI, cognitive trust refers to a user’s perception of the platform’s ability to provide reliable and accurate information, while affective trust refers to a user’s belief that the generative AI platform is concerned with his or her interests (Wang et al., 2016). Previous research suggests that affective trust can be developed from cognitive trust (Zhang et al., 2014). High cognitive trust means that users believe they can obtain credible information from generative AI, which may establish their reliance and affective trust in the platform. Therefore:
The trust transference theory suggests that when the target object is associated with the source object, trust in the source object can be transferred to the target object (Stewart, 2003). In the context of generative AI, users may perceive the generative AI platform as the source object and its generated contents as the target object. User trust, including cognitive trust and affective trust, may be transferred from the platform to the generated contents. Therefore, we argue:
Trust in artificial intelligence-generated content
Trust in AIGC reflects a user’s beliefs that AIGC is trustworthy and reliable. According to the theory of reasoned action, trust can influence behavioural intentions as it affects users’ attitudes (Montano and Kasprzyk, 2015). Trust leads users to perceive AIGC as useful and valuable, thereby facilitating their behavioural intentions. Previous research has shown that trust in information is a significant determinant of the intention to adopt it (Zhang et al., 2019). Thus:
Algorithm bias
Algorithm bias refers to the unfairness in algorithms regarding issues such as gender, race and skin colour (Kerasidou, 2021). Machine learning algorithms may produce unfair outcomes due to incomplete or biased data sources. In addition, the prevalence of human tagging in large model training contributes to the diffusion of algorithm bias in generative AI in a covert manner. When users detect implicit biases or discrimination against specific groups in the generated contents, they may question the fairness and credibility of these contents, which may affect their trust transference. In other words, even though users develop cognitive and affective trust in a generative AI platform, algorithm bias may undermine their trust in AIGC. Therefore, we suggest:
4. Method
The research model includes 10 constructs, each of which was measured with multiple indicators. To ensure content validity, these indicators were adapted from the existing literature and revised for the context of generative AI. A pre-test was conducted among 20 users who had experience using generative AI. Based on their feedback, a few indicators were refined to enhance readability and accuracy. Table 2 lists the measurement items and their sources. All items were measured using a five-point Likert scale ranging from strongly disagree (1) to strongly agree (5).
Data were collected using the Credamo platform. Respondents were asked to complete the questionnaire based on their experience using generative AI. They were also encouraged to forward the questionnaire to their friends to expedite data collection. We scrutinized all responses and dropped invalid ones, such as those with the same answer for all questions. As a result, 504 valid responses were obtained. Among them, 48.2% were male and 51.8% were female. Over half (56.2%) were between 21 and 30 years old. A majority (88.3%) held an associate degree or above. The frequently used generative AI platforms included ChatGPT (38.3%), ERNIE Bot (30.4%) and Alibaba Qianwen (27.4%).
5. Results
Structural equation modelling
Reliability and validity.
Firstly, we conducted a confirmatory factor analysis to examine reliability and validity (Khan et al., 2024). As shown in Table 3, the alpha coefficients of all constructs are greater than 0.70, indicating good reliability. In addition, the factor loadings, composite reliabilities (CR) and average variance extracted (AVE) values of all constructs exceed 0.7, 0.7 and 0.5, respectively, indicating good convergent validity. As listed in Table 4, the square roots of the AVEs are larger than the corresponding correlation coefficients, suggesting good discriminant validity (Khan et al., 2022).
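The CR and AVE values reported in Table 3 follow the conventional definitions based on the standardized loadings $\lambda_i$ of a construct with $k$ indicators (a reference formulation; the paper applies rather than spells out these formulas):

$$\mathrm{CR}=\frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}+\sum_{i=1}^{k}\left(1-\lambda_i^{2}\right)},\qquad \mathrm{AVE}=\frac{1}{k}\sum_{i=1}^{k}\lambda_i^{2}$$

As a check, the perceived intelligence loadings in Table 3 (0.721, 0.812 and 0.803) give AVE ≈ 0.608 and CR ≈ 0.823, matching the reported values. The discriminant validity test corresponds to the Fornell–Larcker criterion: the square root of each construct’s AVE (the diagonal of Table 4) should exceed that construct’s correlations with all other constructs.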
Hypothesis testing.
Secondly, we used AMOS 28 to analyse the structural model and test the hypotheses. The results are shown in Figure 2. Except for H5 and H10b, all hypotheses were supported. The explained variance of each endogenous construct is 36% (cognitive trust in platform), 32% (affective trust in platform), 31% (trust in AIGC) and 27% (intention to adopt AIGC). In addition, the actual values of the fit indices are better than the recommended values, showing a good model fit.
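The structural model was estimated in AMOS 28; the sketch below is only an illustration of how a comparable specification could be written with semopy, an open-source SEM package for Python. The data file name is an assumption, the item names follow Table 2, and the algorithm-bias moderation is omitted for brevity.

```python
# Illustrative re-specification of the measurement and structural models.
# Assumes item-level responses are stored in a CSV with the column names
# used in Table 2 (IN1..AI3); "survey_responses.csv" is a placeholder.
import pandas as pd
from semopy import Model, calc_stats

MODEL_DESC = """
# Measurement model (three indicators per construct)
IN  =~ IN1 + IN2 + IN3
TR  =~ TR1 + TR2 + TR3
KI  =~ KI1 + KI2 + KI3
EMP =~ EMP1 + EMP2 + EMP3
AN  =~ AN1 + AN2 + AN3
CT  =~ CT1 + CT2 + CT3
AT  =~ AT1 + AT2 + AT3
TIA =~ TIA1 + TIA2 + TIA3
AI  =~ AI1 + AI2 + AI3

# Structural paths (algorithm-bias moderation not modelled here)
CT  ~ IN + TR + KI
AT  ~ EMP + AN + CT
TIA ~ CT + AT
AI  ~ TIA
"""

data = pd.read_csv("survey_responses.csv")
model = Model(MODEL_DESC)
model.fit(data)
print(model.inspect())       # path estimates, standard errors and p-values
print(calc_stats(model).T)   # fit indices such as CFI and RMSEA
```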
Fuzzy-set qualitative comparative analysis
Structural equation modelling (SEM) focuses on examining the “net effects” of independent variables on the dependent variable and does not consider the complex configurational relationships among variables. Based on set theory, fuzzy-set qualitative comparative analysis (fsQCA) can identify the configurations of antecedent variables that influence the outcome variable. Thus, this research adopted fsQCA to examine the antecedent configurations influencing the intention to adopt AIGC.
Firstly, the measurement items for each antecedent variable were averaged, and data calibration was conducted using the 95th percentile, the 50th percentile and the 5th percentile as the anchors for full membership, the crossover point and full non-membership, respectively (Ragin, 2009). Subsequently, a necessity analysis was performed; the results show that the consistency values of all antecedent variables are below 0.9, indicating that none of the antecedent variables is a necessary condition for the intention to adopt AIGC. Thus, it is appropriate to perform a configuration analysis. The results are listed in Table 5, where ● indicates the presence of a core condition, • indicates the presence of a peripheral condition, ⊗ indicates the absence of a peripheral condition and a blank indicates that the condition is optional.
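The paper does not detail the fsQCA software used; purely as an illustration of the steps just described, the sketch below implements the percentile-based direct calibration (Ragin, 2009) and the necessity consistency test in Python. The data file and the column names (e.g. CT, AI) are hypothetical construct means.

```python
# Illustrative sketch of direct calibration with 5%/50%/95% percentile anchors
# and of the necessity consistency test (threshold 0.9). Not the authors' code.
import numpy as np
import pandas as pd

def calibrate(x, full_non, crossover, full):
    """Ragin's direct method: map raw scores to fuzzy membership in [0, 1]
    via log-odds anchored at -3 (full non-membership), 0 (crossover) and
    +3 (full membership), then apply the logistic function."""
    x = np.asarray(x, dtype=float)
    log_odds = np.where(
        x >= crossover,
        3.0 * (x - crossover) / (full - crossover),
        3.0 * (x - crossover) / (crossover - full_non),
    )
    return 1.0 / (1.0 + np.exp(-log_odds))

def necessity_consistency(condition, outcome):
    """Consistency of 'condition is necessary for outcome':
    sum(min(x_i, y_i)) / sum(y_i)."""
    return np.minimum(condition, outcome).sum() / outcome.sum()

df = pd.read_csv("construct_means.csv")        # hypothetical averaged items
for col in list(df.columns):
    p5, p50, p95 = df[col].quantile([0.05, 0.50, 0.95])
    df[col + "_fz"] = calibrate(df[col], p5, p50, p95)

# A condition is treated as necessary only if its consistency is >= 0.9
cons = necessity_consistency(df["CT_fz"], df["AI_fz"])
print(f"Necessity consistency of cognitive trust for adoption: {cons:.3f}")
```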
This research identified three configurations that trigger the intention to adopt AIGC. (1) S1: when knowledge hallucination and algorithm bias are at low levels and perceived intelligence, transparency, empathy, cognitive trust, affective trust and trust in AIGC are at high levels, user adoption intention is strong, with anthropomorphism playing an optional role. This result echoes the SEM finding of an insignificant effect of perceived anthropomorphism. (2) S2 is similar to S1; comparing the two paths shows that perceived empathy and perceived anthropomorphism can substitute for each other. However, the unique coverage of S2 (0.018) is considerably lower than that of S1 (0.039), indicating that perceived empathy has a larger impact on user adoption intention than perceived anthropomorphism. Thus, users may pay more attention to empathetic responses than to human-like interactions when considering adopting AIGC. (3) S3 shows that when knowledge hallucination is low and the other factors are at high levels, users may neglect algorithm bias in generative AI. This configuration has raw and unique coverage similar to those of S1. Thus, when the stimulus and organism factors meet users’ expectations, users may not be concerned with algorithm bias when deciding whether to adopt AIGC.
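For reference, the consistency and coverage values in Table 5 follow the standard fsQCA definitions for a configuration with membership scores $x_i$ and an outcome with membership scores $y_i$ (the paper applies rather than defines them):

$$\mathrm{Consistency}(X\Rightarrow Y)=\frac{\sum_i \min(x_i,y_i)}{\sum_i x_i},\qquad \mathrm{Coverage}(X\Rightarrow Y)=\frac{\sum_i \min(x_i,y_i)}{\sum_i y_i}$$

Raw coverage applies this definition to each configuration on its own, whereas unique coverage counts only the part of the outcome not also covered by the other configurations; solution consistency and coverage apply the same definitions to the union of the three configurations.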
6. Discussion
The results shown in Figure 2 indicate that, in the direct effects, all paths are significant except for the insignificant effect of perceived anthropomorphism on affective trust in platform. In terms of moderation effects, algorithm bias did not moderate the relationship between affective trust in platform and trust in AIGC. The fsQCA reveals that cognitive trust and affective trust in platform are the common core conditions for the three configurations.
Among the antecedents of cognitive trust in platform, knowledge hallucination has the largest effect on cognitive trust (β = −0.37, p < 0.001), followed by perceived transparency (β = 0.22, p < 0.001) and perceived intelligence (β = 0.15, p < 0.05). This indicates that knowledge hallucination is the main factor influencing users’ cognitive trust in a generative AI platform. The fsQCA results show that these three variables are common peripheral conditions for the three paths. Compared with intelligence, knowledge hallucination makes the generated contents confusing (Ji et al., 2023). If users find that the generated contents seem accurate but are actually incorrect or meaningless, they may doubt the professional competence of a generative AI platform, which may further undermine their cognitive trust in the platform. On the other hand, transparency, as a way for the platform to showcase its operational mechanisms to users, can familiarize users with its algorithm principles, which may engender their cognitive trust in the platform.
The results indicate that perceived empathy has a significant effect (β = 0.29, p < 0.001) on affective trust in platform. Generative AI not only serves as a tool for users to seek knowledge and ask questions, but also possesses certain social attributes due to its ability to communicate with users (Liu and Sundar, 2018). When generative AI expresses positive emotions, such as support and encouragement, users are likely to feel satisfied and develop affective trust. We did not find a significant effect of perceived anthropomorphism on affective trust in platform, which is inconsistent with previous research (Song and Luximon, 2020). This may be because, although human-like features may help develop emotional connections such as intimacy and familiarity, they do not affect users’ trust beliefs in benevolence. In addition, the effect of perceived empathy on affective trust may overshadow that of perceived anthropomorphism, leading to the insignificant impact of perceived anthropomorphism.
In terms of organism factors, cognitive trust in platform significantly influences affective trust in platform (β = 0.34, p < 0.001), which indicates that affective trust can be developed from cognitive trust. This is consistent with a previous result (Shao et al., 2023). Furthermore, both cognitive trust in platform and affective trust in platform influence trust in AIGC (β = 0.30, p < 0.001), which suggests that trust in the platform can be transferred to trust in the generated contents. Trust in AIGC significantly influences user adoption intention (β = 0.58, p < 0.001). This result is in line with previous research, which has reported the effect of trust on user acceptance of AI virtual assistants (Xiong et al., 2024) and tourist acceptance of ChatGPT (Xu et al., 2024). The fsQCA results suggest that trust in AIGC is a common peripheral condition for the three paths, which is consistent with the SEM results.
The results show that algorithm bias negatively moderates the relationship between cognitive trust in platform and trust in AIGC, which means that algorithm bias attenuates the transference from cognitive trust in platform to trust in AIGC. With respect to sensitive topics, such as race, gender and skin colour, when users identify unfairness in the related contents, they may become cautious about the generated contents. The results did not reveal a moderation effect of algorithm bias on the relationship between affective trust in platform and trust in AIGC. This may be because, when users detect algorithm bias, they rely more on rational analysis than on emotional judgment to establish trust in AIGC. Therefore, algorithm bias does not moderate the effect of affective trust on trust in AIGC.
7. Theoretical and managerial implications
From a theoretical perspective, this research makes three contributions. Firstly, existing research has mainly used technology adoption theories, such as UTAUT, to examine AIGC user behaviour, and has identified the effect of factors, such as performance expectancy, effort expectancy and privacy concerns, on AIGC user adoption. This research revealed the significant impact of trust on AIGC user adoption intention. The results increase our understanding of AIGC user behaviour. Secondly, the results demonstrate that the features of a generative AI platform, including perceived intelligence, perceived transparency, knowledge hallucination and perceived empathy, influence users’ cognitive and affective trust in the platform, both of which in turn affect their trust in AIGC and adoption intention. The results disclose the effect mechanism of trust on AIGC user adoption. Thirdly, algorithm bias negatively moderates the effect of cognitive trust in a platform on trust in AIGC, which indicates the moderation effect of algorithm bias on trust transfer. This result also extends extant research on trust transference.
The results imply that generative AI platforms need to take measures to engender user trust and facilitate user adoption of AIGC. Firstly, they need to improve AIGC quality and mitigate knowledge hallucination. They may optimize the algorithm to ensure reliability and offer the information source to users for validation. Secondly, generative AI platforms need to filter out biased or discriminatory information. They should use high-quality data during model training and may also adopt manual review or allow user reporting to reduce biased information. Thirdly, they need to enhance transparency. Generative AI platforms can disclose algorithm principles and operation processes to increase user understanding and trust. Fourthly, they need to attend to empathy. Generative AI platforms can establish affective trust by providing personalized concern and emotional support to users.
8. Conclusion
Based on the SOR, this research investigated the impact of trust on user adoption intention of AIGC. The results indicated that perceived intelligence, perceived transparency and knowledge hallucination influence cognitive trust in platform, whereas perceived empathy influences affective trust in platform. Both cognitive trust and affective trust in platform lead to trust in AIGC. Algorithm bias negatively moderates the effect of cognitive trust in platform on trust in AIGC. The results highlight the need to engender user trust to facilitate user adoption of AIGC.
This research has a few limitations. Firstly, generative AI is developing rapidly and being integrated into different industries, such as education, healthcare and finance. Future research may investigate user trust in these industry-specific AI products. Secondly, this research mainly examined the effect of generative AI features on trust. Future research could explore the effect of other factors, such as culture and personality traits, on user trust. Thirdly, trust evolves dynamically, whereas this research primarily used cross-sectional data. Future research could collect longitudinal data to examine trust development and its effect on user adoption. Fourthly, our sample is mainly composed of young, highly educated people. Although they represent the majority of AIGC users, caution is needed when generalizing our results to other groups, such as middle-aged and elderly users.
This work was supported by National Social Science Foundation of China (24BGL310).
Figure 1.Research model
Figure 2.Path coefficients and significance
Table 1.
A few popular LLMs
| LLM name | Company | Country | Type | No. of parameters | Functions |
|---|---|---|---|---|---|
| ChatGPT | OpenAI | USA | Closed source | 20 billion | Multimodality including text |
| Gemini | Google | USA | Closed source | 137 billion | Multimodality |
| Llama | Meta | USA | Open source | 405 billion | Multimodality |
| ERNIE Bot | Baidu | China | Closed source | 260 billion | Multimodality |
| Qianwen | Alibaba | China | Open source | 110 billion | Multimodality |
| Hunyuan | Tencent | China | Closed source | 100 billion | Multimodality |
Source: Authors’ own work
Table 2.
Constructs and measurement items
| Construct | Items | Content | Source |
|---|---|---|---|
| Perceived intelligence (IN) | IN1 | Generative AI is capable of completing the tasks submitted by users | Priya and |
| | IN2 | Generative AI has rich knowledge | |
| | IN3 | Generative AI is smart | |
| Perceived transparency (TR) | TR1 | Generative AI shows users how it generates contents | Calderon et al. (2023) |
| | TR2 | Generative AI explains the process of generating contents to users | |
| | TR3 | Generative AI makes its operation process clear to users | |
| Knowledge hallucination (KI) | KI1 | The contents provided by generative AI may be fabricated | Zha et al. (2018) |
| | KI2 | The contents provided by generative AI may be unreliable | |
| | KI3 | The contents provided by generative AI may be false | |
| Perceived empathy (EMP) | EMP1 | Generative AI can understand users’ specific needs | Fu et al. (2023); |
| | EMP2 | Generative AI usually gives personalized attention to users | |
| | EMP3 | Generative AI cares users’ interests | |
| Perceived anthropomorphism (AN) | AN1 | Generative AI is much like humans | Balakrishnan |
| | AN2 | Generative AI has consciousness | |
| | AN3 | Generative AI feels lifelike rather than artificial | |
| Algorithm bias (AB) | AB1 | Generative AI algorithms may show bias or discrimination with some users | Shin (2021) |
| | AB2 | The data sets processed by generative AI algorithms may contain bias | |
| | AB3 | Generative AI algorithms may not follow fair procedures when generating contents | |
| Cognitive trust in platform (CT) | CT1 | Generative AI platforms are trustworthy | Wang et al. |
| | CT2 | Generative AI platforms are honest | |
| | CT3 | Generative AI platforms are reliable | |
| Affective trust in platform (AT) | AT1 | Generative AI platforms make me feel secure | |
| | AT2 | Generative AI platforms make me feel comfortable | |
| | AT3 | Generative AI platforms make me feel satisfied | |
| Trust in AIGC (TIA) | TIA1 | AIGC is reliable | Liu and Tao (2022) |
| | TIA2 | AIGC is credible | |
| | TIA3 | Overall, I can trust AIGC | |
| Intention to adopt AIGC (AI) | AI1 | I intend to use AIGC in work or life | Al-Debei and |
| | AI2 | I expect to use AIGC in the future | |
| | AI3 | I am willing to recommend AIGC to others | |
Source: Authors’ own work
Table 3.
Reliability and validity
| Construct | Item | Loading | Alpha | CR | AVE |
|---|---|---|---|---|---|
| Perceived intelligence (IN) | IN1 | 0.721 | 0.821 | 0.823 | 0.608 |
| | IN2 | 0.812 | | | |
| | IN3 | 0.803 | | | |
| Perceived transparency (TR) | TR1 | 0.835 | 0.846 | 0.846 | 0.647 |
| | TR2 | 0.778 | | | |
| | TR3 | 0.799 | | | |
| Knowledge hallucination (KI) | KI1 | 0.809 | 0.831 | 0.832 | 0.622 |
| | KI2 | 0.778 | | | |
| | KI3 | 0.779 | | | |
| Perceived empathy (EMP) | EMP1 | 0.777 | 0.808 | 0.808 | 0.584 |
| | EMP2 | 0.769 | | | |
| | EMP3 | 0.746 | | | |
| Perceived anthropomorphism (AN) | AN1 | 0.712 | 0.788 | 0.789 | 0.555 |
| | AN2 | 0.768 | | | |
| | AN3 | 0.754 | | | |
| Algorithm bias (AB) | AB1 | 0.767 | 0.809 | 0.890 | 0.586 |
| | AB2 | 0.758 | | | |
| | AB3 | 0.772 | | | |
| Cognitive trust in platform (CT) | CT1 | 0.811 | 0.841 | 0.841 | 0.638 |
| | CT2 | 0.792 | | | |
| | CT3 | 0.794 | | | |
| Affective trust in platform (AT) | AT1 | 0.792 | 0.829 | 0.829 | 0.619 |
| | AT2 | 0.803 | | | |
| | AT3 | 0.764 | | | |
| Trust in AIGC (TIA) | TIA1 | 0.798 | 0.813 | 0.813 | 0.592 |
| | TIA2 | 0.750 | | | |
| | TIA3 | 0.760 | | | |
| Intention to adopt AIGC (AI) | AI1 | 0.822 | 0.843 | 0.843 | 0.642 |
| | AI2 | 0.788 | | | |
| | AI3 | 0.793 | | | |
Source: Authors’ own work
Table 4.
Correlation matrix of constructs
| Construct | IN | TR | KI | EMP | AN | AB | CT | AT | TIA | AI |
|---|---|---|---|---|---|---|---|---|---|---|
| IN | 0.780 | |||||||||
| TR | 0.489** | 0.804 | ||||||||
| KI | −0.534** | −0.520** | 0.789 | |||||||
| EMP | 0.412** | 0.427** | −0.502** | 0.764 | ||||||
| AN | 0.372** | 0.289** | −0.324** | 0.420** | 0.745 | |||||
| AB | −0.338** | −0.404** | 0.437** | −0.325** | −0.251** | 0.766 | ||||
| CT | 0.402** | 0.443** | −0.498** | 0.521** | 0.407** | −0.359** | 0.799 | |||
| AT | 0.426** | 0.439** | −0.498** | 0.445** | 0.324** | −0.378** | 0.498** | 0.787 | ||
| TIA | 0.419** | 0.368** | −0.485** | 0.424** | 0.366** | −0.360** | 0.424** | 0.429** | 0.770 | |
| AI | 0.447** | 0.429** | −0.451** | 0.422** | 0.350** | −0.304** | 0.464** | 0.498** | 0.472** | 0.801 |
Note:**p < 0.01
Source: Authors’ own work
Table 5.
Configuration analysis results
| Conditional variables | S1 | S2 | S3 |
|---|---|---|---|
| Perceived intelligence | ● | ● | ● |
| Perceived transparency | ● | ● | ● |
| Knowledge hallucination | ⊗ | ⊗ | ⊗ |
| Perceived empathy | ● | | ● |
| Perceived anthropomorphism | | ● | ● |
| Algorithm bias | ⊗ | ⊗ | |
| Cognitive trust in platform | ● | ● | ● |
| Affective trust in platform | ● | ● | ● |
| Trust in AIGC | • | • | • |
| Consistency | 0.946 | 0.945 | 0.945 |
| Raw coverage | 0.322 | 0.301 | 0.323 |
| Unique coverage | 0.039 | 0.018 | 0.040 |
| Solution consistency | 0.937 | | |
| Solution coverage | 0.380 | | |

Note: The outcome is the intention to adopt AIGC; S1–S3 denote the three configurations
Source: Authors’ own work
