1. Introduction
Generative artificial intelligence (AI) products, such as ChatGPT, have received increasing attention around the world. Powered by large language models (LLMs) and massive data sets, ChatGPT has significantly transformed the pattern of human–machine interaction. Motivated by the success of ChatGPT, numerous generative AI products have emerged in the market, such as Google Gemini, Anthropic’s Claude, Meta Llama, Baidu’s ERNIE Bot and Alibaba’s Tongyi Qianwen. Built on LLMs, AI systems are becoming increasingly human-like and intelligent. They have shown powerful capabilities in natural language processing, conversation, question answering, text generation and text translation. Research has shown that generative AI exhibits tremendous potential in various fields, such as enterprise management (Talaei-Khoei et al., 2024), financial decision-making (Oehler and Horn, 2024), healthcare (Howard et al., 2023), science education (Cooper, 2023) and smart libraries (Khan et al., 2023). The widespread application of generative AI in these domains highlights its pervasive influence. In the future, generative AI is likely to penetrate gradually into every aspect of society and people’s lives.
However, despite being a significant breakthrough in AI technology, generative AI also faces numerous challenges, such as knowledge hallucination and bias. Owing to flaws in algorithms and data sources, generative AI often generates false and biased information that contradicts facts. This may undermine users’ trust in both the platform and AI-generated content (AIGC) and decrease their intention to adopt AIGC. A low adoption rate of AIGC may lead to the failure of generative AI platforms in an intensely competitive environment (Lai et al., 2023). The research question, then, is how to engender user trust in AIGC. Previous research has noted the effect of trust on continuance intention toward ChatGPT (Baek and Kim, 2023), user acceptance of AI virtual assistants (Xiong et al., 2024), students’ intention to use ChatGPT (Rahman et al., 2023) and tourist acceptance of ChatGPT (Xu et al., 2024). However, prior work has seldom explored the development mechanism underlying user trust in AIGC. This research tries to fill that gap by using the stimulus–organism–response (SOR) framework to uncover how AIGC user trust forms and how it affects user adoption.
Drawing on the SOR framework, this research examined the impact of trust on user adoption of AIGC. Stimulus reflects the features...