Abstract
Research on large language models (LLMs) has focused mainly on the functionality of these models rather than on how users interact with them. In this thesis, I examine the conversational strategies that users employ when interacting with an LLM-based system, particularly in contrast with other communicative settings. I collect transcripts of conversations between non-expert human users and the system, analyze these transcripts using Conversation Analysis (CA), and identify the conversational phenomena that occur in them. I then compare these findings with data and observations from the literature on other modes of conversational interaction, both with computer systems and with human interlocutors. I show that non-expert users struggle to understand the functionality of LLM-based systems and adopt a distinct set of conversational strategies in an attempt to overcome this. These users also often hold mistaken beliefs about such systems and tend to anthropomorphize them, which carries risks such as inducing unwarranted trust in the system and propagating misinformation. Better design choices are needed to help users understand how to use these products and to counteract the tendency toward anthropomorphism.