Abstract

Large language models (LLMs) demonstrate linguistic abilities on par with humans: they can generate short texts, stories, instructions, and even code that is often indistinguishable from human-written work. This allows people to use LLMs collaboratively, as communication aides or writing assistants.

However, humans cannot always assume an LLM will behave the same way another person would. This is particularly evident in subjective scenarios, such as those where emotion is involved. In this work, I explore how deeply LLMs perceive and understand human emotions, and examine ways of describing an emotion to an LLM for collaborative work. First, I study the problem of classifying emotions and show that LLMs perform well on their own and can also improve smaller models at the same task. Second, I focus on generating emotions, using keyword-constrained generation as the problem space and a human-participant study to see where human expectations and LLM outputs diverge and how such misalignment can be minimized. Here, I find that describing emotions with English words or with lexical expressions of Valence-Arousal-Dominance (VAD) scales leads to good alignment and generation quality, while numeric VAD dimensions or emojis fare worse.
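To make the four emotion-description formats concrete, the short Python sketch below shows one hypothetical way each could be rendered as a prompt for keyword-constrained generation. The prompt wording, example VAD values, and the build_prompts helper are illustrative assumptions, not the prompts or code used in the dissertation.

# Hypothetical rendering of the four emotion-description formats named in the
# abstract: an English word, a lexical VAD description, numeric VAD values,
# and an emoji. Values and wording are illustrative assumptions only.

def build_prompts(keywords: list[str]) -> dict[str, str]:
    """Render the same keyword-constrained generation task under each
    emotion-description format."""
    task = (
        "Write one or two sentences that include the keywords "
        f"{', '.join(keywords)} and convey the emotion described below.\n"
    )
    descriptions = {
        "english_word": "Emotion: joy",
        "lexical_vad": (
            "Emotion: a feeling that is very pleasant, moderately energetic, "
            "and gives a strong sense of being in control"
        ),
        "numeric_vad": "Emotion (valence, arousal, dominance): (0.9, 0.6, 0.7)",
        "emoji": "Emotion: \U0001F604",
    }
    return {name: task + desc for name, desc in descriptions.items()}

if __name__ == "__main__":
    for name, prompt in build_prompts(["rain", "window"]).items():
        print(f"--- {name} ---\n{prompt}\n")

In this framing, only the final "Emotion: ..." line changes between conditions, so any difference in how well the generated text matches human expectations can be attributed to the description format rather than to the task instructions.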

Details

Title: Connecting Language and Emotion in Large Language Models for Human-AI Collaboration
Author: Choudhury, Shadab Hafiz
Publication year: 2025
Publisher: ProQuest Dissertations & Theses
ISBN: 9798286449057
Source type: Dissertation or Thesis
Language of publication: English
ProQuest document ID: 3225727565