Abstract

Understanding user-generated content (UGC) is crucial for obtaining actionable insights in domains such as e-commerce and hospitality. However, the noisy and redundant nature of such content presents challenges for topic modeling methods like Latent Semantic Analysis (LSA). In this paper, we investigate whether preprocessing user reviews with large language models (LLMs) can improve topic modeling performance. Specifically, we compare two input variants: (1) raw reviews and (2) summaries generated by ChatGPT via its API as concise keyphrases. We apply LSA with varimax rotation to each variant and evaluate the resulting topic models using multiple criteria, including topic coherence (c_v), average pairwise Jaccard overlap, and cluster compactness via silhouette scores. Unlike prior work that employs LLMs primarily for post hoc topic labeling or interpretation, our method integrates an LLM directly into the preprocessing pipeline to reshape noisy input into structured, standardized summaries. While ChatGPT-based preprocessing yields lower c_v coherence scores, likely due to reduced lexical redundancy, it significantly improves topic separation, cluster quality, and topical specificity, leading to more interpretable and well-structured topic models overall.

Details

Title
How Large Language Models Enhance Topic Modeling on User-Generated Content
Author
Bui, Minh Phuoc; Nguyen, Mien Thi Ngoc
First page
012011
Publication year
2025
Publication date
Sep 2025
Publisher
IOP Publishing
ISSN
1742-6588
e-ISSN
1742-6596
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3252527904
Copyright
Published under licence by IOP Publishing Ltd. This work is published under https://creativecommons.org/licenses/by/4.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.