Abstract

Social bots powered by large language models (LLMs) have infiltrated online platforms, resulting in widespread disinformation, content manipulation, and the erosion of public trust in digital discourse. These AI-powered bots pose a growing cybersecurity threat, particularly on high-traffic platforms like Reddit, where traditional static classifiers can fail to detect increasingly human-like behavior. This praxis proposes a platform-neutral dynamic ensemble detection model to improve the identification of bot accounts and AI-generated bot content using a three-part classifier: logistic regression, a multilayer perceptron (MLP) trained on term frequency-inverse document frequency (TF-IDF) features, and an MLP trained on semantic sentence embeddings. The language analysis is then used to inform the feature engineering of a random forest classifier used for bot detection. This research utilizes a curated dataset combining the Human ChatGPT Comparison Corpus 3 Dataset (HC3) with Reddit-derived, model-identified bot content harvested through a linguistically and behaviorally filtered detection pipeline. Through the extraction and analysis of linguistic signals—such as vocabulary diversity, repetition, and readability—the methodology builds a platform-specific random forest classifier. The methodology demonstrates that high-fidelity bot detection can be achieved without reliance on externally labeled bot account datasets, enabling cross-platform scalability and robust generalization to evolving synthetic account behaviors.
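The linguistic signals named above (vocabulary diversity, repetition, readability) could be extracted along the following lines. This is a minimal, illustrative sketch using only the Python standard library, not the praxis's actual pipeline; the feature names and the crude readability proxy are assumptions, and the resulting vectors are the kind of input a random forest classifier (e.g., scikit-learn's RandomForestClassifier) would consume.

```python
import re
from collections import Counter

def linguistic_features(text):
    """Return [vocabulary diversity, repetition, readability proxy] for a text.

    Illustrative feature definitions (assumed, not the author's):
    - vocabulary diversity: type-token ratio (unique words / total words)
    - repetition: share of the total taken by the most frequent word
    - readability proxy: average sentence length in words
    """
    words = re.findall(r"[A-Za-z']+", text.lower())
    n = len(words) or 1                      # avoid division by zero
    counts = Counter(words)
    type_token_ratio = len(counts) / n       # vocabulary diversity
    repetition = max(counts.values(), default=0) / n
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    avg_sentence_len = n / sentences         # crude readability proxy
    return [type_token_ratio, repetition, avg_sentence_len]
```

A feature matrix built by mapping this function over a corpus of human and bot texts would then be passed to the random forest's `fit` method alongside the labels produced by the detection pipeline.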

Details

Title
Modeling and Mitigating Computational Disinformation: A Modular Language-Informed Framework for Cross-Platform Bot Detection
Author
Dinga, Keith
Publication year
2025
Publisher
ProQuest Dissertations & Theses
ISBN
9798290939582
Source type
Dissertation or Thesis
Language of publication
English
ProQuest document ID
3238232171
Copyright
Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.