Abstract

Accurately evaluating text similarity remains a fundamental challenge in Natural Language Processing (NLP), with widespread applications in plagiarism detection, information retrieval, semantic analysis, and recommendation systems. Traditional approaches often suffer from overfitting, stagnation in local optima, and difficulty capturing deep semantic relationships. To address these challenges, this paper introduces an Intelligent Text Similarity Assessment Model that integrates Robustly Optimized Bidirectional Encoder Representations from Transformers (RoBERTa) with Chaotic Sand Cat Swarm Optimization (CHSCSO), a novel swarm intelligence-based optimization method augmented with chaotic dynamics. The model leverages RoBERTa’s robust contextual embeddings to extract deep semantic representations while utilizing CHSCSO’s controlled chaotic perturbations to optimize hyperparameters dynamically. This integration enhances model generalization, mitigates overfitting, and improves the trade-off between exploration and exploitation during training. CHSCSO refines the parameter search space by employing chaotic maps, ensuring a more adaptive and efficient training process. Extensive experiments on multiple benchmark datasets, including Semantic Textual Similarity (STS) and Textual Entailment (TE), demonstrate the model’s superiority over standard RoBERTa fine-tuning and conventional baselines, with the proposed model reaching cosine similarity scores clustered around 0.996. The optimized model achieves higher accuracy, improved stability, and faster convergence on text similarity tasks.
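
The abstract does not spell out CHSCSO's update equations, so the following is only an illustrative sketch of the general idea it describes: perturbing hyperparameter candidates around the best-so-far point with steps drawn from a chaotic map (here a logistic map) rather than a uniform random source. The function names (`chaotic_search`, `validation_score`), the specific hyperparameters, and their bounds are hypothetical placeholders, not the authors' implementation.

```python
def logistic_map(x0=0.7, r=4.0):
    """Yield an endless chaotic sequence in (0, 1) from the logistic map."""
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

def chaotic_search(score_fn, bounds, iterations=30, seed=0.7):
    """Perturb candidate hyperparameters around the best-so-far point
    using chaotic (rather than uniform-random) step sizes.

    `score_fn` maps a dict of hyperparameters to a validation score;
    `bounds` maps each hyperparameter name to a (low, high) interval.
    """
    chaos = logistic_map(seed)
    best = {k: (lo + hi) / 2 for k, (lo, hi) in bounds.items()}
    best_score = score_fn(best)
    for _ in range(iterations):
        candidate = {}
        for k, (lo, hi) in bounds.items():
            # Chaotic step in [-(hi - lo)/2, +(hi - lo)/2], clamped to the bounds.
            step = (next(chaos) - 0.5) * (hi - lo)
            candidate[k] = min(hi, max(lo, best[k] + step))
        score = score_fn(candidate)
        if score > best_score:  # greedy acceptance (exploitation)
            best, best_score = candidate, score
    return best, best_score

# Toy stand-in for "fine-tune RoBERTa with these hyperparameters and
# return a validation score"; higher is better.
def validation_score(params):
    target = {"learning_rate": 2e-5, "dropout": 0.1}
    return -sum(abs(params[k] - target[k]) for k in target)

best, score = chaotic_search(
    validation_score,
    bounds={"learning_rate": (1e-6, 5e-5), "dropout": (0.0, 0.5)},
)
print(best, score)
```

In the paper's setting, the scoring callback would wrap a RoBERTa fine-tuning run and return a validation metric such as accuracy or correlation with gold similarity scores; the chaotic map is what distinguishes this search from plain random perturbation, giving a deterministic but non-repeating exploration pattern.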

Full text
© The Author(s) 2025. This work is published under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/).