Abstract

This study explores using a GRPO (Group Relative Policy Optimization) reward mechanism combined with RAG (Retrieval-Augmented Generation) to train a large language model to compose Tang poetry. The goal was to generate poems that adhere to traditional rules of tone, rhyme, parallelism, and word count while maintaining high artistic quality. The methodology involved building a specialized corpus, using DeepSeek-R1-671B for data distillation, and applying GRPO-based reinforcement learning; integrating RAG further enhanced generation quality. Results showed that the resulting model, Xunzi-Yayun-R1, significantly surpassed the baseline in accurately following poetic rules. This research fuses traditional literary norms with modern generative techniques, providing a viable path for generating other classical texts.
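The abstract's core technical idea is a rule-based reward scored per sampled poem, with GRPO normalizing each reward against the group of samples rather than a learned critic. The sketch below illustrates this under stated assumptions: the function names, the 4x7 quatrain format check, and the scoring weights are illustrative, not the paper's actual implementation.

```python
# Hypothetical sketch of a GRPO-style rule reward for Tang poetry.
# Function names, format parameters, and scoring are assumptions for
# illustration; the paper's real reward also covers tone, rhyme,
# and parallelism.

def format_reward(poem: str, lines: int = 4, chars_per_line: int = 7) -> float:
    """Score adherence to line count and characters per line,
    e.g. a seven-character quatrain. Returns a value in [0, 1]."""
    rows = [r.strip() for r in poem.strip().split("\n") if r.strip()]
    if len(rows) != lines:
        return 0.0
    ok = sum(1 for r in rows if len(r) == chars_per_line)
    return ok / lines

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """GRPO step: normalize each sampled completion's reward against
    the group mean and standard deviation (no value network)."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5 or 1.0  # avoid division by zero for uniform groups
    return [(r - mean) / std for r in rewards]

# Example: a five-character quatrain checked against a 4x5 format.
poem = "床前明月光\n疑是地上霜\n举头望明月\n低头思故乡"
print(format_reward(poem, lines=4, chars_per_line=5))  # full marks: 1.0

# A group where only the first of four samples satisfies the format:
# that sample gets a positive advantage, the rest negative.
print(group_relative_advantages([1.0, 0.0, 0.0, 0.0]))
```

The group-relative normalization is what lets a sparse, rule-based score drive policy updates: within each sampled group, poems that satisfy more constraints are pushed up relative to their siblings, without training a separate critic.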

Details

Title
Large language models learning to write rhyming Tang poetry: A Xunzi-Yayun-R1 case study
Author
Zhao, Wenhua¹; Wang, Xiyu¹; He, Jiacheng¹; Zhao, ZhiXiao¹; Liu, Chang¹; Liu, Liu¹

¹ Nanjing Agricultural University, College of Information and Management, Nanjing, China (GRID: grid.27871.3b; ISNI: 0000 0000 9750 7019)
Pages
519
Publication year
2025
Publication date
Dec 2025
Publisher
Nature Publishing Group
e-ISSN
3059-3220
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3260938586
Copyright
© The Author(s) 2025. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.