Abstract

Providing high-quality feedback on student writing is essential yet increasingly difficult due to rising class sizes and limited instructional capacity. Generative AI (GenAI) offers a promising and scalable alternative, but its effectiveness compared to traditional teacher feedback, particularly across different prompting techniques, remains uncertain. This study employed a quantitative, randomized three-group experimental design with 70 graduate students to compare the effects of teacher feedback and GenAI feedback generated using two prompting techniques: zero-shot and chain-of-thought (CoT). The study explored how these feedback sources affect feedback quality and students’ uptake during essay revision. It involved a two-stage process in which students first wrote an argumentative essay and then revised it based on the feedback received. Feedback and essay quality were evaluated using a rubric based on Toulmin’s model of argumentation and analysed using inferential statistical methods. Results showed that CoT prompting produced higher-quality feedback than both zero-shot prompting and teacher feedback, suggesting that the stepwise reasoning in CoT aligns GenAI outputs more closely with the cognitive demands of argumentative writing. However, this higher feedback quality did not lead to significantly greater improvements in revisions. Teacher feedback, although rated lower in quality, resulted in comparable gains in essay quality. In addition, GenAI feedback quality was significantly associated with students’ initial essay quality, whereas teacher feedback quality showed no such association. These findings indicate that feedback quality alone is insufficient to enhance writing outcomes; rather, students’ engagement with and uptake of feedback play a critical role. Overall, the results highlight the potential of hybrid intelligent feedback systems in which teachers support students in interpreting and applying GenAI feedback to meaningfully improve their writing.

Details

Title
Generative AI offers more, but students revise less: comparing the effects of teacher and AI feedback on student essay revisions
Author
Farrokhnia, Mohammadreza 1; Latifi, Saeed 2; Papadopoulos, Pantelis M. 1; Hogenkamp, Loes 1; Gijlers, Hannie 1; Khosravi, Hassan 3; Noroozi, Omid 4

1 Department of Learning, Data Analytics, and Technology, University of Twente, Enschede, The Netherlands (ROR: https://ror.org/006hf6230) (GRID: grid.6214.1) (ISNI: 0000 0004 0399 8953)
2 Department of Educational Technology, Kharazmi University, Tehran, Iran (ROR: https://ror.org/05hsgex59) (GRID: grid.412265.6) (ISNI: 0000 0004 0406 5813)
3 Institute for Teaching and Learning Innovation, The University of Queensland, St Lucia, Australia (ROR: https://ror.org/00rqy9422) (GRID: grid.1003.2) (ISNI: 0000 0000 9320 7537)
4 Education and Learning Sciences Group, Wageningen University & Research, Wageningen, The Netherlands (ROR: https://ror.org/04qw24q55) (GRID: grid.4818.5) (ISNI: 0000 0001 0791 5666)
Pages
6
Section
Research Article
Publication year
2026
Publication date
Dec 2026
Publisher
Springer Nature B.V.
e-ISSN
2365-9440
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3301842505
Copyright
© The Author(s) 2026. This work is published under http://creativecommons.org/licenses/by/4.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.