Abstract
Providing high-quality feedback on student writing is essential yet increasingly difficult due to rising class sizes and limited instructional capacity. Generative AI (GenAI) offers a promising and scalable alternative, but its effectiveness compared to traditional teacher feedback, particularly across different prompting techniques, remains uncertain. This study employed a quantitative, randomized three-group experimental design with 70 graduate students to compare the effects of teacher feedback and GenAI feedback generated using two prompting techniques: zero-shot and chain-of-thought (CoT). The study explored how these feedback sources affect feedback quality and students’ uptake during essay revision. It involved a two-stage process in which students first wrote an argumentative essay and then revised it based on the feedback received. Feedback and essay quality were evaluated using a rubric based on Toulmin’s model of argumentation and analysed using inferential statistical methods. Results showed that CoT prompting produced higher-quality feedback than both zero-shot prompting and teacher feedback, suggesting that stepwise reasoning in CoT aligns GenAI outputs more closely with the cognitive demands of argumentative writing. However, this higher feedback quality did not lead to significantly greater improvements in revisions. Teacher feedback, although rated lower in quality, resulted in comparable gains in essay quality. In addition, GenAI feedback quality was significantly associated with students’ initial essay quality, whereas teacher feedback quality showed no such association. These findings indicate that feedback quality alone is insufficient to enhance writing outcomes; rather, students’ engagement with and uptake of feedback play a critical role. Overall, the results highlight the potential of hybrid intelligent feedback systems in which teachers support students in interpreting and applying GenAI feedback to meaningfully improve their writing.
Details
1 Department of Learning, Data Analytics, and Technology, University of Twente, Enschede, The Netherlands (ROR: https://ror.org/006hf6230) (GRID: grid.6214.1) (ISNI: 0000 0004 0399 8953)
2 Department of Educational Technology, Kharazmi University, Tehran, Iran (ROR: https://ror.org/05hsgex59) (GRID: grid.412265.6) (ISNI: 0000 0004 0406 5813)
3 Institute for Teaching and Learning Innovation, The University of Queensland, St Lucia, Australia (ROR: https://ror.org/00rqy9422) (GRID: grid.1003.2) (ISNI: 0000 0000 9320 7537)
4 Education and Learning Sciences Group, Wageningen University & Research, Wageningen, The Netherlands (ROR: https://ror.org/04qw24q55) (GRID: grid.4818.5) (ISNI: 0000 0001 0791 5666)