1. Introduction
The accelerating pace of technological change has pushed educational institutions into a continuous process of experimentation and opportunity creation. According to Gartner (Sheehan et al., 2024), the 2035 vision focuses on personalizing the teaching–learning process, putting students at its center so that their individual needs define the development of their learning paths.
Artificial intelligence (AI) has become a focal point in several aspects of human life. With the democratization of generative artificial intelligence (GenAI) through ChatGPT, whose arrival in 2022 marked a turning point for many environments, the teaching–learning process began to transform both inside and outside the classroom. According to UNESCO (2023) and Chang et al. (2024), this tool can act as a personal tutor or a study companion. In particular, its ability to act as a creation engine is key to maintaining structured conversations with students (such as those involved in the feedback process). Angulo Valdearenas et al. (2024) also set out use cases in education for improving learning personalization through course adaptation and access to resources in multiple languages, promoting an inclusive and student-centered education. It is therefore imperative to analyze and research the implementation of GenAI in education to understand its impact, scope, and risks.
1.1. Feedback in Education
Feedback is an essential part of the assessment process: it boosts learning and scaffolds self-regulation and self-direction abilities in students (Panadero et al., 2018). It is an active and dynamic process in which students interpret and use information to improve their learning. The concept is inherent to sustainable assessment (Boud & Soler, 2016), as students analyze, discern, and assess what they have accomplished and what needs to be improved. Feedback encourages students’ understanding of discipline standards and improvement plans (Carless, 2013).
From this active perspective, Carless (2013) promotes a dialogical approach where interpretations, meanings, or expectations are shared to give the students the opportunity to understand the standards of the discipline so that they can draw up a plan to achieve them.
Models have been developed to guide professors in this task. Hattie and Timperley (2007) proposed that effective feedback should answer three questions: Where am I going? How am I going? Where to next? These questions define three levels of feedback: feed up (explaining what is expected in the activity and how it relates to the learning objectives), feed back (analyzing the students’ work and indicating what is good and what needs improvement), and feed forward (suggesting concrete actions to improve future deliveries or the understanding of the topic).
Additionally, the authors argue that feedback should address the task, the process, self-regulation, and personal attitudes. Aligned with this, Stobart (2018) proposed that effective feedback develops students’ autonomy and self-regulation, going beyond mere error correction. This model emphasizes understandable, timely, and future-oriented feedback that fosters students’ deep and continuous learning and their empowerment over their learning strategies. Wesolowski (2020) extends this by arguing that the key to successful feedback is defining clear and relevant assessment criteria to guide professors during formative and summative assessments.
Such perspectives have taken a more visible place by shifting the focus from teaching to learning (Mendiola & González, 2020; Moreno Olivos, 2016; Stiggins, 2005; Wiliam et al., 2004; Brown, 2005). However, ensuring that students receive meaningful and individualized feedback remains a key challenge, especially in large class sizes (Khahro & Javed, 2022) and when student engagement varies (Carless & Winstone, 2023). These obstacles highlight the need for dynamic and structured feedback systems that promote clarity, encourage student participation, and adapt to different learning contexts to help students take an active role and develop critical thinking abilities about their own learning (Boud & Molloy, 2013). There is no doubt that this will contribute to accelerating learning, optimizing the quality of what is learned, and improving individual and collective achievements, as well as giving them lifelong skills, as Hounsell (2007) states.
1.2. GenAI and Its Use in Education and Feedback
Research on GenAI applied to learning is still in the initial stages, with limited empirical studies addressing its effectiveness like the works of Abdelghani et al. (2023), Xu and Liu (2025), Huesca et al. (2024), or Teng (2025). Furthermore, the role of institutions has also been explored in works like Tran et al. (2024) and Korseberg and Elken (2024).
Some studies have shown that multimodal tools, such as ChatGPT, can increase interaction, accessibility, and effectiveness of learning (Bewersdorff et al., 2025), as well as the capacity for self-regulation and academic performance (Afzaal et al., 2021; Sung et al., 2025), confirming the transformative role of GenAI in education.
Feedback supported by GenAI has been explored within the assessment process, showing that it increases personalization (Y. Zhou et al., 2025; Naz & Robertson, 2024; Güner et al., 2024) and helps professors manage this process more easily in large groups (Pozdniakov et al., 2024). In this sense, Jiménez (2024) showed that ChatGPT alleviates professors’ time constraints while strengthening student autonomy.
From the students’ point of view, Campos (2025) reveals that students express satisfaction mainly because the tool gives immediate and specific answers about what they should improve in their assignments. This automation of feedback, which makes it more efficient and timely, stems from GenAI’s natural language capabilities. These features are useful for generating specific real-time feedback that can be adapted to the style and level of each student and help them monitor their performance to improve.
For example, Teng (2024) showed that feedback provided by ChatGPT can improve writing motivation and student autonomy. Also, ChatGPT’s ability to personalize and deliver feedback in a timely manner led Hutson et al. (2024) to conclude in their study that it creates highly responsive, student-centered learning environments that become motivating and rewarding academic experiences. Furthermore, this motivation is a key element for the success of strategies that integrate GenAI. Chu et al. (2025) state that students with higher learning motivation show a more positive attitude when using GenAI for creative tasks.
However, further exploration is needed to understand the full scope of GenAI feedback and to analyze contrasting results. On the one hand, Dai et al. (2024) found that ChatGPT outperformed professors in the laborious task of generating feedback. On the other hand, Lin and Crosthwaite (2024) concluded that, compared to the feedback provided by ChatGPT when checking written work, professors’ feedback is more consistent, comprehensive, and global.
It is important to pause here to note that, although these works have taken important steps in GenAI research in education, the state of the art has not yet interlaced the tool with key educational theories. A step forward must be taken in this direction to provide a theoretical basis for the use of GenAI and to extend traditional theories toward the elements required by technological advances.
In addition, ethical dilemmas and conflicts arise with the use of GenAI. Hagendorff (2024) created a taxonomy of 19 ethical topics, exposing issues in elements like fairness and bias, regulation, governance, privacy, authorship, and transparency, and in areas like education and learning, sustainability, and the arts. Regarding education and learning, Z. Wang et al. (2025) state that the main causes of students’ unethical behavior are time pressure, challenging courses, and a notable lack of knowledge among professors about these tools. This underscores the need for teacher training, so that professors can convey the usefulness of the tool and how to focus its use while avoiding both overestimation of its features and the feeling that learning through this tool requires little personal effort (Al Murshidi et al., 2024).
To advance the integration of GenAI into the teaching–learning process, it is essential to promote collaboration between professors, researchers, educational institutions, and policymakers. Such an approach will ensure the effective, ethical, and responsible use of these tools, promoting critical thinking and originality among students (Cordero et al., 2024).
This study takes a step forward in this direction by introducing a framework for using GenAI to enrich the feedback process, grounded in educational theory. The specific objectives of this work are (1) to present a methodology for integrating GenAI tools into traditional feedback processes and (2) to present the results of a statistical analysis of students’ perceptions of the feedback received using GenAI compared to the traditional feedback process. This work aims to be a guide for educators and institutions seeking to integrate AI tools into education.
2. The Use of GenAI to Enhance the Feedback Process: A Proposed Methodology
Figure 1 represents the traditional feedback process, where students rely on the professor for feedback at specific times, which can lead to long waiting times.
To explore the impact of GenAI on the feedback process, a methodology that integrates ChatGPT into the teaching–learning process was designed. The approach of this methodology focuses on providing students with timely, iterative, and structured feedback, without replacing the professor’s role in the final evaluation. This methodology is described next.
Step 1: Professors select a topic within the course.
Step 2: Professors design an activity, linked to the selected topic, and clearly define the reasons for carrying it out (motivation); the impact they want to achieve on students (objective); and what students should do (instructions).
Step 3: Professors define the deliverables of the activity and design the instrument for its evaluation. The detailed description of these two aspects facilitates the process of integrating the GenAI tool into the feedback process.
Step 4: Professors build a prompt for the AI feedback tool. Prompts guide the model to generate conversational responses aligned with the user’s intent. Figure 2 shows the structure of the prompt, highlighting the key elements for generating structured feedback. Each of its sections is described below; a minimal template following this structure is sketched after the list.
- Intention and Context. The personality that the GenAI tool will exhibit is described here. This specification can be made using the Persona pattern for prompt engineering (White et al., 2023), with instructions like “Act as a person who is an expert on topic x”. This section also presents the characteristics of the students. It is suggested to indicate the name of the course (to specify the domain in which the tool will be deployed) and the level of studies (to indicate the depth to be applied); other elements can be added as needed. The objective of these definitions is to give the AI tool an initial context.
- Task Description and Instructions. The activity instructions designed in the previous steps are provided here. Because overly long prompts can confuse the tool, a precise summary is suggested. The deliverables to be produced by students should also be declared.
- Learning Objectives. This section describes the objectives to be achieved in the activity, centering the tool’s answers on what students should accomplish.
- Evaluation Criteria. The evaluation of the activity is based on previously defined criteria; rubrics or checklists are recommended to delimit the expected results. Clear criteria are essential to ensure that feedback is understandable and useful for students (Wesolowski, 2020; Brookhart, 2020). Evaluation itself remains the exclusive responsibility of the professors: AI cannot replace their ethical and expert judgment, informed by the full context of the student’s performance, and only professors can consider emotional and motivational factors in the final evaluation (Burner et al., 2025). The tool, in contrast, cannot directly observe students’ actions or the development of attitudinal or behavioral components, so this section serves only as a reference for the tool.
- AI Behavior and Expectations. This section defines the type of interaction the tool must have with students and what is and is not allowed within that interaction. For example, professors can instruct the tool not to provide a direct solution to the task. Patterns such as Question Refinement or Cognitive Verifier (White et al., 2023) can also be applied to make the tool’s answers more precise. An introduction message for the tool can also be defined here.
- Feedback Format. A structured format is established to guarantee that the tool’s answers are clear, organized, and aligned with pedagogical principles of effective feedback. In this section, professors can define elements at the three levels of feedback described by Hattie and Timperley (2007): feed up, feed back, and feed forward. The Context Manager pattern (White et al., 2023) can be applied here to maintain a fixed response structure and avoid redundant or disorganized information. Because the perception of feedback influences students’ motivation and commitment, structuring feedback in a balanced way is essential to maintain their confidence in the learning process (Mayordomo et al., 2022; Van Boekel et al., 2023); a balance between positive aspects and areas for improvement should therefore be enforced. This makes the feedback immediate, understandable, and actionable and avoids endless loops in the conversation.
- Additional Guidelines. Other elements can be added to fine-tune the tool’s interactions, for example, instructing it to use friendly and pleasant language during the conversation.
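To make this structure concrete, the following sketch assembles the prompt sections of Figure 2 into a single system message. It is a minimal illustration only: the course, topic, rubric, and other field values are hypothetical placeholders, not the prompts used in this study.

```python
# Minimal sketch of a feedback prompt assembled from the sections in Figure 2.
# Every concrete value below is a hypothetical placeholder.

PROMPT_TEMPLATE = """\
## Intention and Context
Act as a person who is an expert on {topic}. You tutor undergraduate students
of the course "{course}". Adapt the depth of your feedback to that level.

## Task Description and Instructions
{task_summary}
Deliverable: {deliverable}

## Learning Objectives
{objectives}

## Evaluation Criteria (reference only; grading is done by the professor)
{rubric}

## AI Behavior and Expectations
Never provide a direct solution to the task. If a request is ambiguous, ask a
clarifying question first (Question Refinement pattern).

## Feedback Format
Structure every answer in three parts: feed up (what is expected), feed back
(strengths and weaknesses of the draft), feed forward (concrete next steps).
Balance positive aspects with areas for improvement.

## Additional Guidelines
Use friendly and pleasant language.
"""

system_prompt = PROMPT_TEMPLATE.format(
    topic="graph theory",
    course="Discrete Mathematics",
    task_summary="Write a report modeling a real logistics problem as a graph.",
    deliverable="A written report of at most five pages.",
    objectives="Model a real problem with graphs and justify the chosen model.",
    rubric="Document organization; explicit links to course concepts.",
)
```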
Step 5: Prompt refinement. Once the prompt is designed, performing iterative tests to ensure it generates useful feedback is needed. It is recommended to use examples of deliverables with diverse levels of quality to observe the tool’s behavior.
Step 6: Publication of the activity. The activity is published along with a description of how to use the AI tool. This guide could include instructions on safeguarding student and third-party data, preventing plagiarism, and upholding academic integrity.
Step 7: Presentation of the AI feedback tool. It is important to present a detailed explanation of the AI tool’s use and advantages to the students. A live demonstration would be useful here.
Step 8: Guidance during the process. Professors must continuously monitor students’ work and the use of the AI tool. Professors must maintain an active presence throughout the process, ensuring that students understand how to leverage AI as an improvement resource and not as a replacement for professor guidance. In the initial stages, this support can focus on promoting the use of the tool, as some students may be skeptical due to a lack of familiarity with the technology or concerns about its impact on learning (X. Zhou et al., 2024).
Step 9: Evaluation and final feedback. Finally, professors must assess the students’ delivery. This step can be used to highlight principal elements of the course that may have been overlooked by the tool. Additionally, a grade can be assigned if required. Figure 3 shows a summary of the proposed methodology.
The following sections present the analysis of this methodology’s implementation.
3. Hypothesis
This research aims to explore whether ChatGPT improves students’ perceptions of their feedback experience compared to a traditional teaching–learning process. Based on this, the following hypothesis is defined for this work:
A teaching–learning process enhanced with the use of ChatGPT will have a greater positive effect on students’ perceptions of their feedback experience when solving a learning activity, compared to a traditional process with no intervention of the artificial intelligence tool.
4. Materials and Methods
This study applied a between-subjects analysis in an experimental research design to the results of 263 students enrolled in undergraduate courses. Students were organized into 7 groups across different disciplines. The characteristics of these groups and the treatment received are described as follows:
- One group of a course in Discrete Mathematics. A curricular course for undergraduate programs related to Computer Sciences, taught by a team of 2 professors in a virtual format. This group was selected as a focus group (n = 17). The methodology was implemented in a challenge that lasted 5 weeks (the course’s duration), in which students had to solve a problem linked to reality. The AI tool was configured to help students create a written report, giving feedback on the organization of the document and on how to strengthen its links to the course concepts.
- Five groups of a course related to Architecture. A curricular course for the Architecture undergraduate program, all taught by the same professor in a face-to-face format. Randomly, 2 groups were designated as control groups (n = 27) and 3 groups as focus groups (n = 69). Students created a prompt to generate an image intended to inspire others to commit to fighting global warming and climate change by learning to design zero-carbon buildings. This activity lasted 4 weeks.
- One group of a course related to Biomimicry. An elective course open to any undergraduate program in the institution, taught by a team of 2 professors in a virtual format. Due to the high enrollment (n = 150) and the fact that it included students from different campuses, the course was delivered as a single group, with no possibility of splitting students into separate control and focus groups. Instead, all students completed two separate learning activities. The first activity, considered the control group implementation, was completed without any AI tool; it covered a different topic but was equivalent in scope, difficulty, and grading weight to the second. The second activity, considered the focus group implementation, included access to a customized AI tool that provided formative feedback. This approach was chosen to ensure equitable access to the AI tool for all students, considering the group’s diverse composition and large class size. In the focus group activity, students analyzed a Leadership in Energy and Environmental Design (LEED)-certified project; LEED is a globally recognized certification system for sustainable buildings developed by the U.S. Green Building Council. The AI tool was set up to support students in developing a written analysis and an infographic, providing feedback on structure, clarity, and coherence, and helping them strengthen their sustainability analysis and improvement proposals. The students had almost 4 weeks to develop the activity.
The GenAI tools used were configured as custom Generative Pre-trained Transformers (GPTs) based on OpenAI ChatGPT’s GPT-4 turbo model, which can handle large contexts and is designed for high conversational capability and high-quality text generation.
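The custom GPTs were configured through ChatGPT’s interface; as a rough programmatic approximation (not the setup used in this study), the prompt from Step 4 can be sent as a system message through OpenAI’s chat completions API. Model names and availability change over time; “gpt-4-turbo” below stands in for the GPT-4 turbo family.

```python
# Sketch: approximating the custom GPT's behavior with the OpenAI API.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY in the
# environment; this is an illustration, not the study's actual configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def get_feedback(system_prompt: str, student_submission: str) -> str:
    """Request one round of structured feedback on a student's draft."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": student_submission},
        ],
    )
    return response.choices[0].message.content

# Example use: feedback = get_feedback(system_prompt, "Draft of my report...")
```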
An 11-question survey, applied as a pretest and post-test, was designed to collect the degree of satisfaction with feedback. Informed consent was obtained from all subjects involved in this study; it was collected in the class in which the activity and the tool were presented, where professors also explained the research objectives before the pretest was applied. The post-test was administered in the class following the students’ submission of their work. The analytical strategy followed these steps:
- Instrument validation. Experts were consulted, and a statistical process was applied.
- A statistical analysis of the difference in perception of the feedback received between the focus and control groups in the pretest results, to determine whether the groups differed in their feedback experiences in previous courses.
- An analysis of the difference in perception of the feedback received between the focus and control groups in the post-test results, applying an ordinal logistic regression to validate the hypothesis of this work.
Students who decided not to participate in this research were not asked to answer the surveys, and their data were not included.
Students were recruited from existing course enrollments, and random assignment was used in courses where multiple sections were available. The Biomimicry group was the only one selected to receive both treatments. Table 1 gives an overview of the sample assignments considering non-usable data.
5. Results
The results of the analytical strategy are presented in this section.
5.1. Instrument Validation
A total of 22 experts were consulted regarding their opinion on whether each item was “essential”, “useful, but not essential”, or “not necessary” to measure students’ perceptions. A Content Validity Ratio (CVR) value was calculated according to Lawshe (1975) and Wilson et al. (2012). The critical value used was 0.418. Questions, experts’ item classification, CVR, and decisions for each item are provided in Table 2.
Before removing any item, the survey’s overall CVR was 0.455 > 0.418, so the instrument was validated. After removing items, this value rose to 0.818.
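For reference, Lawshe’s CVR for an item is computed as CVR = (n_e − N/2)/(N/2), where n_e is the number of experts rating the item “essential” and N is the total number of experts. The sketch below reproduces the values in Table 2 with N = 22; note that the final keep/remove decisions also weighed expert comments (item 7 exceeded the critical value but was removed as redundant with item 8).

```python
# Lawshe's (1975) Content Validity Ratio: CVR = (n_e - N/2) / (N/2).
# Critical value 0.418 for N = 22 experts (Wilson et al., 2012).
N_EXPERTS = 22
CRITICAL = 0.418

def cvr(n_essential: int, n_experts: int = N_EXPERTS) -> float:
    half = n_experts / 2
    return (n_essential - half) / half

# "Essential" counts per item, taken from Table 2.
essential_counts = {1: 20, 2: 14, 3: 11, 4: 11, 5: 8, 6: 20,
                    7: 21, 8: 21, 9: 20, 10: 19, 11: 11}

for item, n_e in essential_counts.items():
    value = cvr(n_e)
    verdict = "passes" if value > CRITICAL else "fails"
    print(f"Item {item:2d}: CVR = {value:+.2f} ({verdict})")
# e.g., Item 1: CVR = (20 - 11) / 11 = +0.82, matching Table 2.
```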
Following the experts’ suggestions, one item was added to the survey: Would you recommend the use of the AI tool to your colleagues or friends? (Answers: Yes, No, or I do not know.)
The final instrument had the following items:
1. In general, how satisfied are you with the feedback you received for the activity?
2. Did you use the AI tool to receive feedback during the activity?
3. Did the feedback you received in this activity from the AI tool make you realize your areas of opportunity or improvement?
4. Did you use the feedback the AI tool gave you to improve the delivery of your activity?
5. Would you ask the AI tool for feedback again?
6. Would you recommend the use of the AI tool to your colleagues and/or friends?
The instrument’s internal consistency was also analyzed, using the post-test results for questions 1, 3, 4, and 5. Question 2 was not considered because, if it is not answered positively, the subsequent questions cannot be answered. Question 6 was not included because it involves an external factor (recommendation to another student), while the other questions refer to an internal factor (personal experience). Cronbach’s alpha = 0.612 and composite reliability = 0.631; these values are acceptable for exploratory research according to Hair Junior et al. (2014).
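As an illustration of the internal-consistency check, the sketch below computes Cronbach’s alpha from an (n respondents × k items) matrix of coded answers. The sample matrix is a hypothetical placeholder, not the study’s post-test data.

```python
# Sketch: Cronbach's alpha over items 1, 3, 4, and 5 (coded 1-4).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: rows = respondents, columns = items."""
    k = scores.shape[1]
    sum_item_var = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Hypothetical responses for four students on the four items.
example = np.array([[3, 4, 3, 4],
                    [4, 4, 4, 4],
                    [2, 3, 3, 2],
                    [3, 3, 4, 3]])
print(f"alpha = {cronbach_alpha(example):.3f}")
```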
Item 1 was the only question used in the pretest and was worded as follows:
“In general, and in your experience, what is the average level of satisfaction you have with the feedback you have received on previous courses?”
In the post-test, only item 1 was presented to the control group students. The complete survey was presented to the focus group students.
For the following analyses, answers were coded into numeric values as follows: (1) Very low, (2) Low, (3) High, and (4) Very high.
5.2. Statistical Analysis of the Difference in Perception of the Feedback Received on Previous Experiences (Pretest Results)
The pretest results (N = 201) were used in a median difference analysis between samples. For this analysis only, a separate sample was created for the students enrolled in the Biomimicry course, given that they received both treatments. Table 3 shows the exploratory analysis, Figure 4 the Likert scale chart, and Figure 5 the density chart of the samples.
A Levene’s test (Y. Wang et al., 2017) showed homoscedasticity (F = 0.6389, p = 0.529). Given these results, a Kruskal–Wallis rank sum test (Ostertagova et al., 2014) was applied. The results (chi-squared = 5.3851, p = 0.06771) show no evidence of a difference in the medians, suggesting the samples had similar previous feedback experiences.
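For illustration, the two tests can be reproduced with scipy as sketched below; the arrays are hypothetical placeholders for the coded pretest answers, not the study’s data.

```python
# Sketch of the pretest comparison: Levene's test for homogeneity of variance,
# then a Kruskal-Wallis rank sum test over the three samples.
from scipy import stats

focus = [3, 3, 4, 2, 3, 3, 4, 3]            # pretest answers, coded 1-4
control = [3, 4, 3, 3, 4, 3]
both_treatments = [3, 3, 4, 4, 2, 3, 3]     # Biomimicry students

lev_stat, lev_p = stats.levene(focus, control, both_treatments)
kw_stat, kw_p = stats.kruskal(focus, control, both_treatments)
print(f"Levene: F = {lev_stat:.4f}, p = {lev_p:.4f}")
print(f"Kruskal-Wallis: chi-squared = {kw_stat:.4f}, p = {kw_p:.5f}")
```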
5.3. Analysis of the Difference in Perception of the Feedback Received When Solving the Learning Activities Comparing Focus and Control Groups (Post-Test Results)
For this analysis, data from students in the focus group who indicated that they did not use the tool (question 2) were removed. Regarding students in the Biomimicry course, the results of the activity implemented without the AI tool were classified as control group data, while those of the activity implemented with the AI tool were classified as focus group data. A total of 252 values were used in this analysis. Table 4 shows the exploratory analysis, Figure 6 the Likert scale chart, and Figure 7 the density chart of the samples.
These results suggest that the focus group had a better perception of the feedback received.
Next, an ordinal logistic regression (Larasati et al., 2011) was applied, with the treatment received as the predictor variable and feedback perception as the criterion variable. The treatment was found to contribute to the model (estimate = 0.7167, SE = 0.2569, z = 2.79, p = 0.00527). Threshold coefficients are listed in Table 5. A likelihood-ratio (ANOVA) comparison found that this model differs from the null model (LR stat = 7.9296, df = 1, Pr(>Chisq) = 0.004863). Given the presence of other variables (e.g., course modality, type of activity, or course discipline), other models were tested with the same criterion variable and combinations of these variables as predictors. In all of these models, the only significant predictor was the treatment received.
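The reported output format (LR stat, Pr(>Chisq)) suggests the analysis was run in R; as a rough Python equivalent, the sketch below fits an ordinal logistic regression with statsmodels’ OrderedModel and performs the likelihood-ratio comparison against the null model. The data frame is a hypothetical placeholder, not the 252 study responses.

```python
# Sketch: ordinal logistic regression of feedback perception (coded 1-4) on the
# treatment received (0 = control, 1 = focus), with an LR test vs. the null model.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical coded responses standing in for the study's 252 values.
df = pd.DataFrame({
    "perception": [3, 4, 3, 2, 4, 4, 3, 1, 4, 3, 2, 4],
    "treatment":  [1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1],
})
df["perception"] = pd.Categorical(df["perception"], categories=[1, 2, 3, 4], ordered=True)

model = OrderedModel(df["perception"], df[["treatment"]], distr="logit")
result = model.fit(method="bfgs", disp=False)
# Reports the treatment coefficient and the threshold parameters (statsmodels
# encodes thresholds after the first as log-increments).
print(result.summary())

# Null (thresholds-only) model: its maximum log-likelihood equals that of the
# observed category proportions, so the LR statistic can be computed directly.
counts = df["perception"].value_counts()
ll_null = float((counts * np.log(counts / counts.sum())).sum())
lr_stat = 2 * (result.llf - ll_null)
print(f"LR stat = {lr_stat:.4f}, p = {stats.chi2.sf(lr_stat, df=1):.6f}")
```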
The adjusted predictions plot can be found in Figure 8. This plot shows the incidence of the tool’s use in each of the satisfaction levels.
Students who did not use the tool tended to select the levels representing low satisfaction, and the same pattern is observed for level 3, which represents high feedback perception. For level 4 (very high feedback perception), however, students who used the tool showed a greater tendency to reach this level.
5.4. Opinion of Students That Used the GenAI Tool
The results of questions 3, 4, 5, and 6 can be found in Figure 9. Answers from 135 students were collected for this analysis.
6. Discussion
In this work, we found that students who used the GenAI tool had a more positive perception of the feedback process than students who did not use it. This diverges sharply from Tossell et al. (2024), who reported that participants’ expectations of the learning value of ChatGPT did not change after using it during an essay creation activity. A possible explanation is that our methodology creates a conversation between students and the GenAI tool, encouraging them to reflect on the activity’s educational objectives within each partial delivery, thus creating a continuous improvement cycle.
Building such an educational process while avoiding excessive use of the tool, as stated by H. Wang et al. (2024), is a complex task for professors, given their limited time and the number of students they serve. The proposed methodology aims to reduce professors’ workload while creating a structured learning environment that shapes a comprehensive learning experience for students. Albdrani and Al-Shargabi (2023) note that this type of environment also stimulates engagement, exploration, explanation, elaboration, personalization, and evaluation cycles.
This capacity for personalization was also noted by Márquez and Martínez (2024), who compared the performance of professors and ChatGPT when grading activities and found that ChatGPT fails to fully identify the quality of deliveries. In this sense, this research shows that a clear methodology contributes to providing a personalized feedback experience despite the limitations of the tool.
Another finding of this work is that a high percentage of students who used ChatGPT (97%) would recommend its use to their classmates or friends. This is consistent with the results of Schmidt-Fajlik (2023), who reports that this is because ChatGPT gives detailed feedback and its explanations are easy to understand. The methodology proposed in this work strengthens these characteristics by specifying prompt tuning points that define the type and form of the feedback to be generated. This tuning is highly relevant and allows the AI tool to adapt to different profiles and disciplines. Thus, the ability to generate detailed explanations specifically targeted at achieving educational objectives is an advantage of our methodology over traditional AI tools focused only on error detection. As stated by Kohnke (2024), this type of environment could be useful for developing high-level knowledge and skills.
On the other hand, trust and ethical issues arise regarding the use of ChatGPT. For example, Ngo (2023) explains that, although a high proportion of students report positive experiences when using ChatGPT for academic purposes, they also have concerns about the credibility of the information provided by the tool. Adding to this, Wecks et al. (2024) state that students’ use of AI tools may be correlated with a decrease in their academic results. Hence, future research on ethical guidance and training in these tools for teachers and students is important.
From another point of view, Stamper et al. (2024) emphasize the creation of frameworks rooted in pedagogical models. Considering this, the framework presented in this work has the advantage of being based on the feedback model of Hattie and Timperley (2007). This recognizes the efforts made in the state of the art and extends the frontiers of knowledge, integrating tools to support the teaching–learning process.
One limitation of this research is the lack of analysis of learning gains linked to the shift in the feedback process. It may turn out that, although students have a positive experience with the tool, their learning gains do not improve significantly, as reported in the study by Sun et al. (2024). Relatedly, the fact that the methodology showed positive results despite the heterogeneity of the groups is a strength of this work. However, it can also be recognized as a limitation: to explore the results in greater depth, similar experiments could be conducted in groups from the same discipline or with the same type of delivery format, for example.
The results of this research showed that 98% of the students who used the GenAI tool were able to identify their areas of opportunity and that 81% used the feedback to improve their work. These results reflect the development of self-regulation and self-direction skills in students, a fundamental responsibility of institutions and professors, as mentioned by Falconí et al. (2024). Furthermore, 96% of students said they would use the tool again, which aligns with research where students perceive GenAI feedback as more comprehensive (Allen & Mizumoto, 2024), detailed (Guo & Wang, 2024), attractive, and less intimidating (Allen & Mizumoto, 2024).
Additionally, students indicated that they would like to continue using the tool in other activities. This agrees with Boud (2015), who established that the feedback process must be directed by students as agents endowed with decision and action.
Finally, there is an opportunity for future work to identify whether the methodology affected the time professors can invest in the feedback process. Such a study is valuable because teachers recognize that feedback is a key action for learning but also a time-consuming activity, as found by Aguayo-Hernández et al. (2024).
A summary of the works analyzed in this section can be found in Table 6.
7. Conclusions
This research work presented a statistical analysis on the comparison of students’ perceptions of feedback provided by a process that integrated a GenAI tool and a traditional process. A total of 263 undergraduate students in Architecture, Biomimicry, and Discrete Mathematics participated in this study. A statistically analyzed and expert-validated survey was used to collect students’ insights in a pretest–post-test process with focus and control groups.
Furthermore, a methodology to enhance the traditional feedback process with ChatGPT, with the aim of achieving the course’s educational objectives, was presented.
It was found that the AI-enhanced feedback process showed a greater positive effect on students’ perceptions of their feedback experience compared to a traditional process, supporting the hypothesis of this work.
This demonstrates that AI tools can be effective enablers that give students a customized and interactive experience. Based on these characteristics, the learning environment created with the help of the GenAI tool shapes an educational process centered on the student. This environment promotes a guiding and supportive role for teachers and allows them to focus their efforts on tasks of greater interest and service to their students.
Another advantage of this methodology is that it builds an autonomous learning environment and fosters student ownership of the learning process. These positive elements are amplified by the conversational features of GenAI tools, crafting a cyclical and incremental learning process according to students’ needs. Moreover, a strength of this methodology is that it extends pedagogical models and educational methodologies contained in the state of the art, a characteristic that lends solidity to its positive results.
On the other hand, reasoning capabilities that go beyond basic probabilistic prediction have recently been integrated into GenAI tools. Future work to analyze these models’ influence on the development of metacognitive skills related to the feedback process would be of interest to extend the methodology presented in this work.
This work opens new perspectives on how different modalities (text, images, and videos) and tools (other than ChatGPT) can be used to improve the teaching–learning process and learning gains. Future work can be centered on exploring the impact of these elements on grades or on achieving students’ learning outcomes by developing, at the same time, skills and conceptual knowledge.
However, it is important to investigate the potential drawbacks and restrictions of implementing these technologies widely, such as ethical concerns, the danger of over-reliance on AI, or potential access restrictions to technology.
A further limitation of this work is that it was implemented at the undergraduate level in a few disciplinary fields. Further research is needed to extend this analysis to other fields, like Social Sciences or Business, and to other levels, like elementary or secondary school.
Finally, the interest generated by the application of artificial intelligence tools in everyday aspects, such as education, raises the need for serious experimentation to clarify and classify their impact on society. This study contributes significantly to this purpose by offering a promising path on how to integrate artificial intelligence into innovative educational processes.
Author Contributions: Conceptualization, C.H.A.-H., G.H., M.E.E.-G., R.A.-G. and Y.A.V.-J.; data curation, G.H.; formal analysis, G.H.; investigation, C.H.A.-H., T.G.-B., G.H., M.E.E.-G. and R.A.-G.; methodology, C.H.A.-H., G.H., M.E.E.-G. and R.A.-G.; project administration, G.H.; resources, C.H.A.-H., T.G.-B., G.H., M.E.E.-G. and R.A.-G.; software, T.G.-B., G.H., M.E.E.-G. and R.A.-G.; supervision, G.H.; validation, M.E.E.-G.; visualization, C.H.A.-H., G.H., M.E.E.-G. and R.A.-G.; writing—original draft, C.H.A.-H., T.G.-B., G.H., M.E.E.-G., R.A.-G. and Y.A.V.-J.; writing—review & editing, C.H.A.-H., G.H. and Y.A.V.-J. All authors have read and agreed to the published version of the manuscript.
Institutional Review Board Statement: This study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Ethics Research Board of Tecnologico de Monterrey (protocol code CA-EIC-2408-02; 20 August 2024).
Informed Consent Statement: Informed consent was obtained from all subjects involved in this study.
Data Availability Statement: The raw data supporting the conclusions of this article will be made available by the authors on request.
Acknowledgments: The authors would like to acknowledge the support of the Summit AI 2024 event, Tecnologico de Monterrey, Mexico, in the production of this work; the pedagogical guidance of the Centro de Desarrollo Docente e Innovación Educativa (CEDDIE), Tecnologico de Monterrey, Mexico, during the implementation of this research; and Caribay Godoy Rangel for proofreading the manuscript.
Conflicts of Interest: The authors declare no conflicts of interest.
The following abbreviations are used in this manuscript:
AI | Artificial intelligence
ANOVA | Analysis of Variance
CVR | Content Validity Ratio
GenAI | Generative artificial intelligence
Figure 1 Traditional feedback process.
Figure 2 Prompt structure to provide the AI feedback tool with a set of statements to describe the activity and the tool behavior.
Figure 3 Proposed methodology to integrate generative artificial intelligence into the feedback process.
Figure 4 Likert scale chart of the satisfaction about feedback received on previous learning experiences.
Figure 5 Density chart of satisfaction about feedback received on previous learning experiences.
Figure 6 Likert scale chart of the satisfaction about feedback received while solving activities with (focus group) and without (control group) GenAI.
Figure 7 Density chart of satisfaction about feedback received while solving activities with (focus group) and without (control group) GenAI.
Figure 8 Adjusted predictions for the ordinal logistic regression applied to the treatment received (predictor variable) and the feedback perception (criterion variable).
Figure 9 Answers of students that used the GenAI tool for questions: 3. “Did the feedback you received in this activity from the AI tool make you realize your areas of opportunity or improvement?”, 4. “Would you ask the AI tool for feedback again?”, 5. “Did you use the feedback the AI tool gave you to improve the delivery of your activity?”, and 6. “Would you recommend the use of the AI tool to your colleagues and/or friends?”.
Table 1 Sample distribution considering non-usable data.
Course | Candidate Students | Control Group Usable Data | Focus Group Usable Data
---|---|---|---
Discrete Mathematics | 17 | 0 | 16
Architecture | 96 | 20 | 51
Biomimicry | 150 | 105 | 60
Total | 263 | 125 | 127
Table 2 Questions, expert classification, and Content Validity Ratio (CVR) values for items in the survey to collect students’ perceptions of the feedback received when solving a learning activity. For each item, the decision about keeping, removing, or adding it is shown.
# | Question | Type of Answer | Essential | Useful, But Not Essential | Not Necessary | Content Validity Ratio | Decision
---|---|---|---|---|---|---|---
1 | What is your level of satisfaction with the feedback you received for the activity? | Very low, Low, High, Very high | 20 | 2 | 0 | 0.82 | To keep and modify the wording
2 | Do you think you had a better feedback experience in this activity than in previous courses or activities? | Yes, No | 14 | 7 | 1 | 0.27 | To remove
3 | On average, how many times did you request feedback from any entity or person during the activity? | Numeric | 11 | 8 | 3 | 0 | To remove
4 | Order the following entities with respect to the frequency with which you turned to them to ask for feedback or to resolve doubts during the activity. The first option is the one you used the most and the last one is the one you used the least. Professor. Other students. Friendships. Internet. Artificial intelligence tools. | Order a list of items | 11 | 11 | 0 | 0 | To remove
5 | During the activity, did you turn to other entities to receive feedback or resolve questions? If yes, write down the entities in the following space separated by commas. | Yes (text), No | 8 | 10 | 4 | −0.27 | To remove
6 | Did you use the AI tool to receive feedback during the activity? | Yes, No | 20 | 2 | 0 | 0.82 | To keep
7 | Was the feedback you received in the activity from the AI tool useful in improving your performance? | Yes, No | 21 | 1 | 0 | 0.91 | To remove. Even though this item is validated, experts noted that it repeats the idea of item 8.
8 | Did the feedback you received in this activity from the AI tool make you realize your areas of opportunity or improvement? | Not at all useful, Not very useful, Useful, Very useful | 21 | 1 | 0 | 0.91 | To keep
9 | Did you use the feedback the AI tool gave you to improve the delivery of your activity? | Yes, No | 20 | 2 | 0 | 0.82 | To keep
10 | Would you ask the AI tool for feedback again? | Never, Sometimes, Frequently, Always | 19 | 3 | 0 | 0.73 | To keep
11 | Indicate three characteristics that you value or liked about the feedback and/or use of the AI tool. | Text | 11 | 11 | 0 | 0 | To remove
Table 3 Exploratory analysis of satisfaction with feedback received on previous learning experiences.
Value | Focus Group | Control Group | Both Treatments
---|---|---|---
N | 70 | 23 | 108
Min. | 2.00 | 2.00 | 1.00
1st qu. | 3.00 | 3.00 | 3.00
Median | 3.00 | 3.00 | 3.00
Mean | 3.06 | 3.30 | 3.23
3rd qu. | 3.00 | 4.00 | 4.00
Max. | 4.00 | 4.00 | 4.00
Standard deviation | 0.56 | 0.56 | 0.61
Table 4 Exploratory analysis of the satisfaction with feedback received while solving activities with (focus group) and without (control group) GenAI.
Value | Focus Group | Control Group
---|---|---
N | 127 | 125
Min. | 3.00 | 1.00
1st qu. | 3.00 | 3.00
Median | 3.00 | 3.00
Mean | 3.49 | 3.27
3rd qu. | 4.00 | 4.00
Max. | 4.00 | 4.00
Standard deviation | 0.50 | 0.60
Table 5 Threshold coefficients for the ordinal logistic regression applied to the control and focus groups.
Threshold | Estimate | Std. Error | z Value
---|---|---|---
1|2 | −4.5273 | 0.7168 | −6.316
2|3 | −3.4069 | 0.4253 | −8.010
3|4 | 0.7314 | 0.1905 | 3.839
Table 6 Comparison of works analyzed in the discussion section.
Reference | Strategy | Number of Students | Application Context | Discipline | Type of Study | Results
---|---|---|---|---|---|---
This work | ChatGPT as an extension tool for the feedback process | 263 | Undergraduate students | Discrete Mathematics, Architecture, Biomimicry | Qualitative and quantitative. Pretest–post-test. Control and focus groups. | There is a significant difference in students’ perceptions between those who used GenAI and those who did not.
Tossell et al. (2024) | ChatGPT for an essay creation activity | 24 | Air force academy | Design | Quantitative. Pretest–post-test. | No significant difference in perception of the value of the tool.
Schmidt-Fajlik (2023) | ChatGPT for writing | 69 | University | Japanese writing | Qualitative | ChatGPT is highly recommended by students because its feedback is detailed and its explanations are easy to understand.
Kohnke (2024) | GenAI for writing | 14 | University | English for academic purposes | Qualitative | The in-depth explanations of GenAI tools are useful for students to have a better understanding of the feedback.
Sun et al. (2024) | GenAI for programming | 82 | University | Programming for Educational Technology majors | Quantitative. Control and focus groups. | Positive student acceptance of ChatGPT with no significant difference regarding learning gains.
Albdrani and Al-Shargabi (2023) | ChatGPT for an Internet of Things course | 20 | University | Computer Sciences | Qualitative and quantitative. Control and focus groups. | Positive perception of a personalized learning experience.
Ngo (2023) | ChatGPT for academic purposes | 200 | University | Information Technology | Qualitative | A total of 86% of students reported high satisfaction when using ChatGPT in previous academic experiences.
Márquez and Martínez (2024) | Comparison between feedback provided by a professor and ChatGPT | No data | University | Psychology | | ChatGPT partially identifies the quality of deliveries, and grades differed between both evaluators. ChatGPT presented the ability to personalize feedback.
Wecks et al. (2024) | Detection of the use of GenAI in writing essays | 193 | University | Financial Accounting | Quantitative. Control and focus groups. | Students who used GenAI scored lower in the final exam.
Abdelghani, R.; Sauzéon, H.; Oudeyer, P. Y. Generative AI in the classroom: Can students remain active learners?. arXiv; 2023; arXiv: 2310.03192
Afzaal, M.; Nouri, J.; Zia, A.; Papapetrou, P.; Fors, U.; Wu, Y.; Li, X.; Weegar, R. Explainable AI for data-driven feedback and intelligent action recommendations to support students’ self-regulation. Frontiers in Artificial Intelligence; 2021; 4, 723447. [DOI: https://dx.doi.org/10.3389/frai.2021.723447]
Aguayo-Hernández, C. H.; Sánchez Guerrero, A.; Vázquez-Villegas, P. The learning assessment process in higher education: A grounded theory approach. Education Sciences; 2024; 14,
Albdrani, R. N.; Al-Shargabi, A. A. Investigating the effectiveness of ChatGPT for providing personalized learning experience: A case study. International Journal of Advanced Computer Science & Applications; 2023; 14,
Allen, T. J.; Mizumoto, A. ChatGPT over my friends: Japanese English-as-a-Foreign-Language learners’ preferences for editing and proofreading strategies. RELC Journal; 2024; 00336882241262533. [DOI: https://dx.doi.org/10.1177/00336882241262533]
Al Murshidi, G.; Shulgina, G.; Kapuza, A.; Costley, J. How understanding the limitations and risks of using ChatGPT can contribute to willingness to use. Smart Learning Environments; 2024; 11,
Angulo Valdearenas, M. J.; Clarisó, R.; Domènech Coll, M.; Garcia-Brustenga, G.; Gómez Cardosa, D.; Mas Garcia, X. Com incorporar la IA en les activitats d’aprenentatge; Repositori Institucional (O2) Universitat Oberta de Catalunya: 2024; Available online: http://hdl.handle.net/10609/151242 (accessed on 12 January 2025).
Bewersdorff, A.; Hartmann, C.; Hornberger, M.; Seßler, K.; Bannert, M.; Kasneci, E.; Kasneci, G.; Zhai, X.; Nerdel, C. Taking the next step with generative artificial intelligence: The transformative role of multimodal large language models in science education. Learning and Individual Differences; 2025; 118, 102601. [DOI: https://dx.doi.org/10.1016/j.lindif.2024.102601]
Boud, D. Feedback: Ensuring that it leads to enhanced learning. The Clinical Teacher; 2015; 12,
Boud, D.; Molloy, E. Rethinking models of feedback for learning: The challenge of design. Assessment & Evaluation in Higher Education; 2013; 38,
Boud, D.; Soler, R. Sustainable assessment revisited. Assessment & Evaluation in Higher Education; 2016; 41,
Brookhart, S. M. Feedback and measurement. Classroom assessment and educational measurement; Routledge: 2020; 63.
Brown, S. Assessment for learning. Learning and Teaching in Higher Education; 2005; (1), pp. 81-89.
Burner, T.; Lindvig, Y.; Wærness, J. I. “We Should Not Be Like a Dinosaur”—Using AI Technologies to Provide Formative Feedback to Students. Education Sciences; 2025; 15,
Campos, M. AI-assisted feedback in CLIL courses as a self-regulated language learning mechanism: Students’ perceptions and experiences. European Public & Social Innovation Review; 2025; 10, pp. 1-14.
Carless, D. Sustainable feedback and the development of student self-evaluative capacities. Reconceptualising feedback in higher education; Routledge: 2013; pp. 113-122.
Carless, D.; Winstone, N. Teacher feedback literacy and its interplay with student feedback literacy. Teaching in Higher Education; 2023; 28,
Chang, C. Y.; Chen, I. H.; Tang, K. Y. Roles and research trends of ChatGPT-based learning. Educational Technology & Society; 2024; 27,
Chu, H. C.; Lu, Y. C.; Tu, Y. F. How GenAI-supported multi-modal presentations benefit students with different motivation levels. Educational Technology & Society; 2025; 28,
Cordero, J.; Torres-Zambrano, J.; Cordero-Castillo, A. Integration of Generative Artificial Intelligence in Higher Education: Best Practices. Education Sciences; 2024; 15,
Dai, W.; Tsai, Y. S.; Lin, J.; Aldino, A.; Jin, H.; Li, T.; Gašević, D.; Chen, G. Assessing the proficiency of large language models in automatic feedback generation: An evaluation study. Computers and Education: Artificial Intelligence; 2024; 7, 100299. [DOI: https://dx.doi.org/10.1016/j.caeai.2024.100299]
Falconí, C. A. R.; Figueroa, I. J. G.; Farinango, E. V. G.; Dávila, C. N. M. Estrategias para fomentar la autonomía del estudiante en la educación universitaria: Promoviendo el aprendizaje autorregulado y la autodirección académica. Reincisol; 2024; 3,
Guo, K.; Wang, D. To resist it or to embrace it? Examining ChatGPT’s potential to support teacher feedback in EFL writing. Education and Information Technologies; 2024; 29,
Güner, H.; Er, E.; Akçapinar, G.; Khalil, M. From chalkboards to AI-powered learning. Educational Technology & Society; 2024; 27,
Hagendorff, T. Mapping the ethics of generative AI: A comprehensive scoping review. Minds and Machines; 2024; 34,
Hair Junior, J. F.; Hult, G. T. M.; Ringle, C. M.; Sarstedt, M. A primer on partial least squares structural equation modeling (PLS-SEM); SAGE Publications, Inc.: 2014.
Hattie, J.; Timperley, H. The power of feedback. Review of Educational Research; 2007; 77,
Hounsell, D. Towards more sustainable feedback to students. Rethinking assessment in higher education; Routledge: 2007; pp. 111-123.
Huesca, G.; Martínez-Treviño, Y.; Molina-Espinosa, J. M.; Sanromán-Calleros, A. R.; Martínez-Román, R.; Cendejas-Castro, E. A.; Bustos, R. Effectiveness of using ChatGPT as a tool to strengthen benefits of the flipped learning strategy. Education Sciences; 2024; 14,
Hutson, J.; Fulcher, B.; Ratican, J. Enhancing assessment and feedback in game design programs: Leveraging generative AI for efficient and meaningful evaluation. International Journal of Educational Research and Innovation; 2024; pp. 1-20. [DOI: https://dx.doi.org/10.46661/ijeri.11038]
Jiménez, A. F. Integration of AI helping teachers in traditional teaching roles. European Public & Social Innovation Review; 2024; 9, pp. 1-17.
Khahro, S. H.; Javed, Y. Key challenges in 21st century learning: A way forward towards sustainable higher educational institutions. Sustainability; 2022; 14,
Kohnke, L. Exploring EAP students’ perceptions of GenAI and traditional grammar-checking tools for language learning. Computers and Education: Artificial Intelligence; 2024; 7, 100279. [DOI: https://dx.doi.org/10.1016/j.caeai.2024.100279]
Korseberg, L.; Elken, M. Waiting for the revolution: How higher education institutions initially responded to ChatGPT. Higher Education; 2024; pp. 1-16. [DOI: https://dx.doi.org/10.1007/s10734-024-01256-4]
Larasati, A.; DeYong, C.; Slevitch, L. Comparing neural network and ordinal logistic regression to analyze attitude responses. Service Science; 2011; 3,
Lawshe, C. H. A quantitative approach to content validity. Personnel Psychology; 1975; 28,
Lin, S.; Crosthwaite, P. The grass is not always greener: Teacher vs. GPT-assisted written corrective feedback. System; 2024; 127, 103529. [DOI: https://dx.doi.org/10.1016/j.system.2024.103529]
Mayordomo, R. M.; Espasa, A.; Guasch, T.; Martínez-Melo, M. Perception of online feedback and its impact on cognitive and emotional engagement with feedback. Education and Information Technologies; 2022; 27,
Márquez, A. M. B.; Martínez, E. R. Retroalimentación formativa con inteligencia artificial generativa: Un caso de estudio. Wímb lu; 2024; 19,
Mendiola, M. S.; González, A. M. Evaluación del y para el aprendizaje: Instrumentos y estrategias; Imagia Comunicación: 2020.
Moreno Olivos, T. Evaluación del aprendizaje y para el aprendizaje: Reinventar la evaluación en el aula; Universidad Autónoma Metropolitana: 2016.
Naz, I.; Robertson, R. Exploring the feasibility and efficacy of ChatGPT3 for personalized feedback in teaching. Electronic Journal of e-Learning; 2024; 22,
Ngo, T. T. A. The perception by university students of the use of ChatGPT in education. International Journal of Emerging Technologies in Learning; 2023; 18,
Ostertagova, E.; Ostertag, O.; Kováč, J. Methodology and application of the Kruskal-Wallis test. Applied Mechanics and Materials; 2014; 611, pp. 115-120. [DOI: https://dx.doi.org/10.4028/www.scientific.net/AMM.611.115]
Panadero, E.; Andrade, H.; Brookhart, S. Fusing self-regulated learning and formative assessment: A roadmap of where we are, how we got here, and where we are going. The Australian Educational Researcher; 2018; 45, pp. 13-31. [DOI: https://dx.doi.org/10.1007/s13384-018-0258-y]
Pozdniakov, S.; Brazil, J.; Abdi, S.; Bakharia, A.; Sadiq, S.; Gašević, D.; Denny, P.; Khosravi, H. Large language models meet user interfaces: The case of provisioning feedback. Computers and Education: Artificial Intelligence; 2024; 7, 100289. [DOI: https://dx.doi.org/10.1016/j.caeai.2024.100289]
Schmidt-Fajlik, R. ChatGPT as a grammar checker for Japanese English language learners: A comparison with Grammarly and ProWritingAid. AsiaCALL Online Journal; 2023; 14,
Sheehan, T.; Riley, P.; Farrell, G.; Mahmood, S.; Calhoun, K.; Thayer, T.-L. Predicts 2024: Education automation, adaptability and acceleration; Gartner: 3 December 2024; Available online: https://www.gartner.com/en/documents/5004931 (accessed on 12 January 2025).
Stamper, J.; Xiao, R.; Hou, X. Enhancing LLM-based feedback: Insights from intelligent tutoring systems and the learning sciences. International Conference on Artificial Intelligence in Education; Recife, Brazil, July 8–12; Springer Nature: 2024; pp. 32-43.
Stiggins, R. From formative assessment to assessment for learning: A path to success in standards-based schools. Phi Delta Kappan; 2005; 87,
Stobart, G. Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback; Cambridge University Press: 2018; pp. 29-51.
Sun, D.; Boudouaia, A.; Zhu, C.; Li, Y. Would ChatGPT-facilitated programming mode impact college students’ programming behaviors, performances, and perceptions? An empirical study. International Journal of Educational Technology in Higher Education; 2024; 21,
Sung, G.; Guillain, L.; Schneider, B. Using AI to Care: Lessons Learned from Leveraging Generative AI for Personalized Affective-Motivational Feedback. International Journal of Artificial Intelligence in Education; 2025; pp. 1-40. [DOI: https://dx.doi.org/10.1007/s40593-024-00455-5]
Teng, M. F. “ChatGPT is the companion, not enemies”: EFL learners’ perceptions and experiences in using ChatGPT for feedback in writing. Computers and Education: Artificial Intelligence; 2024; 7, 100270. [DOI: https://dx.doi.org/10.1016/j.caeai.2024.100270]
Teng, M. F. Metacognitive Awareness and EFL Learners’ Perceptions and Experiences in Utilising ChatGPT for Writing Feedback. European Journal of Education; 2025; 60,
Tossell, C. C.; Tenhundfeld, N. L.; Momen, A.; Cooley, K.; de Visser, E. J. Student perceptions of ChatGPT use in a college essay assignment: Implications for learning, grading, and trust in artificial intelligence. IEEE Transactions on Learning Technologies; 2024; 17, pp. 1069-1081. [DOI: https://dx.doi.org/10.1109/TLT.2024.3355015]
Tran, T. M.; Bakajic, M.; Pullman, M. Teacher’s pet or rebel? Practitioners’ perspectives on the impacts of ChatGPT on course design. Higher Education; 2024; [DOI: https://dx.doi.org/10.1007/s10734-024-01350-7]
UNESCO. ChatGPT e inteligencia artificial en la educación superior. La Organización de las Naciones Unidas para la Educación, la Ciencia y la Cultura; 2023; Available online: https://unesdoc.unesco.org/ark:/48223/pf0000385146_spa (accessed on 21 November 2024).
Van Boekel, M.; Hufnagle, A. S.; Weisen, S.; Troy, A. The feedback I want versus the feedback I need: Investigating students’ perceptions of feedback. Psychology in the Schools; 2023; 60,
Wang, H.; Dang, A.; Wu, Z.; Mac, S. Generative AI in higher education: Seeing ChatGPT through universities’ policies, resources, and guidelines. Computers and Education: Artificial Intelligence; 2024; 7, 100326. [DOI: https://dx.doi.org/10.1016/j.caeai.2024.100326]
Wang, Y.; Rodríguez de Gil, P.; Chen, Y. H.; Kromrey, J. D.; Kim, E. S.; Pham, T.; Nguyen, D.; Romano, J. L. Comparing the performance of approaches for testing the homogeneity of variance assumption in one-factor ANOVA models. Educational and Psychological Measurement; 2017; 77,
Wang, Z.; Yin, Z.; Zheng, Y.; Li, X.; Zhang, L. Will graduate students engage in unethical uses of GPT? An exploratory study to understand their perceptions. Educational Technology & Society; 2025; 28,
Wecks, J. O.; Voshaar, J.; Plate, B. J.; Zimmermann, J. Generative AI usage and academic performance; 2024; Available online: https://ssrn.com/abstract=4812513 (accessed on 4 January 2025).
Wesolowski, B. C. “Classroometrics”: The validity, reliability, and fairness of classroom music assessments. Music Educators Journal; 2020; 106,
White, J.; Fu, Q.; Hays, S.; Sandborn, M.; Olea, C.; Gilbert, H.; Elnashar, A.; Spencer-Smith, J.; Schmidt, D. C. A prompt pattern catalog to enhance prompt engineering with ChatGPT. arXiv; 2023; arXiv: 2302.11382
Wiliam, D.; Lee, C.; Harrison, C.; Black, P. Teachers developing assessment for learning: Impact on student achievement. Assessment in Education: Principles, Policy & Practice; 2004; 11,
Wilson, F. R.; Pan, W.; Schumsky, D. A. Recalculation of the critical values for Lawshe’s content validity ratio. Measurement and Evaluation in Counseling and Development; 2012; 45,
Xu, J.; Liu, Q. Uncurtaining windows of motivation, enjoyment, critical thinking, and autonomy in AI-integrated education: Duolingo Vs. ChatGPT. Learning and Motivation; 2025; 89, 102100. [DOI: https://dx.doi.org/10.1016/j.lmot.2025.102100]
Zhou, X.; Zhang, J.; Chan, C. Unveiling students’ experiences and perceptions of Artificial Intelligence usage in higher education. Journal of University Teaching and Learning Practice; 2024; 21,
Zhou, Y.; Zhang, M.; Jiang, Y. H.; Gao, X.; Liu, N.; Jiang, B. A Study on Educational Data Analysis and Personalized Feedback Report Generation Based on Tags and ChatGPT. arXiv; 2025; arXiv: 2501.06819
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Feedback is an essential component of the teaching–learning process; however, it can vary in quality due to different contexts and students’ and professors’ individual characteristics. This research explores the effect of generative artificial intelligence (GenAI) in strengthening personalized and timely feedback, initially defining an adaptable framework to integrate GenAI into feedback mechanisms defined in theoretical frameworks. We applied a between-subjects analysis in an experimental research design with 263 undergraduate students across multiple disciplines, based on an approach consisting of a pretest–post-test process and control and focus groups, to evaluate students’ perceptions of artificial intelligence-enhanced feedback versus traditional professor-led feedback. The results show that students who used GenAI reported statistically significantly higher satisfaction levels and a greater sense of ownership in the feedback process. Additionally, GenAI scaffolded continuous improvement and active student participation through a structured and accessible feedback environment, with 97% of students willing to reuse the tool. These findings show that GenAI is a valuable tool to complement professors in the creation of an integrated feedback model. This study outlines directions for future research on combining artificial intelligence and innovative strategies to produce a long-term impact on education.
1 School of Engineering and Sciences, Tecnologico de Monterrey, Mexico City 14380, Mexico
2 Educational Innovation and Digital Learning, Tecnologico de Monterrey, Monterrey 64849, Mexico; [email protected]
3 School of Architecture, Art and Design, Tecnologico de Monterrey, Mexico City 14380, Mexico; [email protected]
4 Academic Vicerectory, Tecnologico de Monterrey, Monterrey 64849, Mexico; [email protected]
5 Digital Enablement and Transformation, Tecnologico de Monterrey, Monterrey 64849, Mexico; [email protected]
6 Food Innovation International Center, Tecnologico de Monterrey, Ciudad Obregón 85010, Mexico; [email protected]