Abstract
The importance of self-regulation as an essential element of lifelong learning calls for the design of learning processes that promote it. In this context, peer assessment, which promotes metacognitive reflection and guides students in modifying their learning strategies during the process, is considered a key and effective element for its development. This research studies the effects of implementing peer feedback strategies on the development of the competence of learning to learn. The aim is to improve self-regulation in Spanish higher education students, specifically Master’s students in an online learning environment at an open university. The main objective of this contribution is to determine whether the active involvement of students (n = 111) in peer assessment, in the role of either assessor or assessed, constitutes an effective self-regulation strategy. To this end, a self-regulation questionnaire was administered at the beginning and end of the experience, together with a satisfaction questionnaire on the peer assessment experience. The results highlight that, through the implemented peer feedback strategies, students specifically improved their capacity to analyse tasks in depth and to visualise objectives clearly, elements related to the initial planning phase of self-regulation. The conclusions point to the need for students to take responsibility for self-regulation, and to the opportunity that technology provides in supporting student self-regulation, motivation, and participation in assessment in online learning environments.
Introduction
Education and lifelong learning are pressing needs in our current society, and the competence of learning to learn is a necessary objective for its attainment (Cano, 2023). Consequently, the development and assessment of this competence have become a challenge faced by higher education institutions. We understand the competence of learning to learn as the ability to continue and persist in learning, and to organise one’s own learning. This involves effective time management, individual and group information processing, and awareness of one’s own needs and cognitive processes as a learner (Martín & Moreno, 2007). Self-regulation is an essential element of this competence, and therefore it is necessary to design learning processes that promote it (Lluch & Cabrera, 2023). A self-regulated learning process is an “active and constructive process through which students establish goals for their learning, and then monitor, regulate, and control their cognition, motivation, and behaviour, guided and constrained by their goals and contextual features in the environment” (Pintrich, 2000, p. 453). Self-regulation, or the capacity for self-regulated learning, is a critical element for achieving good learning outcomes (Dunn et al., 2014) and requires an intentional and systematic developmental process that must be appropriately planned and designed (Cano, 2014).
Peer feedback has considerable potential for developing important competencies, notably self-regulated learning and critical thinking, and the design of the peer feedback process is crucial for its success (Zeng & Ravindran, 2025). However, it is still not completely clear how to ensure the success of peer feedback, and a risk of unsatisfactory results remains. The purpose of this study is to contribute to research into student self-regulation by addressing the question of how peer assessment can support student self-regulation. The specific study objectives and research questions are presented in Section “Materials and methods” (Table 1).
Student self-regulation and peer feedback
Following the conceptualisation of Pintrich (2000) and Boekaerts and Corno (2005), self-regulation has a set of components or dimensions: cognitive, which refers to the development of new knowledge through strategic planning and the execution of learning tasks; metacognitive, in which knowledge about one’s cognitive processes is used to establish strategies for regulating and controlling learning; and motivational, affective, and emotional, which help create conducive situations for learning to learn through self-concept, self-efficacy, intrinsic and extrinsic motivation, among other strategies. Furthermore, the development of the self-regulation process (Zimmerman, 2000, 2008) consists of three phases: the planning or preparation phase of learning, the execution or monitoring phase for self-observation and self-control, and the reflection phase associated with self-judgement and self-reaction.
Peer assessment is considered an effective strategy for developing students’ self-regulation capacity (Boud, 2000; Lodge et al., 2018; Panadero, 2017). Opportunities for dialogue are necessary for guided reflection and improvement, not only of the outcome or product but also of the learning process itself (Lluch & Cabrera, 2023). In this context, assessment should not only be formative (aimed at promoting learning), but also self-regulatory (aimed at improving the cognitive, metacognitive, and motivational elements of the learning process). According to Broadbent et al. (2020), peer feedback can promote metacognitive reflection and guide students in modifying strategies throughout learning.
In that sense, feedback is a key component of formative assessment and one of the most important elements for improving students’ performance and learning (Hattie & Timperley, 2007; Winston et al., 2017). While a more conventional view considered feedback as mere information provided by an agent (teachers, peers, or oneself), the constructivist conceptualisation focuses on the process and emphasises the role of students in using feedback for improvement (Carless & Boud, 2018). This “new paradigm” of feedback (Winstone & Carless, 2019) places students at the centre of the assessment process through strategies such as peer assessment or self-assessment. In this new context, the concept of internal feedback refers to the perceptions that students generate when comparing their current knowledge and competence with the external information they receive (e.g., the work of their peers) (Nicol, 2021).
Feedback aims to improve students’ understanding, competency development, and other capacities such as study habits, motivation, or self-regulation (Sadler, 2010), all to enhance their progress and academic performance (Hattie & Timperley, 2007). However, many studies indicate that students often do not find feedback useful. Several challenges need to be addressed in the design of peer feedback processes, e.g., issues with the required cognitive processes, affective and social barriers, and limited belief in the usefulness of, or trust in, peer feedback (Zeng & Ravindran, 2025). Recent studies have examined some of these challenges in more detail, for example, the risk that peer feedback may endanger friendships (Özbek et al., 2024). Greisel et al. (2025) refuted the hypothesis that beliefs and orientations affect the quality of peer feedback, but confirmed that receptivity and the recognition of the importance of peer feedback influence perceived feedback adequacy. Fareed Lahza et al. (2025) compared two scaffolding strategies, scripting and self-monitoring, finding a more positive impact of scripting on immediate feedback quality and critical perspectives, although they suggest that self-monitoring might have long-term advantages. Teachers need to design learning activities and assessment processes in which students can put the received feedback into practice. In other words, information should not arrive after the process but during learning (Boud & Molloy, 2012; Henderson et al., 2019), so that it prompts action. Therefore, feedback is most useful when it has an impact on future performance, allowing at least one attempt to use the information received to improve learning (Lui & Andrade, 2022), and it should be timely, descriptive, constructive, specific, and focused on specific learning outcomes and assessment criteria (Andrade & Brookhart, 2016; Shute, 2008).
Online learning environments require a higher level of self-regulated learning (SRL) than more traditional classroom settings. Therefore, finding ways for online students to develop self-regulated learning is crucial when they are learning in virtual environments (Broadbent et al., 2020). Given the importance of self-regulated learning for academic success and lifelong learning, teachers need to be proactive in ensuring that online learning environments, and the educational technologies within them, promote and enhance students’ self-regulation capacity (Poitras & Lajoie, 2018).
Interventions based on new technologies used to support and foster SRL in online environments often adopt two approaches. Firstly, some educational technologies provide direct instruction on how to acquire and develop SRL, with the primary purpose of helping students learn how to regulate their learning. Secondly, other technologies already integrated into online learning environments support and promote SRL while students complete learning tasks. Students can use these technologies to plan their learning activities, self-monitor, collaborate with their peers, or self-assess their learning outcomes (Broadbent et al., 2020). It is important to note that when learning technologies are deliberately used to support self-regulation, student motivation, engagement, and academic performance improve significantly (Kitsantas et al., 2015).
Materials and methods
This study examines the effects of implementing peer feedback strategies on the development of the competence of learning to learn, using monitoring technologies to support the learning process and enhance self-regulation capacity.
To achieve this, a sequence was designed (Lluch & Cano, 2023) and implemented in the Moodle Learning Management System (LMS) to promote self-regulated learning based on Zimmerman’s cyclical model. This sequence was contextualised within a complex learning task, involving the activation of multiple knowledge areas, the management of various disciplinary contents, and the execution of different competencies. Additionally, the task needed to have a duration that allowed for peer assessment, be iterative (with multiple submissions or loops to implement feedback in subsequent versions), and possess a certain level of complexity (i.e., emphasising productive rather than reproductive aspects for students). It was also important that the assessment criteria be relevant and that there be specific criteria for assessing the quality of peer feedback.
In this context, the main objective of the research is to determine whether the active involvement of students in peer assessment enhances their capacity to learn how to learn and therefore constitutes an effective self-regulation strategy. Based on this overall objective, the following specific questions are posed (Table 1):
Table 1. Alignment between the study objectives, research questions, and the data analysis implemented
| Study objectives | Research questions | Data analysis |
|---|---|---|
| (1) Development of students’ capacity for learning to learn | To what extent does the capacity for learning to learn increase among students who have participated in a peer assessment experience? | Factorial analysis and comparison of pre-test (SR-Q pre-test) and post-test (SR-Q post-test) self-regulation questionnaire data. |
| (2) Role of the assessor and the role of the assessed in the development of students’ self-regulation | To what extent does the role of the assessor (being an assessor) or the role of the assessed (being the one assessed) in the peer assessment process enhance students’ capacity for learning to learn? | Comparison of post-test self-regulation questionnaire (SR-Q post-test) and student satisfaction questionnaire (satisfaction-Q) data in their roles as assessor and assessed. |
| (3) Utility of provided and received feedback for academic performance improvement | How does the quality of provided and received feedback influence academic performance? | Comparison of student satisfaction questionnaire (satisfaction-Q) data in their roles as assessor and assessed, and the grade received for both the sequence and the overall subject. |
| (4) Students’ motivation for the development of their self-regulation capacity | How does students’ motivation influence the development of their capacity for learning to learn? | Comparison of overall satisfaction data with the peer assessment experience in the subject and post-test self-regulation questionnaire (SR-Q post-test) results. |
| (5) Influence of students’ motivation on the peer assessment process | How does students’ motivation influence the peer assessment process? | Comparison of overall satisfaction data with the peer assessment experience in the subject and satisfaction results of students in their roles as assessor and assessed (satisfaction-Q). |
Participants
The study was conducted within the subject “Fundamentals of Techno-Pedagogical Design” in the first year of the Master’s in Education and ICT at the Open University of Catalonia. The educational model of this university is entirely online, including the teaching and learning process as well as the assessment. It primarily caters to adult students, the vast majority of whom have professional responsibilities, and over half of whom also have family commitments. A total of 111 students took part in the experience: 44 women and 67 men.
Data collection
The questionnaire developed by Panadero and Alonso Tapia (2024) was used to analyse the development of self-regulation capacity. It was administered at the beginning of the experience (SR-Q pre-test) and once it was completed (SR-Q post-test). Cronbach’s alpha was calculated to assess the internal reliability of the instrument, yielding a value of 0.89. This result indicates high internal reliability, suggesting that the items in the instrument are consistent and measure the same construct. The self-regulation questionnaires (SR-Q pre-test and SR-Q post-test) included 34 items, which participants were asked to rate on a Likert scale ranging from 1 to 5 (see Table 2 below).
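As an illustration, this reliability check can be reproduced with the psych package in R. The following is a minimal sketch, assuming the 34 item responses are stored in a data frame named srq (a hypothetical name).

```r
# Minimal sketch: Cronbach's alpha for the SR-Q items.
# `srq` is a hypothetical data frame with one column per item (SRQ1..SRQ34)
# holding the 1-5 Likert responses.
library(psych)

rel <- psych::alpha(srq, check.keys = TRUE)  # auto-reverses negatively keyed items (e.g. SR-Q5)
round(rel$total$raw_alpha, 2)                # should approximate the 0.89 reported above
```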
Furthermore, a student satisfaction questionnaire (hereinafter satisfaction-Q) was administered to all students regarding their perception of how the feedback they gave or received affected their engagement in the learning process and the development of the competence of learning to learn. The questionnaire included the following items, rated by students on a Likert scale from 1 to 5:
Assessing my classmates’ tasks (i.e., being an assessor) has allowed me to engage more in my learning process (Satisfaction-Q1).
Receiving the opinions, assessments, and advice from my classmates (i.e., being the one assessed) has allowed me to engage more in the learning process (Satisfaction-Q2).
Assessing my classmates’ tasks (i.e., being an assessor) has allowed me to contribute to the development of the competence of learning to learn (Satisfaction-Q3).
Receiving the opinions, assessments, and advice from my classmates (i.e., being the one assessed) has allowed me to contribute to the development of the competence of learning to learn (Satisfaction-Q4).
Assessing my classmates’ tasks (i.e., being an assessor) has taught me how to provide feedback (Satisfaction-Q5).
Receiving the opinions, assessments, and advice from my classmates (i.e., being the one assessed) has taught me how to provide feedback (Satisfaction-Q6).
Overall satisfaction with the peer assessment experience (Satisfaction-Q7).
Data analysis
For the comparison of the results of the SR-Q (pre- and post-test), a Wilcoxon signed-rank test was performed on the paired samples (both individual items and the scores of the seven factors in the pre and post datasets) (Result 1; see Table 1).
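The paired comparison can be sketched in R as follows; srq_pre and srq_post are hypothetical data frame names, assumed to share column names and row order (one row per student).

```r
# Minimal sketch: paired Wilcoxon (signed-rank) tests per SR-Q item.
# `srq_pre` and `srq_post` are hypothetical data frames with matching
# columns (SRQ1..SRQ34) and one row per student, in the same order.
p_values <- sapply(names(srq_pre), function(item) {
  wilcox.test(srq_pre[[item]], srq_post[[item]], paired = TRUE)$p.value
})
names(which(p_values < 0.05))  # items with a significant pre/post difference
```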
For the factorial analysis, also carried out using SR-Q, an Oblimin rotation was employed. The reliability of the factors was measured using Cronbach’s alpha (Result 1; see Table 1).
To compare the factor scores between the two SR-Q datasets (pre and post), factor scores were calculated for each subject in both datasets based on the factor loadings obtained from the Exploratory Factor Analysis (EFA) of the pre dataset (Result 1; see Table 1).
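A minimal sketch of these two steps with the psych package (the oblimin rotation additionally requires the GPArotation package); the data frame names are hypothetical.

```r
# Minimal sketch: EFA with oblimin rotation on the pre-test data, then
# factor scores for both datasets using the pre-test loadings, so that
# pre and post scores are on the same scale.
library(psych)  # the oblimin rotation also needs the GPArotation package

efa_pre <- psych::fa(srq_pre, nfactors = 7, rotate = "oblimin", fm = "minres")
print(efa_pre$loadings, cutoff = 0.3)  # item-factor grouping, cf. Table 3

scores_pre  <- psych::factor.scores(srq_pre, efa_pre)$scores
scores_post <- psych::factor.scores(srq_post, efa_pre)$scores
colMeans(scores_post - scores_pre)     # per-factor differences, cf. Table 4
```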
The Kruskal-Wallis test was used to analyse significant relationships between variables of the SR-Q and the satisfaction-Q (Result 2, Result 3). The chi-square test was employed to determine the existence of significant associations between two categorical variables (Result 2, Result 3; see Table 1). The following categorisation was used for the correlation analysis (illustrated in the sketch after this list):
Between 0 and 0.10: non-existent correlation.
Between 0.10 and 0.29: weak correlation.
Between 0.30 and 0.50: moderate correlation.
Between 0.50 and 1.00: strong correlation.
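A minimal sketch of these tests and of the correlation banding above, assuming a data frame df whose columns hold the SR-Q items, the satisfaction-Q items, and the activity grade (all hypothetical names):

```r
# Minimal sketch: Kruskal-Wallis and chi-square tests, plus the banding
# used to label correlation strength. `df`, its item columns, and
# `activity_grade` are hypothetical names.
kruskal.test(SRQ14 ~ factor(SatisfactionQ1), data = df)  # rank-based group comparison
chisq.test(table(df$SatisfactionQ5, df$activity_grade))  # association of two categorical variables

correlation_band <- function(r) {
  r <- abs(r)
  if (r < 0.10) "non-existent"
  else if (r < 0.30) "weak"
  else if (r <= 0.50) "moderate"
  else "strong"
}
correlation_band(cor(df$SatisfactionQ1, df$SRQ14, method = "spearman"))
```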
To determine which independent variables of one questionnaire significantly influence a dependent variable from another (here, between SR-Q post-test and satisfaction-Q), a multiple regression analysis was performed using the lm() function to fit a linear regression model (Result 4; see Table 1).
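A minimal sketch of such a model; the column names are hypothetical, since hyphenated names like satisfaction-Q7 are not valid R identifiers.

```r
# Minimal sketch: regress overall satisfaction on all post-test SR-Q
# items. `df` and its column names are hypothetical.
items <- paste0("SRQ", 1:34)
model <- lm(Satisfaction_Q7 ~ ., data = df[, c("Satisfaction_Q7", items)])
summary(model)  # coefficients, p-values, and the adjusted R-squared
```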
Results
Development of students’ capacity for learning how to learn
Firstly, it is important to highlight that there were no significant differences in any of the instruments between men and women.
The most highly rated items both before and after the peer assessment experience were SR-Q1 (pre 4.59; post 4.68), SR-Q4 (pre 4.55; post 4.55), and SR-Q10 (pre 4.48; post 4.48). Beyond these, the next most highly rated item in the pre-test was SR-Q6 (4.43), while in the post-test the next most highly rated items were SR-Q28 (4.63), SR-Q12 (4.47), SR-Q14 (4.47), and SR-Q32 (4.43) (Table 2).
Table 2. Items and pre- and post-test mean scores (SR-Q)
| Question | Pre-test mean | Post-test mean |
|---|---|---|
| SR-Q1: I analyse in-depth the task to be performed to make sure I understand what I need to do. | 4.589 | 4.682 |
| SR-Q2: I often create diagrams or drawings to represent what I study or the problems I need to solve. | 3.897 | 4.056 |
| SR-Q3: When I hear or read a statement or conclusion in class, I think about possible alternatives. | 3.888 | 4.093 |
| SR-Q4: Once I understand what I need to do, I try to clearly visualise the steps I need to take and what I need to achieve. | 4.551 | 4.551 |
| SR-Q5: I don’t usually organise information in tables or charts while studying because it doesn’t help much in learning. | 1.96 | 1.93 |
| SR-Q6: I relate what I’m learning in class to my own ideas. | 4.430 | 4.383 |
| SR-Q7: I often discuss ideas or aspects of what I have been studying with my classmates. | 3.841 | 3.673 |
| SR-Q8: While doing a task, I check if the steps I am taking are appropriate. | 4.355 | 4.393 |
| SR-Q9: Unless requested by the teacher, I don’t usually summarise the texts I study. | 2.449 | 2.402 |
| SR-Q10: When studying, I relate the material I read to what I already know. | 4.477 | 4.477 |
| SR-Q11: I actively participate in class, asking questions or making comments to the teacher. | 3.654 | 3.841 |
| SR-Q12: If the teacher provides a tool to assess whether my approach to a task is correct, I usually use it. | 4.308 | 4.467 |
| SR-Q13: When studying for an assessment, I write brief summaries with the main ideas and concepts from the readings. | 4.234 | 4.196 |
| SR-Q14: I connect ideas from class with other ideas whenever possible. | 4.336 | 4.467 |
| SR-Q15: I seek my classmates’ opinions on how I’m doing on a task. | 3.364 | 3.542 |
| SR-Q16: When doing a task, I stop to check if I’m progressing as planned. | 4.131 | 4.280 |
| SR-Q17: I usually use different strategies (memorisation, creating diagrams, etc.) when studying different subjects. | 4.215 | 4.196 |
| SR-Q18: While studying, I often mentally link the content I’m working on to other subjects. | 4.299 | 4.327 |
| SR-Q19: When teachers provide presentations, I take notes on them because it helps me understand them better. | 4.178 | 4.206 |
| SR-Q20: After completing a university activity, I review what I’ve done to see if I understood it and if it’s correct. | 4.019 | 4.159 |
| SR-Q21: I don’t usually create concept maps to connect the concepts I study because they are of little use. | 2.168 | 2.168 |
| SR-Q22: When studying, I look for possible connections between what I study and real-life situations where it could be applied. | 4.215 | 4.393 |
| SR-Q23: When something doesn’t go well in an assignment or exam, I ask the teacher for more information on how to improve. | 3.542 | 3.813 |
| SR-Q24: Before starting a task, I carefully plan what I need to do. | 4.150 | 4.290 |
| SR-Q25: I don’t usually create graphs or diagrams while studying or solving problems because they don’t help me learn. | 2.168 | 2.224 |
| SR-Q26: I search for situations where I can apply the content of the course. | 4.019 | 4.215 |
| SR-Q27: Whenever possible, I try to discuss ideas or aspects of what I’ve been studying with my classmates to deepen my understanding. | 3.542 | 3.645 |
| SR-Q28: I read the instructions of exercises and exams multiple times to fully understand what is required. | 4.439 | 4.626 |
| SR-Q29: Usually, if possible, I create tables to organise the information in texts and problems. | 3.636 | 3.748 |
| SR-Q30: Overall, I study by trying to imagine and “visualise” the situations referred to in the content. | 4.159 | 4.299 |
| SR-Q31: I believe the topics in this course will be useful for my learning. | 4.065 | 4.327 |
| SR-Q32: I believe I will be able to use what I learn in this course in others. | 4.187 | 4.430 |
| SR-Q33: I believe I will enjoy the topics covered in this course. | 3.879 | 4.206 |
| SR-Q34: Understanding the topics in this course is very important to me. | 4.037 | 4.159 |
Regarding the evolution of competency development throughout the experience, Fig. 1 shows the comparative results of the 34 items of the SR-Q questionnaire for assessing the students’ self-regulation capacity before and after the peer assessment process.
Fig. 1. Didactic sequence based on the implementation of a peer assessment experience for the development of students’ self-regulation competencies. [Image not reproduced; see the original PDF.]
According to the Wilcoxon test, items SR-Q3, SR-Q11, SR-Q12, SR-Q22, SR-Q23, SR-Q26, SR-Q28, SR-Q30, SR-Q31, and SR-Q32 present significant differences (p < 0.05) between the pre- and post-test samples. The remaining items show no significant differences (p > 0.05) between the compared samples (Fig. 2).
Fig. 2. Comparison of survey averages. [Image not reproduced; see the original PDF.]
Based on the Exploratory Factor Analysis (EFA) with Oblimin rotation (Table 3), and subsequently applying Cronbach’s alpha to assess reliability, the items of the SR-Q self-regulation questionnaire are grouped into seven factors. According to the applied criterion, factors 1, 2, 3, 4, and 6 are reliable. Factor 5 has marginal reliability (0.7819) but could still be acceptable. Factor 7, however, has low reliability (0.3665), indicating low internal consistency among its items (SR-Q20 and SR-Q23). Factor 7 (Table 3) groups two items that have no direct relationship to each other: item 20, more related to learning strategies, and item 23, more related to the identification of opportunities for improvement through teacher feedback. This may explain the lack of internal consistency of factor 7.
The following table lists the items associated with each factor and the proposed descriptive title for each factor based on the meaning of the item grouping.
Table 3. Results of the factor analysis (including descriptive titles) by items (SR-Q)
| Factors | Items of the SR-Q questionnaire | Explanatory name of the factor |
|---|---|---|
| Factor 1 | SR-Q3, SR-Q6, SR-Q10, SR-Q14, SR-Q18, SR-Q22, SR-Q26 | Transfer of learning to other contexts |
| Factor 2 | SR-Q2, SR-Q13, SR-Q17, SR-Q19, SR-Q29 | Proactive use of learning strategies |
| Factor 3 | SR-Q30, SR-Q31, SR-Q32, SR-Q33, SR-Q34 | Meaningful and motivating learning |
| Factor 4 | SR-Q1, SR-Q4, SR-Q8, SR-Q12, SR-Q24, SR-Q28 | Learning planning and preparation |
| Factor 5 | SR-Q7, SR-Q11, SR-Q15, SR-Q16, SR-Q27 | Active and collaborative learning |
| Factor 6 | SR-Q5, SR-Q9, SR-Q21, SR-Q25 | Learning strategies not used intentionally, only reactively |
| Factor 7 | SR-Q20, SR-Q23 | – |
According to the factor scores technique (see Table 4), the difference in average factor scores between the two datasets (pre and post) is negligible for each factor, on the order of 10⁻¹⁶, i.e., at the level of floating-point rounding. Therefore, these results suggest that there are no substantial changes in factor scores between the two datasets.
Table 4. Comparison between the SR-Q pre-test and the SR-Q post-test through the scores technique
| Factors | Difference between SR-Q pre-test and SR-Q post-test |
|---|---|
| Factor 1 | −6.0929325 × 10⁻¹⁶ |
| Factor 4 | 3.3519073 × 10⁻¹⁶ |
| Factor 3 | 3.1280145 × 10⁻¹⁶ |
| Factor 7 | 7.0050404 × 10⁻¹⁶ |
| Factor 5 | 4.986925 × 10⁻¹⁷ |
| Factor 6 | −2.6653135 × 10⁻¹⁶ |
| Factor 2 | 3.3569331 × 10⁻¹⁶ |
Likewise, the non-parametric Wilcoxon test showed no significant differences in any of the seven factors between the pre- and post-test data (all p-values > 0.05) (Table 5).
Table 5. Results of the Wilcoxon test
| Factors | Wilcoxon test (p-value) | Significant difference |
|---|---|---|
| Factor 1 | 0.8286929 | No |
| Factor 4 | 0.8063933 | No |
| Factor 3 | 0.7438378 | No |
| Factor 7 | 0.7288566 | No |
| Factor 5 | 0.7723974 | No |
| Factor 6 | 0.8719518 | No |
| Factor 2 | 0.8012688 | No |
Therefore, it cannot be concluded that there has been a significant change in the factors between the two moments in time.
The role of the assessor and the role of the assessed in the development of students’ self-regulation
The role of the assessor (being an assessor)
Regarding the first item (satisfaction-Q1) on the assessor role of students (“Assessing my classmates’ tasks has allowed me to become more engaged in my learning process”), variables SR-Q5, SR-Q11, SR-Q20, SR-Q21, SR-Q23, and SR-Q25 show no correlation, indicating no relationship between these variables and the involvement of students as assessors in the assessment process. The variable SR-Q14 has the highest correlation with this item (0.5231150), followed by SR-Q30 (0.4698148) and SR-Q34 (0.442445), indicating a moderate positive relationship between these variables and greater involvement of students in the assessor role: as these variables increase, involvement also tends to increase. On the other hand, the variable SR-Q9 has a negative correlation with this item (−0.2509649), indicating that as SR-Q9 increases, involvement tends to decrease. The remaining variables have weak to moderate positive correlations with the involvement of students as assessors, ranging from 0.2148019 to 0.4036009 (Table 6).
Table 6. Significant correlations between SR-Q items and Satisfaction-Q1: assessing my classmates’ tasks has allowed me to become more engaged in my learning process
| Negative correlation | 0 correlation | Positive correlation |
|---|---|---|
| SR-Q9: Unless requested by the teacher, I don’t usually summarise the texts I study. (−0.2509649) | SR-Q5: I don’t usually organise information in tables or charts while studying because it doesn’t help much in learning. | SR-Q14: I connect ideas from class with other ideas whenever possible. (0.5231150) |
| | SR-Q11: I actively participate in class, asking questions or making comments to the teacher. | SR-Q30: Overall, I study by trying to imagine and “visualise” the situations referred to in the content. (0.4698148) |
| | SR-Q20: After completing a university activity, I review what I’ve done to see if I understood it and if it’s correct. | SR-Q34: Understanding the topics in this course is very important to me. (0.442445) |
| | SR-Q21: I don’t usually create concept maps to connect the concepts I study because they are of little use. | |
| | SR-Q23: When something doesn’t go well in an assignment or exam, I ask the teacher for more information on how to improve. | |
| | SR-Q25: I don’t usually create graphs or diagrams while studying or solving problems because they don’t help me learn. | |
With respect to the second item (satisfaction-Q3) on the assessor role of students (“Assessing my classmates’ tasks has allowed me to contribute to the development of the competence to learn how to learn”), variables SR-Q5, SR-Q9, SR-Q11, SR-Q19, SR-Q21, SR-Q23, and SR-Q25 show no correlation, indicating no relationship between these variables and the competence development of students as assessors. The variable SR-Q30 has the highest correlation with this item (0.5422676), followed by SR-Q34 (0.4881598) and SR-Q4 (0.4768181), indicating a moderate positive relationship between these variables and the competence development of students as assessors: as these variables increase, the development also tends to increase. All other variables have weak to moderate positive correlations with this competence development, ranging from 0.2061777 to 0.4656333 (Table 7).
Table 7. Significant correlations between SR-Q items and Satisfaction-Q3: assessing my classmates’ tasks has allowed me to contribute to the development of the competence to learn how to learn
| Negative correlation | 0 correlation | Positive correlation |
|---|---|---|
| | SR-Q5: I don’t usually organise information in tables or charts while studying because it doesn’t help much in learning. | SR-Q30: Overall, I study by trying to imagine and “visualise” the situations referred to in the content. (0.5422676) |
| | SR-Q9: Unless requested by the teacher, I don’t usually summarise the texts I study. | SR-Q34: Understanding the topics in this course is very important to me. (0.4881598) |
| | SR-Q11: I actively participate in class, asking questions or making comments to the teacher. | SR-Q4: Once I understand what I need to do, I try to clearly visualise the steps I need to take and what I need to achieve. (0.4768181) |
| | SR-Q19: When teachers provide presentations, I take notes on them because it helps me understand them better. | |
| | SR-Q21: I don’t usually create concept maps to connect the concepts I study because they are of little use. | |
| | SR-Q23: When something doesn’t go well in an assignment or exam, I ask the teacher for more information on how to improve. | |
| | SR-Q25: I don’t usually create graphs or diagrams while studying or solving problems because they don’t help me learn. | |
Concerning the third item (satisfaction-Q5) on the assessor role of students (“Assessing my classmates’ tasks has allowed me to learn how to give feedback”), variables SR-Q2, SR-Q3, SR-Q5, SR-Q7, SR-Q9, SR-Q15, SR-Q19, SR-Q21, SR-Q23, SR-Q25, SR-Q27, SR-Q29, and SR-Q31 show no correlation, indicating no relationship between these variables and the assessor’s learning to provide feedback. On the other hand, the variable SR-Q8 has the highest correlation with this item (0.4580746), followed by SR-Q34 (0.4470919) and SR-Q10 (0.3991435), indicating a moderate positive relationship between these variables and the assessor’s learning to provide feedback: as these variables increase, the learning also tends to increase. The other variables have weak to moderate positive correlations with this learning, with values ranging from 0.2062384 (SR-Q17) to 0.3698907 (SR-Q13). This suggests that as these variables increase, the learning also tends to increase, although the relationship is not as strong as with SR-Q8, SR-Q34, or SR-Q10 (Table 8).
Table 8. Significant correlations between SR-Q items and Satisfaction-Q5: assessing my classmates’ tasks has allowed me to learn how to give feedback
| Negative correlation | 0 correlation | Positive correlation |
|---|---|---|
| | SR-Q2: I often create diagrams or drawings to represent what I study or the problems I need to solve. | SR-Q8: While doing a task, I check if the steps I’m taking are appropriate. (0.4580746) |
| | SR-Q3: When I hear or read a statement or conclusion in class, I think about possible alternatives. | SR-Q34: Understanding the topics in this course is very important to me. (0.4470919) |
| | SR-Q5: I don’t usually organise information in tables or charts while studying because it doesn’t help much in learning. | SR-Q10: When studying, I relate the material I read to what I already know. (0.3991435) |
| | SR-Q7: I often discuss ideas or aspects of what I have been studying with my classmates. | |
| | SR-Q9: Unless requested by the teacher, I don’t usually summarise the texts I study. | |
| | SR-Q15: I seek my classmates’ opinions on how I’m doing on a task. | |
| | SR-Q19: When teachers provide presentations, I take notes on them because it helps me understand them better. | |
| | SR-Q21: I don’t usually create concept maps to connect the concepts I study because they are of little use. | |
| | SR-Q23: When something doesn’t go well in an assignment or exam, I ask the teacher for more information on how to improve. | |
| | SR-Q25: I don’t usually create graphs or diagrams while studying or solving problems because they don’t help me learn. | |
| | SR-Q27: Whenever possible, I try to discuss ideas or aspects of what I’ve been studying with my classmates to deepen my understanding. | |
| | SR-Q29: Usually, if possible, I create tables to organise the information in texts and problems. | |
| | SR-Q31: I believe the topics in this course will be useful for my learning. | |
The role of the assessed (being the one assessed)
Regarding the first item (satisfaction-Q2) about the role of the assessed (“Receiving the opinions, assessments, and advice from my classmates has allowed me to become more engaged in the learning process”), the variable SR-Q3 (“When I hear or read a statement or conclusion in class, I think about possible alternatives”) shows no significant correlation. The other variables in the post-test SR-Q questionnaire on self-regulated learning have weak to moderate correlations with this item, ranging from 0.2072577 to 0.4122877. This suggests that there may be a positive relationship between these variables and the involvement of the assessed student in the learning process, but the relationship is relatively weak.
With respect to the second item (satisfaction-Q4) about the role of the assessed (“Receiving the opinions, assessments, and advice from my classmates has allowed me to contribute to the development of the competence to learn how to learn”), the variable SR-Q2 (“I often create diagrams or drawings to represent what I study or the problems I need to solve”) shows no significant correlation. The other variables in the post-test SR-Q questionnaire have correlations with this item ranging from 0.0000000 to 0.3925090, i.e., from non-existent to moderate. It is important to note that a weak correlation indicates only a weak relationship between variables and does not imply direct causality.
Similarly, for the third item about the role of the assessed (satisfaction-Q6) (“Receiving the opinions, assessments, and advice from my classmates has allowed me to learn how to give feedback”), the variable SR-Q2 shows no significant correlation. The remaining variables have correlations ranging from 0.0000000 to 0.4335430, i.e., from non-existent to moderate, indicating at most a weak to moderate positive relationship between these variables and the learning of the assessed student to provide feedback. Again, correlation does not imply direct causality.
Analysing the correlations with the different variables (Satisfaction-Q1, -Q2, -Q4, and -Q6), some variables show similar correlations across contexts. The following variables have relatively similar correlations in all four contexts: SR-Q4 (“Once I understand what I need to do, I try to clearly visualise the steps I need to take and what I need to achieve”), SR-Q8 (“While doing a task, I check if the steps I am taking are appropriate”), SR-Q10 (“When studying, I relate the material I read to what I already know”), SR-Q12 (“If the teacher provides a tool to assess if my approach to a task is correct, I usually use it”), SR-Q13 (“When studying for an assessment, I write brief summaries with the main ideas and concepts from the readings”), SR-Q14 (“I connect ideas from class with other ideas whenever possible”), SR-Q22 (“When studying, I look for possible connections between what I study and real-life situations where it could be applied”), SR-Q30 (“Overall, I study by trying to imagine and ‘visualise’ the situations referred to in the content”), and SR-Q34 (“Understanding the topics in this course is very important to me”). This suggests that these variables may have a significant influence on the four aspects examined (involvement of students as assessors in the learning process, involvement of assessed students in the learning process, development of the competence to learn how to learn in assessed students, and learning of the assessed students to provide feedback) and could be important to consider in future analyses or related studies.
The usefulness of provided and received feedback in improving academic performance
The results of the Kruskal-Wallis test indicate a significant relationship between the variable satisfaction-Q5 (“Assessing my classmates’ tasks (i.e., being an assessor) has taught me how to provide feedback”) and the grade of the activity in which the peer assessment process took place. The chi-square value is 6.2711 with 2 degrees of freedom, and the p-value is 0.04348. Since the p-value is below 0.05, we can conclude that there is a significant difference in the distribution of scores, suggesting that the variable “activity grade” is associated with the assessor’s learning to provide feedback. The results show no significant differences for the other satisfaction questionnaire variables (satisfaction-Q1, satisfaction-Q2, satisfaction-Q3, satisfaction-Q4, and satisfaction-Q6).
Regarding the overall subject grade, no significant correlation was found.
Students’ motivation for the development of their self-regulation capacity
The multiple regression analysis, fitted with the lm() function using satisfaction-Q7 (overall satisfaction with the experience) as the dependent variable and all variables from the post-test SR questionnaire (students’ self-regulation capacity) as independent variables, shows significant results. The variables SR-Q3 (“When I hear or read a statement or conclusion in class, I think about possible alternatives”), SR-Q7 (“I often discuss ideas or aspects of what I’ve been studying with my classmates”), SR-Q8 (“While doing a task, I check if the steps I’m taking are appropriate”), SR-Q10 (“When studying, I relate the material I read to what I already know”), SR-Q27 (“Whenever possible, I try to discuss ideas or aspects of what I’ve been studying with my classmates to deepen my understanding”), SR-Q30 (“Overall, I study by trying to imagine and ‘visualise’ the situations referred to in the content”), and SR-Q32 (“I believe I will be able to use what I learn in this course in others”) are statistically significant at the 5% level (p < 0.05). This indicates a relationship between these variables and the dependent variable, and the results are potentially relevant to this study.
The influence of students’ motivation on the peer assessment process
A simple linear regression analysis was conducted to investigate the relationship between the dependent variable satisfaction-Q7 (overall satisfaction with the experience) and the independent variable satisfaction-Q1 (involvement of the students as assessors in the assessment process). The model used is as follows: lm(formula = satisfaction-Q7 ~ satisfaction-Q1, data = df3). The results indicate a statistically significant relationship between satisfaction-Q7 and satisfaction-Q1, with an estimated coefficient of 0.5163 and a p-value of 0.00000311 (well below the significance threshold of 0.05). This relationship suggests that, generally, as satisfaction-Q1 increases, satisfaction-Q7 also tends to increase.
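Rendered as runnable R, the reported model might look as follows; df3 is taken from the text, while the underscored column names are hypothetical stand-ins (hyphenated names are not valid R identifiers).

```r
# Minimal sketch of the reported simple regression; `df3` comes from the
# text, the underscored column names are hypothetical stand-ins.
simple_model <- lm(Satisfaction_Q7 ~ Satisfaction_Q1, data = df3)
summary(simple_model)  # slope around 0.5163, p around 3.11e-06, as reported
```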
The linear regression model also shows a significant relationship between overall satisfaction with the experience (satisfaction-Q7) and the following variables: involvement of the assessed student in the learning process (satisfaction-Q2; estimation coefficient of 0.51306); contribution of received feedback to the development of the student’s competence to learn how to learn (satisfaction-Q4; estimation coefficient of 0.4170); and the assessed student’s learning to provide feedback (satisfaction-Q6; estimation coefficient of 0.3502).
Despite these results regarding the relationship between overall satisfaction and the perception of the role as assessor or assessed in the peer assessment process, it is important to note that the adjusted R-squared is relatively low in all cases, indicating that these models explain only between 14% and 29% of the variability in the dependent variable.
Discussion
In this section, the main results of the study are discussed in light of the literature reviewed.
A first aspect to consider is the development of the students’ capacity for learning to learn. According to the perception of the students participating in the experience, the items that best describe them, both before and after the experience, are their ability to analyse the task in depth to understand what they need to do (SR-Q1) and their ability to visualise clearly what they should be doing and achieving (SR-Q4). Both elements are related to the planning phase, during which students become aware of the learning task, organise themselves, and plan the actions they need to take (Zimmerman, 2000). After the peer assessment experience, however, some items are rated higher in the post-test SR questionnaire than in the pre-test, and these relate to other phases of self-regulation according to the same author: the use of self-assessment tools provided by the teacher to analyse whether the task is done correctly (SR-Q12) (self-reflection phase); the connection of ideas discussed in class with other ideas (SR-Q14) (execution phase); and the belief in being able to use what was learned in the subject in other contexts (SR-Q32) (self-reflection phase).
Although the factor-level results indicate no significant changes in students’ opinions or attitudes between the beginning and the end of the learning sequence, significant item-level differences are observed between the pre- and post-test SR questionnaires, related to critical thinking (SR-Q3), active participation in class (SR-Q11), self-assessment or self-reflection (SR-Q12, SR-Q23), meaningful learning (SR-Q22, SR-Q26, SR-Q30), deep understanding (SR-Q28), and transfer of learning (SR-Q32).
In the factorial analysis, we can explain the meaning of six of the seven resulting factors.
Factor 1. Transfer of learning to other contexts: All items related to this factor are about students’ capacity to consider how to use what they are learning in other contexts or other learning situations. In essence, the factor defines or values the capacity to relate current learning to prior learning and transfer it to future situations.
Factor 2. Proactive use of learning strategies: This factor includes questions related to the specific strategies and actions that students take to learn and optimise their learning.
Factor 3. Meaningful and motivating learning: The items in factor 3 are related to the perceived usefulness of the learning by students and, consequently, to the interest they demonstrate in learning what they are being taught (the subject, the topic, etc.).
Factor 4. Planning and preparation for learning: This factor groups items related to the actions that students take to properly prepare for a task: from understanding what it entails to planning what needs to be done, and reviewing the plan, among other items.
Factor 5. Active and collaborative learning: Questions related to factor 5 are about the active role of students during the learning process, especially the actions they take concerning the teacher (seeking information or clarification) or their peers, mainly focused on seeking feedback to improve the task.
Factor 6. Unintentional use of learning strategies, only reactive: In this case, factor 6 includes items that reflect a passive stance of students towards the intentional use of learning strategies, and in this sense, it is the antithesis of factor 2.
Secondly, the role of the assessor and the role of the assessed in the development of student self-regulation require further discussion. The data analysis indicates a significant difference between the role of the assessor (being an assessor) and the assessed (being the one assessed) in the development of student self-regulation capacity. On the one hand, the results indicate that students who perceive their role as assessor to have a positive influence on their engagement in the learning process are those who relate the ideas discussed in the subject to other ideas (SR-Q14; critical thinking), visualise the situations referred to in the content (SR-Q30; meaningful learning), and are more aware of the importance of learning the subject topics (SR-Q34). Furthermore, students who perceive their role as assessors to have influenced their competence development throughout the peer assessment experience are also those who are aware of the importance of learning the subject topics (SR-Q34) and strive to visualise clearly what needs to be done and achieved (planning; SR-Q4). Additionally, students who perceive their role as assessors to have influenced their learning on providing feedback (digital literacy) are also those who check the relevance of the steps taken during the learning process (SR-Q8; planning and organisation skills), are aware of the importance of the subject topics (SR-Q34), and are capable of relating the content they already know to new content (SR-Q10).
On the other hand, contrary to the role of the assessor, there is no moderate positive correlation between self-regulation items and the engagement of students in the role of the assessed (being the ones assessed) in the learning process, their competence development, or their learning about giving feedback. This finding is in line with a previous study by Zhang and Mao (2023), which found that students perceived feedback giving as more interesting, active, and insightful than feedback receiving. Further analysis is needed to better understand the nature of these correlations and their implications.
However, in the analysis of correlations with different variables related to the role of the assessed, some variables show similar correlations in different contexts (engagement in the learning process, competence development, and learning about providing feedback or digital literacy): those related to visualising clearly what needs to be done and achieved (SR-Q4), checking if the steps taken during the activity are appropriate (SR-Q8), relating new content to previously acquired knowledge (SR-Q10, SR-Q14), self-assessment through tools provided by the teacher (SR-Q12), writing summaries to relate acquired ideas and concepts (SR-Q13), seeking connections between the object of study and real-life situations in which it could be applied (SR-Q22, SR-Q30), and considering the subject topics as important (SR-Q34).
Thirdly, the usefulness of provided and received feedback for academic improvement is of interest. The variable “activity grade” influences the learning of the assessor on providing feedback. However, there is no correlation with the variable “subject grade.” This is in line with a recent study by Ardill (2025) in which dialogic peer feedback was overall considered very positively by the students, but did not show a clear effect on their grades in comparison to a control group.
Fourthly, student motivation is discussed in relation to the development of self-regulation capacity. The statistical results show that the students who were more satisfied with the peer assessment experience are those who, in the post-test SR questionnaire, had a higher self-perception of their self-regulation capacity in relation to critical thinking (SR-Q3, SR-Q10), collaborative learning (SR-Q7, SR-Q27), planning and organisation skills (SR-Q8), meaningful learning (SR-Q30), and transfer of learning (SR-Q32). This is in line with other studies relating student receptiveness to peer feedback, peer feedback quality, and competence development (Zeng & Ravindran, 2025).
Finally, and importantly, the influence of student motivation on the peer assessment process must be taken into account. The results show no significant relationship between students’ satisfaction with the peer assessment experience (related to their motivation) and their perception of learning as a result of their role as assessor (being an assessor) or assessed (being the one assessed). This adds to previous research showing the nuanced relationships among the aspects intervening in peer feedback processes; for example, Greisel et al. (2025) showed that beliefs and orientations did not influence peer feedback quality, but receptivity and the valuation of peer feedback predicted perceived peer feedback adequacy.
Conclusions
This study highlights the importance of peer assessment as a learning process that promotes self-regulation. The factor analysis identified six key factors that reflect different aspects of students’ self-regulation, such as transfer of learning, proactive use of learning strategies, and meaningful and motivating learning. However, it cannot be stated that students’ self-regulatory competence increased overall. The results emphasise that participants improved their ability to analyse tasks in depth and to visualise objectives clearly, aspects related to the planning phase of self-regulation.
Regarding the roles of the assessor and the assessed, a critical moment corresponding to the monitoring phase of the learning process, significant differences were found in their influence on self-regulation. The role of the assessor showed positive correlations with aspects such as critical thinking, meaningful learning, and awareness of the importance of learning the subject topics, whereas no significant correlations were found between self-regulation items and the role of the assessed. These findings suggest the importance of continuing to promote the active role of students as assessors and of providing opportunities for constructive feedback to their peers. However, they also call for reinforcing strategies for assessment and feedback literacy, particularly focusing on developing awareness in assessing feedback provided by a peer and on managing the expectations that this may generate in students in the role of the assessed.
Lastly, it is worth noting that the elements that have emerged as determinants in the obtained positive results are related to proposing meaningful learning scenarios, accompanying students towards a more active and collaborative role during the learning process, thus promoting their motivation and engagement, and sustaining this process through tasks that foster systematic reflection, decision-making, and continuous improvement.
These results can inspire future research that delves deeper into the active role of students in providing feedback on their own tasks or those of their peers and the factors that influence the use of this feedback as the cornerstone for the development of their self-regulation. In this regard, methodologies that support the development of internal feedback as a strategy to increase students’ levels of self-regulation, such as the comparison methodology proposed by Nicol (2022), could be applied.
This study faces several limitations, namely potential biases in peer assessment, particularly variability in feedback and subjectivity in student perceptions. Further research and the combination of different methods and tools can shed further light on these biases and how to mitigate them.
The main practical recommendations for educators and online course designers on implementing peer assessment to maximise self-regulation in both the assessor and the assessed roles are:
Promote the active role of students as assessors and provide opportunities for constructive feedback to peers (assessor role).
Raise awareness for and support the critical assessment of received peer feedback to support feedback literacy and reinforce the impact of being in the assessed role (assessed role).
Propose meaningful learning scenarios, accompanying students towards a more active and collaborative role during the learning process, thus promoting their motivation and engagement, and sustaining this process through tasks that foster systematic reflection, decision-making, and continuous improvement.
Use methodologies that support the development of internal feedback as a strategy to increase students’ levels of self-regulation, such as the comparison methodology proposed by Nicol (2022).
Foster SR activities, particularly those that relate significantly to higher overall satisfaction (thinking about alternatives to statements, free peer discussion of ideas drawn from individual learning activities, explicit step planning and revision, linking of new inputs to previous knowledge, visualisation of inputs, and application of what is learned in one context to another).
Future research should take into account the variables that appeared in this study as potentially having a significant influence on different aspects of the study, namely:
Involvement of students as assessors in the learning process.
Involvement of assessed students in the learning process.
Development of the competence to learn how to learn in assessed students.
Learning of the assessed students to provide feedback.
Acknowledgements
Not applicable.
Author contributions
MFM and LG completed the application of the practice. NCC and MFF analysed the data and obtained the results. All authors contributed equally to the conceptualisation and the writing of the conclusions. All authors read and approved the final manuscript.
Funding
This contribution is made in the framework of the project “Analysis of the effects of the provision of feedback supported by digital monitoring technologies on transversal competences” (reference PID2019-104285GB-I00). Ministry of Science and Innovation (MICINN). National Research Agency (https://www.ub.edu/digital-feedback/es/inicio/).
Data availability
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Declarations
Competing interests
The authors declare that they have no competing interests.
Abbreviations
EFA: Exploratory factor analysis
SRL: Self-regulated learning
Satisfaction-QX: Student satisfaction questionnaire, where X is the placeholder for the question number
SR-QX: Self-regulation questionnaire, where X is the placeholder for the question number
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
Andrade, H., & Brookhart, S. (2016). The role of classroom assessment in supporting self-regulated learning. 10.1007/978-3-319-39211-0_17. In D. Laveault, & L. Allal (Eds.), Assessment for learning: Meeting the challenge of implementation (pp. 293–309). Springer Cham. https://doi.org/10.1007/978-3-319-39211-0
Ardill, N. (2025). Peer feedback in higher education: Student perceptions of peer review and strategies for learning enhancement. European Journal of Higher Education, 1–26. https://doi.org/10.1080/21568235.2025.2457466
Boekaerts, M., & Corno, L. (2005). Self-regulation in the classroom: A perspective on assessment and intervention. Applied Psychology: An International Review, 54(2), 199–231.
Boud, D. (2000). Sustainable assessment: Rethinking assessment for the learning society. Studies in Continuing Education, 22(2), 151–167. https://doi.org/10.1080/713695728
Boud, D., & Molloy, E. (2012). Rethinking models of feedback for learning: The challenge of design. Assessment & Evaluation in Higher Education, 38, 1–15. https://doi.org/10.1080/02602938.2012.691462
Broadbent, J., Panadero, E., Lodge, J., & de Barba, P. (2020). Technologies to enhance self-regulated learning in online and computer-mediated learning environments. In Handbook of research in educational communications and technology (pp. 37–52). Springer. https://doi.org/10.1007/978-3-030-36119-8_3
Cano, E. (2014). Análisis de las investigaciones sobre feedback: aportes para su mejora en el marco del EEES. Bordón, 66(4), 9–24. https://recyt.fecyt.es/index.php/BORDON/article/view/Bordon.2014.66402
Cano, E. (2023). Prólogo. In L. Lluch & N. Cabrera (Eds.), Competencia de aprender a aprender y autorregulación en la universidad. Evaluación entre iguales y propuestas metodológicas para su desarrollo (pp. 9–14). Octaedro.
Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315–1325.
Dunn, K., Rakes, G., & Rakes, T. A. (2014). Influence of academic self-regulation, critical thinking, and age on online graduate students' academic help-seeking. Distance Education, 35(1), 75–89.
Fareed Lahza, H., Demartini, G., Noroozi, O., Gašević, D., Sadiq, S., & Khosravi, H. (2025). Enhancing peer feedback provision through user interface scaffolding: A comparative examination of scripting and self-monitoring techniques. Computers & Education, 230, 105260. https://doi.org/10.1016/j.compedu.2025.105260
Greisel, M., Hornstein, J., & Kollar, I. (2025). Do students' beliefs and orientations toward peer feedback predict peer feedback quality and perceptions? Studies in Educational Evaluation, 84.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
Henderson, M., Phillips, M., Ryan, T., Boud, D., Dawson, P., Molloy, E., & Mahoney, P. (2019). Conditions that enable effective feedback. Higher Education Research & Development, 38, 1–16. https://doi.org/10.1080/07294360.2019.1657807
Kitsantas, A., Dabbagh, N., Hiller, S., & Mandell, B. (2015). Learning technologies as supportive contexts for promoting college student self-regulated learning. In T. J. Cleary (Ed.), Self-regulated learning interventions with at-risk youth (pp. 277–294). American Psychological Association.
Lluch, L., & Cabrera, N. (2023). Competencia de aprender a aprender y autorregulación en la universidad. Evaluación entre iguales y propuestas metodológicas para su desarrollo. Octaedro.
Lluch, L., & Cano, E. (2023). How to embed SRL in online learning settings? Design through learning analytics and personalized learning design in Moodle. Journal of New Approaches in Educational Research, 12.
Lodge, J., Panadero, E., Broadbent, J., & de Barba, P. (2018). Supporting self-regulated learning with learning analytics. In J. Lodge, J. Horvath, & L. Corrin (Eds.), Learning analytics in the Classroom. Translating learning analytics research for teachers. Routledge. https://doi.org/10.4324/9781351113038
Lui, A. M., & Andrade, H. L. (2022). Inside the next black box: Examining students’ responses to teacher feedback in a formative assessment context. Frontiers in Education, 7. https://doi.org/10.3389/feduc.2022.751549
Martín, E., & Moreno, A. (2007). Competencia Para Aprender a Aprender. Alianza Ed.
Nicol, D. (2021). The power of internal feedback: Exploiting natural comparison processes. Assessment & Evaluation in Higher Education, 46(5), 756–778.
Nicol, D. (2022). Turning Active Learning into Active Feedback, Introductory Guide from Active Feedback Toolkit. Documentation. Adam Smith Business School, University of Glasgow. https://doi.org/10.25416/NTR.19929290
OECD (2002). Definition and selection of competencies (DeSeCo): Theoretical and conceptual foundations. In Summary of the final report: Key competencies for a successful life and a well-functioning society. https://www.oecd.org/pisa/definition-selection-key-competencies-summary.pdf
Özbek, T., Daumiller, M., Roshany-Tabrizi, A., Mömke, T., & Kollar, I. (2024). Friendship or feedback? Relations between computer science students' goals, technology acceptance, use of an online peer feedback tool, and learning. Computers in Human Behavior Reports, 16.
Panadero, E. (2017). A review of self-regulated learning: Six models and four directions for research. Frontiers in Psychology, 8. https://doi.org/10.3389/fpsyg.2017.00422
Panadero, E., & Alonso-Tapia, J. (2014). ¿Cómo autorregulan nuestros alumnos? Modelo de Zimmerman sobre estrategias de aprendizaje. Anales de Psicología, 30(2), 450–462.
Pintrich, P. R. (2000). The role of goal orientation in self-regulated learning. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 451–502). Academic.
Poitras, E. G., & Lajoie, S. P. (2018). Using technology-rich environments to foster self-regulated learning in social studies. In D. H. Schunk, & J. A. Greene (Eds.), Handbook of self-regulation of learning and performance (pp. 166–180). Routledge/Taylor & Francis Group. https://doi.org/10.4324/9781315697048-11
Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35(5), 535–550.
Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189. https://doi.org/10.3102/0034654307313795
Winstone, N., & Carless, D. (2019). Designing effective feedback processes in higher education: A Learning-Focused approach. Routledge.
Winstone, N., Nash, R., Parker, M., & Rowntree, J. (2017). Supporting learners' agentic engagement with feedback: A systematic review and a taxonomy of recipience processes. Educational Psychologist, 52, 1–21. https://doi.org/10.1080/00461520.2016.1207538
Zeng, X., & Ravindran, L. (2025). Design, implementation, and evaluation of peer feedback to develop students’ critical thinking: A systematic review from 2010 to 2023. Thinking Skills and Creativity, 55, 101691. https://doi.org/10.1016/j.tsc.2024.101691
Zhang, T., & Mao, Z. (2023). Exploring the development of student feedback literacy in the second language writing classroom. Assessing Writing, 55, 100697. https://doi.org/10.1016/j.asw.2023.100697
Zimmerman, B. J. (2000). Attaining self-regulation: A social cognitive perspective. In M. Boekaerts, P. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 13–39). Academic.
Zimmerman, B. J. (2008). Investigating self-regulation and motivation: Historical background, methodological developments, and future prospects. American Educational Research Journal, 45(1), 166–183. https://doi.org/10.3102/0002831207312909
© The Author(s) 2025. This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).