Abstract
Quality feedback is essential for supporting student learning in higher education, yet personalized feedback at scale remains costly. Advances in learning analytics and artificial intelligence now enable the automated delivery of personalized feedback to many students simultaneously. At the same time, recent feedback research increasingly emphasizes learner-centered approaches, particularly the role of feedback literacy—students' varying capacities to engage with and benefit from feedback. Despite growing interest, few studies have quantified how feedback literacy affects students' perceptions of feedback, especially in technology-supported contexts. To address this, we examined (1) students' perceptions of personalized, detailed feedback generated via learning analytics and (2) how feedback literacy moderated these perceptions. In a randomized field experiment, teacher education students (N = 196) participated in a week-long computer-supported collaborative learning task on cognitive activation in the classroom. Both groups received automated, personalized feedback: the control group received basic feedback on task completion, while the experimental group received detailed feedback on group processes and the quality of their collaborative statement. The highly informative feedback significantly improved perceptions of feedback helpfulness, enhanced learning insights, and supported self-reflection and self-regulation. Feedback literacy partially moderated these effects, influencing perceptions of feedback helpfulness and motivational regulation.
Introduction
Feedback is a crucial element of effective learning experiences. In their seminal meta-analysis, Hattie and Timperley (2007) described feedback as “one of the most powerful influences on learning and achievement” (p. 81). Although feedback research spans more than a century, much of this time was marked by insufficient attention to what differentiates effective feedback from less impactful forms (Kluger & DeNisi, 1996). Over recent decades, research has illuminated key features of effective feedback, such as clearly communicating standards, providing actionable guidance for improvement, and fostering student ownership of learning (Wiliam, 2018). Additionally, effective feedback should be personalized to meet individual learners’ needs (Henderson et al., 2019a, 2019b) and formative, delivered while students can still act on it to improve their performance (Brookhart, 2018). However, these very features that enhance feedback's impact also make it challenging to scale. In the age of massification of higher education (Ryan & Deci, 2000), providing timely and personalized feedback has thus become an increasingly demanding task for educators.
However, through advances in learning analytics and methods of artificial intelligence, personalized feedback on a large scale is coming into view. Prominent examples of such advances are learning analytics dashboards. Here, students are presented with data about their learning, which are usually derived from process indicators. Many of these systems aim to enhance student awareness and support self-regulation (Jivet et al., 2018; Matcha et al., 2019). Other analytics-based technologies, such as OnTask, SARA, or the Learning Analytics Cockpit, can support the automated formulation and provision of personalized feedback messages (Pardo et al., 2019; Mousavi et al., 2021; Karademir et al., 2024). Examples of such automated feedback using learning analytics and artificial intelligence can be found for, among others, academic writing (Knight et al., 2020), causal explanations in the domain of biology (Ariely et al., 2024), collaborative learning in teacher education (Weidlich et al., 2024) and, most commonly, programming (see review of Keuning et al., 2018). In this vein, Drachsler (2023) proposed the Highly Informative Learning Analytics program to develop feedback systems that focus on providing insightful, actionable, and pedagogically grounded feedback to students.
Feedback researchers increasingly emphasize the role of the learner in feedback processes. For example, students’ judgments about the utility of the provided feedback influence their uptake of the feedback content, which, in turn, influences whether the intervention leads to enhanced learning (Winstone et al., 2021). A key concept of this learner-centered paradigm is student feedback literacy, which refers to students’ attitudes toward feedback, their tendencies to engage with feedback, and their abilities to derive actionable information from feedback (Carless & Boud, 2018). Although numerous conceptualizations of feedback literacy have emerged (Dawson et al., 2024; Molloy et al., 2020; Woitt et al., 2025), little research has quantified the extent to which feedback literacy moderates feedback intervention effects. For example, based on the premise of feedback literacy, it appears intuitive that highly feedback-literate students will appreciate and react to feedback differently than their less feedback-inclined peers. However, this has not yet been researched systematically. Given that such initial divergence in perceptions and appraisals may have downstream consequences for how feedback information is used (Winstone & Nash, 2023), this constitutes a crucial research gap. Insights about the moderating role of feedback literacy can help determine the practical relevance of the feedback literacy concept: for example, whether feedback designers should account for this individual difference factor or whether educators should focus on fostering feedback literacy in their students. To address these questions, we conducted a randomized field experiment in a teacher-education setting, in which students randomly received one of two types of automated, personalized feedback through the learning environment after completing an online collaborative learning task.
Literature review
What makes feedback effective?
Feedback is a central component of formative assessment, designed not merely to evaluate student performance but to actively support learning by bridging the gap between current performance and learning goals (Black & Wiliam, 2009; Kluger & DeNisi, 1996; Wiliam, 2018; Winstone & Boud, 2022). By providing information that invites students to reflect, adjust, and improve, feedback fosters a better understanding of progress toward desired outcomes (Winstone et al., 2017). The literature suggests that, aside from information about the task performance outcome (i.e., Was the response correct? And why?), feedback should also target the process level (i.e., Was the approach adequate? What could have been done better?), to help students evaluate and adapt their approaches as needed (Hattie & Timperley, 2007; Wisniewski et al., 2020).
The value of providing students with insights into their learning process is underscored by self-regulated learning theory (Butler & Winne, 1995; Winne & Hadwin, 1998), and feedback researchers are increasingly linking feedback with self-regulated learning (see e.g., Panadero et al., 2019; Panadero, 2023). In particular, the feed-forward function of feedback directs students’ attention to how they can improve future work (Guasch & Espasa, 2015; Hattie & Timperley, 2007), yet students still need to take these steps toward improvement, which requires them to self-regulate effectively (Zimmerman, 2000). Indeed, students’ self-regulation efforts—monitoring, planning, and regulating their learning—have been shown to predict academic achievement in many settings (Dent & Koenka, 2016; Winne & Hadwin, 1998). Such self-regulation is also critical in online learning, where the degree of autonomy places a higher self-regulative burden on students (Broadbent & Poon, 2015; Theobald, 2021). Another way of thinking about self-regulation and feedback is that students are already generating internal feedback on an ongoing basis; that is, they continually take stock of how they are progressing and where they are falling short. External feedback, that is, feedback provided to them, should complement and calibrate this process (Nicol & McFarlane-Dick, 2006).
Emotional and motivational aspects also play critical roles in feedback processes. Feedback often triggers emotional reactions, which significantly influence how feedback is processed and whether the information is enacted (Rowe, 2017; Molloy et al., 2020). Moreover, motivation—identified by Narciss (2008) as one of the core functions of feedback—shapes students’ willingness to engage with and act upon feedback. Feedback can support students’ intrinsic motivation when it affirms competence and respects autonomy but can also be motivationally detrimental if it is perceived as controlling, negative, or uninformative (Ryan & Deci, 2000; Lim et al., 2021; Fong et al., 2019). Furthermore, recent research has shown that motivational effects can be complex and nuanced (Wisniewski et al., 2020), for example by increasing both intrinsic and extrinsic motivation to some extent (Weidlich et al., 2024). Whether feedback is perceived as motivating or not may, of course, depend on its design, but it may equally reflect individual differences—such as students’ prior experiences, beliefs about learning, or their feedback literacy—which shape how feedback is interpreted and acted upon.
The reviewed research highlights the formidable task of designing and deploying accurate, actionable, and motivationally beneficial feedback. Yet it is this combination of features that maximizes the chance of feedback supporting students and fostering learning (Henderson et al., 2019b). Feedback must therefore be tailored to each student: what may be on-point and motivating for one student may be off-target and frustrating for another, making personalized feedback paramount. In this paper we define personalized feedback as tailored to the individual student’s performance and learning progress, rather than based on generic task-level information. Effective feedback, in turn, refers to feedback that is timely, understandable, and actionable, and supports student understanding, reflection, and regulation. Given these demands, it is no surprise that educators in higher education often face conditions that make it difficult to provide well-designed personalized feedback to their students, especially in large classes (Henderson et al., 2019a, 2019b; Ryan et al., 2019). Researchers in learning analytics and artificial intelligence in education have highlighted the potential of these rapidly developing technologies to leverage process data and log data to enable personalized feedback at scale.
Toward highly informative feedback systems
The advent of automated feedback systems has opened new avenues for research across domains. Banihashem et al. (2022) and Deeva et al. (2021) documented the expansion of scalable feedback across disciplines ranging from STEM to the arts, while also highlighting fragmentation and a strong emphasis on error identification—particularly in programming. Similarly, Cavalcanti et al. (2021) found that most systems focus on comparing student responses to a desired solution. While valuable, this emphasis limits the potential of feedback systems to support deeper understanding and error correction (Keuning et al., 2018).
Many automated feedback systems still fall short of the “lofty conceptions” of effective feedback (Boud & Dawson, 2023, p. 2). To achieve broader applicability, they must function in less-structured contexts, such as the social sciences. For instance, Menzel et al. (2023) describe a system that provided teacher education students with automated, personalized feedback on collaborative processes in a CSCL task. Such efforts, along with other advances in process-oriented systems (e.g., Ariely et al., 2024; Knight et al., 2020; Pecaric et al., 2017), aim to better align automation with evidence-based feedback principles (Cavalcanti et al., 2021; Tempelaar et al., 2024). These developments are essential for realizing the well-documented power of feedback (Hattie & Timperley, 2007; Wisniewski et al., 2020) for automated systems as well.
Highly informative feedback (HIF) bridges this gap by combining detailed, actionable feedback with scalability, using learning analytics and AI to deliver tailored feedback aligned with pedagogical principles (Drachsler, 2023). Learning analytics and AI, although principally separate research fields, can be combined productively, especially with robust pedagogical grounding (Rienties et al., 2020; Zawacki-Richter et al., 2019). Learning analytics typically involve the extraction and analysis of meaningful patterns from learner-generated data (e.g., activity logs, collaboration patterns), while artificial intelligence encompasses computational methods—such as natural language processing and predictive modeling—that automate the interpretation of these data. Integrating these approaches enables rich and scalable automated feedback. HIF addresses limitations such as the overemphasis on analytics over learning in learning analytics research (Gašević et al., 2015; Guzmán-Valenzuela et al., 2021). For example, Jivet et al. (2018) found that many learning analytics dashboards relied on pedagogically shallow data, offering limited insights into latent learning processes (e.g., Susnjak et al., 2022; Wilson et al., 2017).
The HIF approach emphasizes pedagogically grounded analytics interventions that operate at the micro-level of teaching and learning and are rigorously tested in authentic contexts. Crucially, the analytics indicators used must be established in conjunction with the learning activity design, as different learning processes likely require entirely distinct indicators, and this should be planned for a priori. This rationale underpins data-enriched learning activities (DELAs), which are designed to generate meaningful, feedback-relevant data. Examples include concept modeling, CSCL (Menzel et al., 2023), reading (Biedermann et al., 2023), and free-text responses (Gombert et al., 2024). In our study, we used a collaborative DELA focused on discussion and argumentation (see “Learning task”).
While the development of highly informative feedback systems represents a promising advance in scaling personalized feedback, there remains a need to empirically assess how students experience and evaluate such systems. Understanding students’ subjective experiences with highly informative feedback is therefore a crucial first step in determining its practical educational value.
For this reason, our first research question was: How effective is highly informative feedback generated by learning analytics according to students’ perceptions? (RQ1).
Feedback literacy
While research and development into automated feedback systems are thriving, the other side of the feedback coin, the student perspective, plays an equally crucial role. Even the best-designed feedback is effective only if students engage with it (Zimbardi et al., 2017). Understanding the how, when, and why of student engagement is therefore critical, and increasingly, individual differences of students are considered in feedback research (Winstone et al., 2017, 2021). However, at this time, few studies have considered how learner characteristics might interact with the design and reception of feedback messages in automated systems. Moreover, there is a lack of quantitative evidence pertaining to the role of feedback literacy in particular.
Initially outlined by Sutton (2012) and further developed by Carless and Boud (2018), the concept of feedback literacy encompasses students’ dispositions and abilities to interpret and utilize feedback to advance their learning. It frames students as proactive learners, varying in their capacity and willingness to use feedback for academic improvement. Feedback literacy can be viewed through two lenses: the sociocultural perspective, which emphasizes development through academic socialization, and the skills perspective, which sees it as a psychological trait and trainable competence (Nieminen & Carless, 2023; Little et al., 2024). Our study adopted the skills perspective, as this positions feedback literacy as a construct amenable to psychometric measurement and, thus, quantitative analysis (Winstone et al., 2019; Woitt et al., 2025).
As a construct coined to capture how students differentially utilize and profit from feedback, feedback literacy can contribute to opening the lamented black box of feedback processing (Winstone & Nash, 2023). Core to the concept of feedback literacy is the assumption that feedback-literate students are able and willing to profit more deeply from feedback than their less feedback-literate peers; as Carless and Boud (2018) stated: “One of the main barriers to effective feedback is generally low levels of student feedback literacy.” (p. 2). Consequently, feedback experiences likely vary based on two factors: feedback quality and individual feedback literacy. For instance, highly feedback-literate students may respond positively to HIF, while those with lower literacy may prefer simpler, performance-focused feedback.
Although many researchers (e.g., Carless & Boud, 2018; Tsai, 2022; Winstone et al., 2021) have emphasized the importance of feedback literacy, its role has yet to be empirically quantified. That is, the extent to which feedback literacy in fact moderates student perceptions is still unclear. Recent efforts to develop feedback literacy scales (Dawson et al., 2024; Song, 2022; Weidlich et al., 2025; Woitt et al., 2025; Yildiz, 2022; Zhan, 2022) now make quantitative investigation possible, particularly as a moderator of feedback perceptions in HIF systems. HIF, with its detailed, personalized feedback, provides an ideal context to explore how feedback literacy affects students’ ability to benefit from feedback. In this study, we focused on students’ initial perceptions of HIF, as these early reactions are pivotal in shaping subsequent feedback processing and actions (Winstone & Nash, 2023). Specifically, we examined how feedback literacy moderates student perceptions of HIF.
We formulated the following second research question: How does student feedback literacy moderate student perceptions of highly informative feedback using learning analytics? (RQ2).
Method
The present study
This study investigates how students perceive highly informative, analytics-based feedback and whether these perceptions are shaped by their feedback literacy. Specifically, we explore whether pedagogically grounded and elaborate feedback is experienced as more helpful, effective, or motivating than simple, less-detailed feedback, and how feedback literacy relates to this difference.
We conducted a randomized field experiment to address these research questions. Feedback was the between-subject experimental factor with two levels: In the experimental condition, students received HIF, whereas students in the control condition received comparably simple feedback (see Sects. "Simple feedback" and "The highly informative feedback using learning analytics" for details). Importantly, the feedback was not provided by instructors but was generated and delivered automatically within the learning platform. This allowed for consistent and scalable personalization across conditions.
Field experiments are embedded or in vivo in that the experimental treatment is introduced to an authentic learning setting (Motz et al., 2018). An advantage of field studies is that they yield data with high external validity. That is, inferences have an increased chance of applying to real-world educational practice, as compared to lab experiments, which are commonly more artificial (Ross et al., 2010). Further, due to the randomization procedure of the experimental treatment, researchers can confidently make causal inferences from the data (Weidlich et al., 2022).
As a result of the design of our study, it is possible to establish the causal effects of the HIF as compared to the simple feedback. This addressed RQ1 of our study. In addition, by including feedback literacy as a moderator variable in our model, we assessed whether this individual difference moderated the main effects of the feedback, thus addressing RQ2.
Procedure
The experiment took place in a first-semester course of a teacher education program at a large German university. At the beginning of the course, we asked students to complete a questionnaire in order to collect demographic information and data about various constructs. As the questionnaire provided a baseline sample description for multiple empirical studies, the instruments described in Sect. "Measures" are only a subset of the complete questionnaire. Throughout the semester, students completed assignments with different learning designs, one of them being the CSCL task that provided the context for our investigation. The task contributed to course completion as one of four required learning activities, of which students had to complete at least three. While it was not graded in a high-stakes manner, students who did not meet the minimum participation requirement did not receive credit. All students received feedback on both their collaborative process and final group product, based on criteria provided in advance. All students who were given this CSCL task—irrespective of their assignment to experimental groups—worked on the same activities that had an identical learning design. The only difference between the experimental and control groups was the type of feedback the students received after completing the assignment. Figure 1 provides a schematic overview of the design of this study.
[See PDF for image]
Fig. 1
Overview of the design of this study. Sequence of events from left to right. HIF = Highly informative feedback; SF = Simple feedback. Assignment to feedback conditions occurred via randomization
After the CSCL task was completed, students were given individual feedback. To access the feedback, students navigated to an element entitled “Your feedback.” This opened a page on which the students were greeted with their names and received either the HIF or simple feedback, depending on their treatment condition. Below the feedback, students were asked to respond to items about their perceptions and evaluations of the feedback received. These items were the dependent variables of this study and will be described in Section “Measures”.
Sample
Overall, N = 296 students across 59 collaborative groups (mean group size = 5.24, minimum group size = 3, maximum group size = 7) participated in the collaborative learning task. Students were first-semester bachelor students studying in a teacher education program at a large German university. Of the total sample, n = 201 responded to the questionnaire items after completing the learning activity and receiving feedback. We excluded four cases as these students did not fulfill the minimum participation requirements to pass the learning task, and we suspected that these cases could introduce bias; their feedback perceptions may be overshadowed by negative reactions to not passing the task. To ensure that this exclusion decision did not impact our results, we report a robustness check in section “Robustness check”. Finally, we excluded one case for excessive straightlining across multiple items and variables. The final sample consisted of n = 196 (women = 156, men = 33, non-binary/diverse = 1). Randomization into feedback groups yielded cell sizes of nCG = 97 for the simple feedback and nEG = 99 for the HIF. The groups did not differ significantly in their gender composition (χ² = 1.69, p = 0.429). Further sample characteristics can be found in Supplementary Material A, available at https://doi.org/10.17605/OSF.IO/JCS6W.
Learning task
In a first-semester course on foundations of teaching and educational assessment, students learned about cognitive activation—an essential dimension of teaching quality involving deep learning, elaboration, and higher-order thinking (Klieme et al., 2001; Praetorius et al., 2018; Junghans, 2022). To help students identify and reason about cognitive activation in classroom settings, a computer-supported collaborative learning (CSCL) task was implemented. CSCL has been shown to enhance conceptual learning and promote collaborative skills (Jeong et al., 2019; Kreijns et al., 2023; Radkowitsch et al., 2020).
Students worked in small groups to evaluate two contrasting classroom videos, aiming to determine which lesson showed greater potential for cognitive activation. The task followed a macro-scripted structure with three phases: (1) identifying relevant teaching activities (with timestamps and descriptions), (2) analyzing the potential for cognitive activation using academic concepts and structured interaction prompts, and (3) composing a joint statement justifying their group decision. Groups collaborated asynchronously over one week in private Moodle forums, with clearly structured threads and task-specific instructions guiding each phase.
Scripts were used to encourage equitable participation and constructive discussion, in line with evidence that scripted collaboration fosters interdependence and productive group work (Kreijns et al., 2023; Vogel et al., 2021). Students were required to participate in each of the three phases by contributing at least one post per phase to receive credit for the task. This ensured individual accountability within the group-based format. A full task description, task instructions to students, and an exemplary joint statement are available in the Supplementary Material B, C, and D, respectively.
Simple feedback
The simple feedback in the control condition informed students whether they met the performance standards for the task, which were based on a minimum number of posts per phase. The feedback included three components: (1) a static summary of task expectations, (2) the student’s number of posts for each phase, providing a basic level of personalization, and (3) an individualized binary report on whether the task was successfully completed. While this feedback was personalized to the extent that it reflected each student’s performance against the standards, it was relatively static and did not offer insights into the quality of the collaborative process or the final product. Of note, this type of limited personalized feedback represented the status quo of previous iterations of this course. Annotated examples of both types of feedback messages (simple and highly informative) are available in Supplementary Material E.
The highly informative feedback using learning analytics
The HIF consisted of two main components: product feedback and process feedback. These components fulfill the key role of informing students about their performance outcome (the quality of their joint statement) and their process toward this goal (communication processes during collaboration). These feedback components were generated using two data sources: in-system behavior and open-answer data, following Deeva et al.’s (2021) classification. In-system behavior, drawn from students’ posts and responses on group-specific Moodle message boards, informed the process feedback. Open-answer data, consisting of the group’s joint statement, served as the basis for product feedback. This mixed feedback model primarily relied on data-driven elements but also incorporated expert-driven components (Deeva et al., 2021). Notably, the HIF also included the simple feedback, ensuring students in the HIF condition received basic performance information, such as task completion and number of posts.
Product component
The product component of the HIF focused on the quality of the joint statement developed by each collaborative group. To generate this feedback, we implemented a natural language processing (NLP) pipeline that analyzed each group’s final statement based on three key quality criteria: (1) appropriate use of academic terminology, (2) evidence-based referencing of the classroom videos using timestamps, and (3) the identification of cognitively activating moments. These criteria reflected the pedagogical goals of the task and were directly aligned with students’ learning objectives.
The feedback was assembled using a rule-based approach, with each section of the product feedback indicating whether and how the respective criterion was met. All students within a group received identical product feedback, as it was based on the shared statement they produced. To ensure validity, the NLP model used to generate this feedback was trained on a corpus of previously submitted student statements, which were manually annotated using a multi-label scheme. The classification system, the training data, and the implementation details of the NLP pipeline are described in Supplementary Material F.
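To make the rule-based assembly concrete, the following minimal Python sketch shows how criterion-level decisions could be turned into feedback sections. It is an illustration under assumptions rather than the study’s implementation (which is documented in Supplementary Material F): the criterion keys, message texts, and function name are hypothetical.

```python
# Minimal illustration of rule-based product feedback assembly.
# Criterion keys and message texts are hypothetical, not the study's actual templates.
from typing import Dict


def assemble_product_feedback(criteria: Dict[str, bool]) -> str:
    """Turn binary criterion decisions about a group's joint statement into feedback text."""
    sections = []

    # Criterion 1: appropriate use of academic terminology
    if criteria.get("academic_terminology", False):
        sections.append("Your statement uses relevant academic terminology appropriately.")
    else:
        sections.append("Try to ground your argument more explicitly in course concepts, "
                        "for example by naming the dimensions of cognitive activation.")

    # Criterion 2: evidence-based referencing of the videos via timestamps
    if criteria.get("timestamp_evidence", False):
        sections.append("You supported your claims with concrete timestamps from the classroom videos.")
    else:
        sections.append("Add timestamps from the classroom videos to back up your claims with evidence.")

    # Criterion 3: identification of cognitively activating moments
    if criteria.get("activation_moments", False):
        sections.append("You identified cognitively activating moments in the lesson.")
    else:
        sections.append("Point out specific moments in which learners were prompted to think deeply.")

    return "\n".join(sections)


# Example: hypothetical multi-label classifier output for one group's statement
example_labels = {"academic_terminology": True, "timestamp_evidence": False, "activation_moments": True}
print(assemble_product_feedback(example_labels))
```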
Process component
As described in more detail in Menzel et al. (2023), for the process component, we implemented indicators to extract information on students’ individual communication and discussion style within their group. Thus, most of our indicators worked on the level of communicative patterns and semantic relations between bits of communication. This builds on the Group Communication Analysis (GCA) work of Dowell et al. (2019), who showed that linguistic coordination and cohesion are fruitful for detecting students’ communication patterns, which, in turn, represent well the roles that students play in the collaborative process.
In line with previous GCA research, indicators were used to identify clusters of relatively consistent communication behavior patterns, that is, emergent roles that students inhabited during the discussion (e.g., Strijbos & Weinberger, 2010; Saqr & López-Pernas, 2022). This means that no specific roles were assigned; instead, group dynamics and role distribution emerged organically during the collaboration, within the constraints of the provided scripts. The analysis of interaction patterns and communicative behavior yielded seven emergent roles, each with distinctive characteristics, for example, influential actors, drivers, and left-behinds (see Menzel et al., 2023 for full descriptions of all roles).
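For illustration only, the sketch below shows one generic way to derive such emergent roles by clustering standardized per-student communication indicators. The actual indicators and clustering procedure follow Menzel et al. (2023) and Dowell et al. (2019); the choice of k-means, the simulated data, and the feature set here are assumptions.

```python
# Illustrative sketch: clustering per-student communication indicators into emergent roles.
# The indicator set, the data, and the use of k-means are assumptions for demonstration.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Rows = students, columns = hypothetical GCA-style indicators
# (e.g., participation, responsiveness, internal cohesion, newness).
indicators = rng.normal(size=(200, 4))

# Standardize so that no single indicator dominates the clustering.
scaled = StandardScaler().fit_transform(indicators)

# Seven clusters, matching the seven emergent roles reported in the study.
kmeans = KMeans(n_clusters=7, n_init=10, random_state=42).fit(scaled)
role_assignments = kmeans.labels_

# Centroids can then be inspected and labeled (e.g., "driver", "cautious learner")
# based on their characteristic indicator profiles.
print(np.round(kmeans.cluster_centers_, 2))
```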
These roles were then used to personalize feedback messages of the process component. We created feedback text templates for each role, which allowed us to (1) formulate feedback according to the strengths and weaknesses of each collaborative role and (2) design an additional layer of personalization within the roles by varying the text according to key indicators. When, for example, the student took the role of a driver, which was the most active role, they were commended for their performance, but also encouraged, if possible, to actively support other group members to contribute more in the future. When, instead, a student belonged to the cluster of cautious learners, they were reminded to be more coherent in their contributions and to explicitly resolve uncertainty once they became more confident during the discussion.
Feedback provision
Our feedback system generated personalized feedback messages using if–then rules. A similar, well-known tool for such purposes is OnTask (Pardo et al., 2019), which distributes personalized feedback via email messages. We opted against this approach to be able to integrate the feedback directly within the learning management system. We thus hoped to avoid issues such as noise or confounding factors due to students receiving their feedback outside of the controlled online environment. Therefore, a plugin for Moodle was purpose-built to provide personalized feedback messages within the learning environment. This plugin used Mustache, a templating system that executes basic logical operations using templates with placeholders. We developed two templates, one for the HIF and another for the simple feedback. First, the relevant template was triggered according to the experimental condition of a given student. Then, the plugin parsed the template, triggering sections of the template and replacing the placeholder with text according to the analytics indicators of this student. Through this, a personalized feedback message was built. Students accessed their personalized feedback by navigating to the feedback element of the corresponding course section in the Moodle environment.
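As a sketch of this templating logic, the following Python snippet renders a Mustache template with placeholders and conditional sections; the chevron library stands in for the plugin’s Mustache engine, and the field names and message texts are hypothetical rather than taken from the actual templates.

```python
# Sketch of Mustache-style template rendering with placeholders and conditional sections.
# Field names and texts are hypothetical; chevron stands in for the Moodle plugin's engine.
import chevron

template = """Hello {{first_name}},

you wrote {{posts_phase1}} post(s) in phase 1, {{posts_phase2}} in phase 2, and {{posts_phase3}} in phase 3.
{{#passed}}You have successfully completed the task.{{/passed}}
{{^passed}}Unfortunately, you did not meet the minimum participation requirement.{{/passed}}
{{#is_driver}}You drove the discussion forward; consider encouraging quieter group members next time.{{/is_driver}}
"""

# Analytics indicators for one (hypothetical) student, used to fill the placeholders
# and to trigger or suppress the conditional sections of the template.
student = {
    "first_name": "Alex",
    "posts_phase1": 2,
    "posts_phase2": 1,
    "posts_phase3": 3,
    "passed": True,      # from the participation indicators
    "is_driver": True,   # from the emergent-role classification
}

print(chevron.render(template, student))
```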
Measures
Feedback literacy instrument
As part of the baseline questionnaire at the start of the course, student feedback literacy was measured using the 21-item instrument developed by Woitt et al. (2025). This self-report tool assesses feedback literacy across two dimensions: feedback attitudes (e.g., “I believe that I can contribute to the value of feedback processes”) and feedback practices (e.g., “I reconsider and refine my learning strategies based on feedback”). With its parsimonious two-factor structure, the instrument captures key facets of the construct (Carless & Boud, 2018; Molloy et al., 2020). Notably, it includes feedback practices, covering behavioral engagement with feedback, which has been underrepresented in earlier instruments that focused primarily on attitudes (e.g., Song, 2022; Yildiz et al., 2022).
Woitt et al. (2025) described the instrument's psychometric properties, including internal consistency, factorial validity, and the capacity to discriminate between different levels of feedback literacy, with negligible differential item functioning by study program or gender. Students rated items on a five-point Likert scale from completely disagree to completely agree. In this study, feedback attitudes (M = 4.13, SD = 0.51, Min = 1.33, Max = 5) and feedback practices (M = 3.76, SD = 0.53, Min = 1.08, Max = 4.83) showed high internal consistency (Cronbach’s α = 0.84 and 0.85, respectively). These properties were supported in independent samples and through confirmatory analyses by Weidlich et al. (2025b).
Feedback perceptions
The feedback perception items were drawn from Scheffel’s (2017) validated framework of learning analytics quality indicators, focusing on criteria relevant to student perceptions. These criteria build on formative assessment (Black & Wiliam, 2009; Sadler, 1989) and self-regulated learning models (Butler & Winne, 1995; Winne & Hadwin, 1998). We condensed these ideas into a feedback perception outcome framework (FPOF; see Fig. 2), informed by research on student perceptions and feedback processing (e.g., Garino, 2020; Jonsson, 2013; Strijbos et al., 2021). The framework posits that feedback must first be understandable (Fig. 2, left) before it can be seen as helpful, and ultimately motivating for future learning (Fig. 2, right).
[See PDF for image]
Fig. 2
Feedback perception outcome framework
Beyond these broad perceptions, we included items targeting feedback perceptions specific to self-regulation (see horizontal boxes in Fig. 2). These cover: (2a) understanding learning progress, (2b) reflecting on learning behavior, and (2c) regulating learning behavior. Together, they capture how feedback helps students assess, interpret, and act on performance gaps. These items build on self-regulation research (Matcha et al., 2019) and internal feedback models (Nicol & McFarlane-Dick, 2006). Positioned between understandability and helpfulness in the FPOF, they reflect the assumption that metacognitive insights—once basic comprehension is achieved—enhance perceived helpfulness and, ultimately, learning motivation.
Descriptive data for the items of the framework were: “This feedback was understandable” (M = 3.37, SD = 0.7, Mdn = 3), “This feedback is helpful for my ongoing learning” (M = 2.76, SD = 0.88, Mdn = 3), “This feedback is helpful for understanding my learning progress” (M = 2.67, SD = 0.87, Mdn = 3), “This feedback is helpful for reflecting on my learning behavior” (M = 2.65, SD = 0.89, Mdn = 3), “This feedback helps me regulate my learning” (M = 2.66, SD = 0.88, Mdn = 3), “This feedback is motivating for my learning” (M = 2.83, SD = 0.84, Mdn = 3).
Analysis
To address our research questions, we estimated proportional odds cumulative link models, a type of generalized linear model suitable for ordered categorical outcomes like our feedback perception items. This modeling approach avoids the assumption of interval-level measurement and provides a better fit for Likert-type data.
Given that students worked in collaborative groups, we accounted for clustering in the data by employing multilevel modeling. Group membership was included as a random intercept, along with a random slope for the feedback condition, allowing us to model both between-group variance and differential responses to feedback across groups (Cress, 2008; Janssen et al., 2013). Multilevel modeling improves estimate precision even with modest group sizes (Clarke, 2008; Huang, 2018), and was thus applied throughout.
Fixed effects in the model included the feedback condition (HIF vs. simple), centered scores for the two feedback literacy dimensions (attitudes and practices), and their interactions with the feedback condition. Model coefficients are reported as odds ratios (Exp[B]), where values above one indicate greater odds of higher item ratings per unit increase in the predictor. The average cluster size after exclusions was 3.32 students per group (see “Sample”). Variance components were inspected to quantify group-level effects, and intraclass correlation coefficients (ICCs) were used to assess the extent of clustering. Model fit was summarized using marginal and conditional R2 values, capturing the variance explained by fixed effects alone and by the full model, respectively.
All analyses were conducted using the GAMLj module (Version 3.2.7) in jamovi (Version 2.4.8). The model formula was as follows:
Outcome ~ FB + FL_A + FL_B + FB:FL_A + FB:FL_B + (1 + FB | group_id).
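In equation form, a standard way to write this proportional odds cumulative link mixed model (with a logit link) is sketched below; the notation is ours, and the latent-scale residual variance of $\pi^2/3 \approx 3.29$ corresponds to the residual rows reported in Tables 1 and 2.

$$
\operatorname{logit} P(Y_{ij} \le k) = \theta_k - \bigl(\beta_1\,\mathrm{FB}_{ij} + \beta_2\,\mathrm{FL\_A}_{i} + \beta_3\,\mathrm{FL\_B}_{i} + \beta_4\,\mathrm{FB}_{ij}\mathrm{FL\_A}_{i} + \beta_5\,\mathrm{FB}_{ij}\mathrm{FL\_B}_{i} + u_{0j} + u_{1j}\,\mathrm{FB}_{ij}\bigr),
$$

where $Y_{ij}$ is the ordinal rating of student $i$ in group $j$, $\theta_k$ are the category thresholds (the 1|2, 2|3, and 3|4 rows in the tables), $u_{0j}$ and $u_{1j}$ are the group-level random intercept and random slope for the feedback condition, and odds ratios are obtained as $\mathrm{Exp}(B) = e^{\beta}$. On this latent scale, the intraclass correlation is commonly computed as

$$
\mathrm{ICC} = \frac{\sigma^2_{u_0}}{\sigma^2_{u_0} + \pi^2/3}.
$$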
Results
Effects of HIF
To address the first research question ("How effective is highly informative feedback generated by learning analytics according to students’ perceptions?"), we report the models for each feedback perception individually in the following. We first report the results for the broad feedback perceptions (understandable, helpful, and motivating) before reporting the results for the self-regulation-related feedback perceptions (understand learning progress, reflect on learning behavior, and regulate learning behavior). Figure 3 provides descriptive graphs for the main effects.
[See PDF for image]
Fig. 3
Response category frequencies for outcome variables as a function of feedback condition
Was this feedback understandable?
Feedback understandability was not significantly affected by condition or feedback literacy dimensions (see Table 1). Random variance and ICC were zero, indicating no group-level clustering. The model explained minimal variance (R2 = 0.01), suggesting that neither the feedback type nor individual differences in feedback literacy influenced how understandable students found the feedback.
Table 1. Proportional odds cumulative link models for broad feedback perceptions
|  | Understandable? |  | Helpful for ongoing learning? |  | Motivating for ongoing learning? |  |
|---|---|---|---|---|---|---|
| Fixed Effects Parameters | Coefficient (SE) | Exp(B) | Coefficient (SE) | Exp(B) | Coefficient (SE) | Exp(B) |
| Threshold 1\|2 | −3.90 (0.51)*** | 0.02 | −2.57 (0.29)*** | 0.08 | −3.02 (0.32)*** | 0.05 |
| Threshold 2\|3 | −2.32 (0.25)*** | 0.10 | −0.79 (0.18)*** | 0.45 | −0.80 (0.17)*** | 0.45 |
| Threshold 3\|4 | 0.08 (0.14) | 1.09 | 1.61 (0.21)*** | 4.98 | 1.40 (0.19)*** | 4.07 |
| HIF – SF | 0.16 (0.27) | 1.18 | 1.46 (0.31)*** | 4.30 | 0.18 (0.28) | 1.19 |
| FL Attit | 0.32 (0.38) | 1.37 | −0.56 (0.39) | 0.57 | 0.75 (0.37)* | 2.11 |
| FL Pract | −0.17 (0.33) | 0.85 | 0.06 (0.33) | 1.06 | 0.44 (0.34) | 1.55 |
| FB × FL Attit | 0.27 (0.76) | 1.31 | 1.74 (0.79)* | 5.71 | 1.66 (0.74)* | 5.25 |
| FB × FL Pract | −0.35 (0.67) | 0.70 | −0.85 (0.67) | 0.43 | 0.23 (0.67) | 1.26 |
| Random Components | Variance | ICC | Variance | ICC | Variance | ICC |
| Intercept \| Group | 0.00 | 0.00 | 0.24 | 0.07 | 0.00 | 0.03 |
| Condition \| Group | 0.00 |  | 0.22 |  | 0.15 |  |
| Residual | 3.29 |  | 3.29 |  | 3.29 |  |
| R2 marginal | .01 |  | .17 |  | .16 |  |
| R2 conditional | .01 |  | .22 |  | .16 |  |
*p <.05; ** p <.01; *** p <.001. Significant threshold coefficients are not highlighted in bold because they are not of substantive interest. SF = simple feedback. FB = feedback condition (HIF–SF)
Was this feedback helpful for students’ ongoing learning?
The model for perceived helpfulness explained a considerable amount of variance (R2cond = 0.22), with random effects indicating substantial group-level variability (see Table 1). Helpfulness ratings were more clustered by group than understandability ratings. The HIF condition had a significant positive effect: students were over four times more likely to rate the feedback as helpful compared to the control group. Additionally, a significant interaction emerged: students with higher feedback attitudes rated the HIF as especially helpful, indicating that perceptions of helpfulness were shaped by both feedback type and individual differences in feedback literacy.
Was this feedback motivating for students’ ongoing learning?
For motivation, there was no overall effect of feedback condition (see Table 1). However, feedback attitudes significantly predicted motivation perceptions, and a strong interaction emerged: each one-unit increase in attitudes increased the odds of a more positive rating by over five times in the HIF condition. The practices dimension showed no interaction. Although random intercept variance was zero, random slope variance indicated group-level differences in how the HIF influenced motivation. These results suggest that students’ motivational responses to feedback depend on individual attitudes and vary across collaborative contexts.
Did this feedback help students to understand their learning progress?
Feedback condition significantly predicted whether students felt the feedback helped them understand their learning progress (see Table 2). Students in the HIF condition had nearly triple the odds of endorsing higher agreement. No effects or interactions emerged for the feedback literacy dimensions. The model explained 16% of the variance (R2cond), with 6% due to group-level differences. Both intercepts and slopes varied by group, indicating that group context influenced how students interpreted progress-related aspects of the feedback.
Table 2. Proportional odds cumulative link models for self-regulation-related feedback perceptions
|  | Understand learning progress? |  | Reflect on learning behavior? |  | Regulate learning behavior? |  |
|---|---|---|---|---|---|---|
| Fixed Effects Parameters | Coefficient (SE) | Exp(B) | Coefficient (SE) | Exp(B) | Coefficient (SE) | Exp(B) |
| Threshold 1\|2 | −2.53 (0.29)*** | 0.08 | −2.53 (0.29)*** | 0.08 | −2.44 (0.25)*** | 0.09 |
| Threshold 2\|3 | −0.33 (0.16)* | 0.72 | −0.22 (0.17) | 0.80 | −0.31 (0.15)* | 0.74 |
| Threshold 3\|4 | 1.71 (0.20)*** | 5.55 | 1.64 (0.21)*** | 5.18 | 1.71 (0.20)*** | 5.52 |
| HIF – SF | 1.04 (0.29)*** | 2.82 | 1.26 (0.31)*** | 3.53 | 1.43 (0.28)*** | 4.19 |
| FL Attit | −0.46 (0.39) | 0.63 | −0.15 (0.37) | 0.86 | −0.01 (0.36) | 0.99 |
| FL Pract | 0.38 (0.33) | 1.47 | 0.17 (0.33) | 1.18 | 0.30 (0.33) | 1.35 |
| FB × FL Attit | 0.83 (0.77) | 2.30 | 0.74 (0.75) | 2.10 | 0.45 (0.73) | 1.57 |
| FB × FL Pract | −1.15 (0.66) | 0.32 | −0.45 (0.67) | 0.63 | −0.17 (0.65) | 0.84 |
| Random Components | Variance | ICC | Variance | ICC | Variance | ICC |
| Intercept \| Group | 0.25 | 0.07 | 0.20 | 0.09 | 0.00 | 0.03 |
| Condition \| Group | 0.25 |  | 0.31 |  | 0.00 |  |
| Residual | 3.29 |  | 3.29 |  | 3.29 |  |
| R2 marginal | .10 |  | .11 |  | .15 |  |
| R2 conditional | .16 |  | .18 |  | .15 |  |
*p <.05; ** p <.01; *** p <.001. Significant threshold coefficients are not highlighted in bold because they are not of substantive interest. SF = simple feedback. FB = feedback condition (HIF–SF)
Did this feedback help students to reflect on their learning behavior?
The HIF significantly improved students’ reflection on their learning behavior (see Table 2). Students in the experimental condition had nearly four times the odds of selecting higher response categories. The model explained 18% of the variance (R2cond), with fixed effects accounting for most of it (R2marg = 0.11). Substantial group-level variation in intercepts and slopes, along with a high ICC, suggests that collaborative context shaped students’ reflective interpretations. No significant effects or interactions emerged for feedback literacy dimensions.
Did this feedback help students regulate their learning behavior?
Feedback on learning regulation was significantly more effective in the HIF condition (see Table 2), with students over four times more likely to report higher agreement. Feedback literacy dimensions had no significant effects or interactions. Random effects explained no additional variance beyond the fixed effects (R2cond = R2marg = 0.15), and both intercept and slope variance were zero, suggesting that perceptions of feedback utility for regulation were consistent across groups.
What was the role of feedback literacy?
To address our second research question—how feedback literacy moderates student perceptions of highly informative feedback—we found that effects varied by feedback literacy level. Specifically, the attitudes dimension moderated responses to helpfulness and motivation items (see Table 1), while no effects emerged for the practices dimension or other outcomes. Figure 4 illustrates these interactions using response category probabilities by condition and attitude levels.
[See PDF for image]
Fig. 4
Response category probabilities (y-axis) for helpfulness (left) and motivation perceptions (right) plotted against feedback attitudes (x-axis). Response options 4 through 1 (top to bottom) are shown for the HIF (left-hand column) and the simple feedback (right-hand column)
In the simple feedback condition, students were less likely to perceive the feedback as helpful, as reflected in the lower probability of positive responses and higher likelihood of choosing the most negative category—illustrating the main effect of feedback condition. In the HIF condition, feedback attitudes had little influence (as shown by flat lines in Fig. 4). In contrast, in the simple feedback condition, more positive attitudes were unexpectedly associated with stronger rejection of the feedback, as seen in the rising probability of the lowest response category. For motivation, the HIF was more positively received by students with stronger feedback attitudes: the probability of selecting the highest category rose with more favorable attitudes. This suggests that while HIF was not broadly seen as motivating, students high in feedback attitudes appreciated it more. The pattern was reversed for negative categories, which declined with more positive attitudes. In contrast, motivation ratings for the simple feedback were mostly unaffected by attitudes, except for a slight drop in the top category at very low attitude levels.
Robustness check
To check whether excluding non-passing students (n = 4) influenced the results, we re-estimated all models with these cases included (see Supplementary Material G). The models explained slightly less variance on average (conditional R2 lower by ≈ 0.02), suggesting minor noise from these students, perhaps because failing the assignment led them to react differently to the feedback. Crucially, the key interaction effects for helpfulness and motivation remained intact, supporting the robustness of our findings. One exception emerged: feedback practices moderated perceptions of learning progress, but only in the simple feedback group. However, as this may have been an artifact of students’ reactions to their success (or failure) in the learning task, this finding will not be discussed further in the following sections.
Discussion
Effectiveness of highly informative learning analytics feedback
Given the well-established benefits of feedback for learning, using technology to deliver personalized feedback at scale is a compelling aim. The HIF approach leverages learning analytics and AI to provide students with targeted insights into both their collaborative process and the quality of their joint product. This discussion addresses our first research question: How effective is HIF, as perceived by students?
Our results indicate that students found the HIF significantly more helpful for ongoing learning and more effective in fostering understanding of their learning progress compared to simple feedback. Moreover, HIF supported reflection and regulation of learning behavior—key aspects of self-regulated learning (Nicol & McFarlane-Dick, 2006). These findings are in line with previous evidence showing that rich, detailed feedback tends to outperform simpler forms (Wisniewski et al., 2020; Van der Kleij et al., 2015). Importantly, both feedback types were rated as understandable, suggesting that the observed effects on self-regulation are not attributable to differences in comprehension. This is noteworthy given that students often struggle to interpret feedback meaningfully (Bouwer & Dirkx, 2023; Weaver, 2006), which can lead to disengagement (Burke, 2009). Feedback that is easily understood—even when superficially processed—lays the foundation for deeper engagement (Jonsson, 2013; Winstone et al., 2017). Our findings suggest that HIF achieved this while also fostering metacognitive and self-regulatory insights.
Feedback helpfulness is widely regarded as central to student perceptions of feedback (e.g., “usefulness” in Strijbos et al., 2021). In our study, students rated HIF as more helpful for their ongoing learning than the simple feedback. Such positive evaluations reflect students’ recognition of the feedback’s potential to support their learning and suggest that they could make sense of the feedback content (Garino, 2020; Jivet et al., 2020). These global perceptions of helpfulness are important, as they increase students’ willingness to engage with feedback and act on it. While the simple feedback conveyed critical information about task completion and participation levels, the added depth of HIF—especially its insights into group processes and product quality—was clearly valued by students.
The lack of a main effect of HIF on motivation aligns with prior research suggesting that detailed feedback does not necessarily enhance motivation (Fong et al., 2019; Wisniewski et al., 2020). In fact, richly detailed feedback can include more critical elements, which some students may find demotivating (Fong et al., 2018a, 2018b, 2019). Nevertheless, our findings suggest that HIF did not reduce motivation overall. This indicates that its benefits—in terms of insight and support for learning—outweigh any potential drawbacks related to critical content. It offers a promising picture of how detailed feedback can inform without undermining student motivation.
Students’ reported gains in (meta-)cognitive and self-regulatory insights can be interpreted through the lens of the FPOF. Those in the HIF condition reported a better understanding of their learning progress—indicating they could more clearly assess the gap between current performance and learning goals, a core aim of formative feedback (Black & Wiliam, 2009; Sadler, 1989). In our study, this gap was addressed via both the process and product components of HIF, offering insight into (1) how students’ communication aligned with productive collaboration and (2) how well their joint statement met quality criteria. Students also reported improved reflection on their learning behavior, suggesting a clearer awareness of strengths and weaknesses. This enhanced understanding enabled them to adjust their approach, supporting learning regulation. Altogether, these findings show that HIF fosters key self-regulatory perceptions—central to promoting adaptive learning behaviors.
Finally, we observed that some feedback perceptions—especially helpfulness, understanding learning progress, and reflecting on learning behavior—varied by collaborative group, while others, including understandability, motivation, and regulation, were unaffected by group clustering. This suggests that the collaborative dynamics shaped how students evaluated certain aspects of the feedback. At the same time, the consistency of effects across other outcomes indicates that the benefits of HIF are robust and not merely a function of group composition. From an analytical standpoint, these findings support the value of multilevel modeling in CSCL research (Cress, 2008; Janssen et al., 2013).
Overall, our findings suggest that HIF is a desirable form of feedback. Students derived richer, learning-relevant insights from it, particularly in terms of reflection and self-regulation. Crucially, this more elaborate feedback did not result in uniformly negative motivational reactions, alleviating concerns that critical or dense feedback might undermine student engagement. Instead, HIF appeared to support key aspects of formative assessment without eliciting strong negative responses, underscoring its value for future learning interventions.
The role of feedback literacy
Feedback literacy—students’ understanding of, dispositions toward, and capacities to use feedback (Carless & Boud, 2018)—offers a crucial learner-centered lens on how feedback is processed. While feedback design principles remain central (Kluger & DeNisi, 1996), understanding how students interpret and react to feedback is equally important. Our analysis integrated both perspectives by examining feedback literacy as a moderator of feedback perceptions, addressing our second research question: How does student feedback literacy moderate the effects of highly informative feedback using learning analytics?
The key finding is that the attitudes dimension of feedback literacy interacted with student perceptions of feedback helpfulness and motivation. This dimension captures openness to feedback, beliefs about its role, and students’ sense of agency (Woitt et al., 2025). Conceptually, these facets are likely to shape how students receive and appraise feedback. In our study, students with stronger feedback attitudes judged HIF and simple feedback differently, particularly regarding its perceived usefulness and motivational value—underscoring how students’ preconceptions shape their experience of different feedback designs.
We found that students with less productive feedback attitudes perceived the simple feedback as nearly as helpful as the HIF, while those with stronger attitudes rated the simple feedback notably lower. Perceptions of helpfulness are a core part of students’ initial appraisal of feedback and figure prominently in measures of feedback receptivity (Lipnevich et al., 2021; Strijbos et al., 2021). Feedback-literate students are more attuned to performance-related information and actively seek guidance for improvement—so they likely saw the simple feedback as insufficient. In contrast, students with less beneficial attitudes appreciated basic performance confirmation, which both feedback types offered. This supports the view that perceptions of helpfulness are shaped by students’ expectations and beliefs about feedback (Jonsson, 2013), and our findings extend this by showing that feedback attitudes moderate this effect depending on the richness of the feedback.
Similarly, students with more positive feedback attitudes found the HIF more motivating than peers with less favorable attitudes, whereas attitudes had little effect on motivation ratings for simple feedback. This aligns with self-determination theory (Deci & Ryan, 2000), which suggests that detailed feedback can be perceived as autonomy-supportive or controlling, depending on students’ predispositions. While feedback-literate students may view rich feedback as empowering, others may find it overwhelming or overly critical (Nicol & McFarlane-Dick, 2006; Fong et al., 2019). These results suggest that students’ motivation to act on feedback is not determined by feedback design alone, but also by their readiness to process it. Thus, preparing students—by building awareness of feedback’s purpose and supporting their interpretive skills—could enhance engagement with high-information feedback.
Interestingly, we found no moderating effects of feedback literacy on the other feedback perception outcomes. This is noteworthy because one might expect feedback literacy—especially given its links to self-regulated learning—to shape how students interpret feedback related to reflection or regulation. For instance, the practices dimension captures metacognitive engagement, such as adapting learning strategies or using feedback to plan next steps. Yet, our findings suggest that even students with lower levels of feedback literacy perceived the HIF as offering valuable self-regulatory insights. This indicates that well-designed, pedagogically grounded feedback can support reflection and regulation regardless of prior literacy, a promising result for designing inclusive feedback systems.
That said, the feedback practices dimension did not moderate any outcomes in our study. One interpretation is that this dimension, which emphasizes concrete feedback-use behaviors, primarily affects what students do after they process feedback—rather than how they initially perceive it. This view aligns with broader conceptualizations of feedback literacy, where taking action is often framed as the final step (Carless & Boud, 2018). Because our study focused on immediate perceptions rather than follow-up behavior, it is plausible that practices-related effects were not yet observable. Future research could explore this by tracking whether students with stronger feedback practices are more likely to implement suggestions, make revisions, or modify their study behavior based on feedback.
In sum, our findings have several implications. First, they suggest that feedback literacy—particularly the attitudes dimension—can buffer students from the potential demotivating effects of detailed feedback (Wisniewski et al., 2020; Fong et al., 2019). This implies that educators should account for students’ readiness to engage with complex feedback, especially when it includes critique. Second, students with stronger feedback attitudes found simple feedback less helpful, indicating that such students may not benefit from performance-only messages. In contexts where highly personalized feedback is resource-intensive, this insight supports a more strategic allocation of effort—tailoring richer feedback to students most likely to benefit.
Although our results do not provide unequivocal support for prioritizing feedback literacy development, as recently advocated (Little et al., 2024), they underscore its relevance in shaping how feedback is received. At present, enhancing feedback quality itself may yield the most immediate benefits, particularly in higher education settings where instructional resources are stretched. Our results point to the value of investing in detailed, data-informed feedback that supports learning and reflection across ability levels. As research into feedback literacy advances, a dual strategy—improving both feedback design and students’ capacity to use it—may prove most effective for fostering meaningful learning.
For effective feedback design, this study suggests layering process-focused and product-focused feedback, tailoring messages to individual learners’ roles or participation patterns, and aligning feedback indicators tightly with the task’s learning objectives. These design principles, coupled with established best practices (e.g., clarity, actionable guidance, and timely delivery) and learning analytics techniques for personalization, can make feedback more immediately useful and relevant to students. At the same time, educational institutions should cultivate students’ feedback literacy by embedding it into the curriculum through structured activities (for instance, guided self-assessment exercises, peer review sessions, opportunities for students to proactively request feedback, and reflective e-portfolio use) to build learners’ skills and dispositions in engaging with feedback (Coppens et al., 2025; Malecka et al., 2022; Winstone et al., 2019). By simultaneously enhancing the quality of feedback and students’ ability to leverage it, instructors and institutions can maximize the benefits of feedback provision in higher education.
Limitations
This study has some limitations, the most important of which are noted here. One limitation stems from the use of single-item measures for feedback perceptions. This allowed a seamless experience for students between processing the feedback and reporting their perceptions, because a small set of items was displayed directly below the feedback, and it kept response effort and, presumably, survey fatigue low. However, the baseline questionnaire was relatively long and the learning tasks were accompanied by repeated post-questionnaires, so overall response burden was nonetheless considerable. Multi-item measures have the benefit of allowing latent variable modeling; future research may reap the psychometric benefits of that approach by using comprehensive instruments to measure their dependent variables.
While we found a quantitative approach to estimating the role of feedback literacy to be a fruitful change of perspective for this literature, qualitative and mixed-methods approaches could provide deeper insights into how students interpret and engage with feedback. In particular, mixed-methods designs are a promising avenue for future research with regard to both gathering causal evidence and representing the complexities of feedback processes.
Further, our study focused on feedback perceptions. As feedback perceptions and subsequent appraisals of feedback are an essential element of feedback processing (Winstone & Nash, 2023), this is not a limitation per se. However, the ultimate gauge of feedback effects is behavior change and learning achievement. Based on insights into feedback processing, subsequent research should aim to establish the objective learning effectiveness of HIF and the role of feedback literacy within this context.
Due to our focus on one specific type of HIF, the results of the comparison of feedback conditions in our field experiment will likely not generalize to all learning contexts. This also applies to our results concerning feedback literacy, which may play a quite different role in other feedback interventions, for example, when feedback is not generated via learning analytics. Future research across many different feedback interventions and learning tasks will hopefully lead to a more comprehensive and nuanced view of feedback literacy in the context of technology-facilitated feedback.
Lastly, our findings emerged from a computer-supported collaborative learning (CSCL) scenario, characterized by social interdependence, shared responsibility, and rich communicative interactions. Such contexts may shape students’ feedback perceptions differently than individual tasks—for instance, students might interpret feedback about group processes through the lens of social dynamics such as cohesion or perceived equity. Additionally, individual feedback literacy could influence both personal engagement with feedback and how feedback meaning is negotiated within groups. Future research should investigate how these collaborative aspects specifically interact with feedback literacy and perceptions, ideally through direct comparisons with individual learning scenarios.
Conclusions
This study shows that highly informative feedback generated through learning analytics is generally perceived as more effective than simpler feedback, especially by students with higher feedback literacy. Compared to simple feedback, HIF was experienced as more helpful and insightful, supporting students’ reflection and self-regulation. These findings underline the value of investing in well-designed feedback systems and highlight feedback literacy as an important individual difference in how feedback is received. As personalized feedback remains difficult to scale in higher education, our results point to the potential of learning analytics to deliver meaningful support—provided that students are adequately prepared to use it.
Acknowledgements
The authors would like to thank Gráinne Newcombe for her valuable and comprehensive comments on an earlier version of this manuscript.
Author contributions
AFI and AFr developed the research design in conjunction with HD, IJ, YJ, and JW. JW led the data analysis and interpretation as well as manuscript preparation and writing. All authors contributed to sections of the manuscript and have read and approved the final manuscript before submission.
Funding
Distr@l – Förderprogramm Digitalisierung stärken – Transfer leben. Ministerium für Digitale Strategie und Entwicklung. Open Access funded by Project DEAL.
Availability of data and materials
Data for this research can be made available on request to the corresponding author. Supplementary material can be found online at the Open Science Framework (OSF) at https://doi.org/10.17605/OSF.IO/JCS6W.
Declarations
Competing interests
The authors declare no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
Ariely, M; Nazaretsky, T; Alexandron, G. Causal-mechanical explanations in biology: Applying automated assessment for personalized learning in the science classroom. Journal of Research in Science Teaching; 2024; [DOI: https://dx.doi.org/10.1002/tea.21929]
Banihashem, SK; Noroozi, O; van Ginkel, S; Macfadyen, LP; Biemans, HJ. A systematic review of the role of learning analytics in enhancing feedback practices in higher education. Educational Research Review; 2022; [DOI: https://dx.doi.org/10.1016/j.edurev.2022.100489]
Biedermann, D., Schneider, J., Ciordas-Hertel, G. P., Eichmann, B., Hahnel, C., Goldhammer, F., & Drachsler, H. (2023). Detecting the Disengaged Reader-Using Scrolling Data to Predict Disengagement during Reading. In LAK23: 13th International Learning Analytics and Knowledge Conference (pp. 585–591).
Black, P; Wiliam, D. Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability (Formerly: Journal of Personnel Evaluation in Education); 2009; 21, pp. 5-31.
Boud, D; Dawson, P. What feedback literate teachers do: an empirically-derived competency framework. Assessment & Evaluation in Higher Education; 2023; 48,
Bouwer, R; Dirkx, K. The eye-mind of processing written feedback: Unraveling how students read and use feedback for revision. Learning and Instruction; 2023; 85, 101745.
Broadbent, J; Poon, WL. Self-regulated learning strategies and academic achievement in online higher education learning environments: A systematic review. The Internet and Higher Education; 2015; 27, pp. 1-13.
Brookhart, SM. Lipnevich, AA; Smith, JK. Summative and formative feedback. Cambridge handbook of instructional feedback; 2018; Cambridge University Press: pp. 52-78.
Burke, D. Strategies for using feedback students bring to higher education. Assessment and Evaluation in Higher Education; 2009; 34, pp. 41-50.
Butler, DL; Winne, PH. Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research; 1995; 65,
Carless, D; Boud, D. The development of student feedback literacy: Enabling uptake of feedback. Assessment and Evaluation in Higher Education; 2018; 43,
Cavalcanti, AP; Barbosa, A; Carvalho, R; Freitas, F; Tsai, YS; Gašević, D; Mello, RF. Automatic feedback in online learning environments: A systematic literature review. Computers and Education: Artificial Intelligence; 2021; 2, 100027.
Clarke, P. When can group level clustering be ignored? Multilevel models versus single-level models with sparse data. Journal of Epidemiology and Community Health; 2008; 62,
Coppens, K; Van den Broeck, L; Winstone, N; Langie, G. A mixed method approach to exploring feedback literacy through student self-reflection. Assessment and Evaluation in Higher Education; 2025; 50,
Cress, U. The need for considering multilevel analysis in CSCL research—An appeal for the use of more advanced statistical methods. International Journal of Computer-Supported Collaborative Learning; 2008; 3, pp. 69-84.
Dawson, P; Yan, Z; Lipnevich, A; Tai, J; Boud, D; Mahoney, P. Measuring what learners do in feedback: The feedback literacy behaviour scale. Assessment and Evaluation in Higher Education; 2024; 49,
Deeva, G; Bogdanova, D; Serral, E; Snoeck, M; De Weerdt, J. A review of automated feedback systems for learners: Classification framework, challenges and opportunities. Computers & Education; 2021; 162, 104094.
Dent, AL; Koenka, AC. The relation between self-regulated learning and academic achievement across childhood and adolescence: A meta-analysis. Educational Psychology Review; 2016; 28,
Dowell, NM; Nixon, TM; Graesser, AC. Group communication analysis: A computational linguistics approach for detecting sociocognitive roles in multiparty interactions. Behavior Research Methods; 2019; 51, pp. 1007-1041.
Drachsler, H. Towards highly informative learning analytics; 2023; Open Universiteit:
Fong, CJ; Patall, EA; Vasquez, AC; Stautberg, S. A meta-analysis of negative feedback on intrinsic motivation. Educational Psychology Review; 2019; 31, pp. 121-162.
Fong, CJ; Schallert, DL; Williams, KM; Williamson, ZH; Warner, JR; Lin, S; Kim, YW. When feedback signals failure but offers hope for improvement: A process model of constructive criticism. Thinking Skills and Creativity; 2018; 30, pp. 42-53.
Fong, CJ; Williams, KM; Williamson, ZH; Lin, S; Kim, YW; Schallert, DL. “Inside out”: Appraisals for achievement emotions from constructive, positive, and negative feedback on writing. Motivation and Emotion; 2018; 42, pp. 236-257.
Garino, A. Ready, willing and able: A model to explain successful use of feedback. Advances in Health Sciences Education; 2020; 25,
Gašević, D; Dawson, S; Siemens, G. Let’s not forget: Learning analytics are about learning. TechTrends; 2015; 59, pp. 64-71.
Gombert, S; Fink, A; Giorgashvili, T; Jivet, I; Di Mitri, D; Yau, J; Drachsler, H. From the automated assessment of student essay content to highly informative feedback: A case study. International Journal of Artificial Intelligence in Education; 2024; 34,
Guasch, T; Espasa, A. Collaborative writing online: Unravelling the feedback process. Learning and Teaching Writing Online; 2015; Brill: pp. 13-30.
Guzmán-Valenzuela, C; Gómez-González, C; Rojas-Murphy Tagle, A. Learning analytics in higher education: A preponderance of analytics but very little learning?. International Journal of Educational Technology in Higher Education; 2021; 18, 23. [DOI: https://dx.doi.org/10.1186/s41239-021-00258-x]
Hattie, J; Timperley, H. The power of feedback. Review of Educational Research; 2007; 77,
Henderson, M; Molloy, E; Ajjawi, R; Boud, D. Designing feedback for impact. The impact of feedback in higher education: Improving assessment outcomes for learners; 2019; Springer International Publishing: pp. 267-285.
Henderson, M; Phillips, M; Ryan, T; Boud, D; Dawson, P; Molloy, E; Mahoney, P. Conditions that enable effective feedback. Higher Education Research and Development; 2019; 38,
Huang, FL. Multilevel modeling myths. School Psychology Quarterly; 2018; 33,
Janssen, J; Cress, U; Erkens, G; Kirschner, PA. Multilevel analysis for the analysis of collaborative learning. The international handbook of collaborative learning; 2013; Routledge: pp. 112-125.
Jeong, H; Hmelo-Silver, CE; Jo, K. Ten years of computer-supported collaborative learning: A meta-analysis of CSCL in STEM education during 2005–2014. Educational Research Review; 2019; 28, 100284.
Jivet, I., Scheffel, M., Specht, M., & Drachsler, H. (2018). License to evaluate: Preparing learning analytics dashboards for educational practice. In Proceedings of the 8th international conference on learning analytics and knowledge (pp. 31–40).
Jivet, I; Scheffel, M; Schmitz, M; Robbers, S; Specht, M; Drachsler, H. From students with love: An empirical study on learner goals, self-regulated learning and sense-making of learning analytics in higher education. The Internet and Higher Education; 2020; 47, 100758.
Jonsson, A. Facilitating productive use of feedback in higher education. Active Learning in Higher Education; 2013; 14,
Junghans, C. (2022). Beobachtung und Beurteilung von Lehr- Lernprozessen–eine Professionalisierungsgelegenheit mit Doppeldeckerpotenzial. In Seminar (Vol. 28, pp. 116–136). wbv Publikation.
Karademir, O; Di Mitri, D; Schneider, J; Jivet, I; Allmang, J; Kubsch, M; Neumann, K; Drachsler, H. I don’t have time! But keep me in the loop: Co-designing requirements for a learning analytics cockpit with teachers. Journal of Computer Assisted Learning; 2024; [DOI: https://dx.doi.org/10.1111/jcal.12997]
Keuning, H; Jeuring, J; Heeren, B. A systematic literature review of automated feedback generation for programming exercises. ACM Transactions on Computing Education; 2018; 19,
Klieme, E., Schümer, G., & Knoll, S. (2001). Mathematikunterricht in der Sekundarstufe I: “Aufgabenkultur” und Unterrichtsgestaltung. In BMBF (Ed.), TIMSS—Impulse für Schule und Unterricht, Forschungsbefunde, Reforminitiativen, Praxisberichte und Video-Dokumente (pp. 43–58). Bonn: BMBF.
Kluger, AN; DeNisi, A. The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin; 1996; 119,
Knight, S; Shibani, A; Abel, S; Gibson, A; Ryan, P. AcaWriter: A learning analytics tool for formative feedback on academic writing. Journal of Writing Research; 2020; [DOI: https://dx.doi.org/10.17239/jowr-2020.12.01.06]
Kreijns, K; Weidlich, J; Kirschner, PA. Pitfalls of social interaction in online group learning. Cambridge handbook of Cyber behavior; 2023; Cambridge University Press:
Lim, LA; Dawson, S; Gašević, D; Joksimović, S; Pardo, A; Fudge, A; Gentili, S. Students’ perceptions of, and emotional responses to, personalised learning analytics-based feedback: An exploratory study of four courses. Assessment and Evaluation in Higher Education; 2021; 46,
Lipnevich, AA; Gjicali, K; Asil, M; Smith, JK. Development of a measure of receptivity to instructional feedback and examination of its links to personality. Personality and Individual Differences; 2021; 169, 110086.
Little, T; Dawson, P; Boud, D; Tai, J. Can students’ feedback literacy be improved? A scoping review of interventions. Assessment and Evaluation in Higher Education; 2024; 49,
Malecka, B; Boud, D; Carless, D. Eliciting, processing and enacting feedback: Mechanisms for embedding student feedback literacy within the curriculum. Teaching in Higher Education; 2022; 27,
Matcha, W; Gašević, D; Pardo, A. A systematic review of empirical studies on learning analytics dashboards: A self-regulated learning perspective. IEEE Transactions on Learning Technologies; 2019; 13,
Menzel, L; Gombert, S; Weidlich, J; Fink, A; Frey, A; Drachsler, H. Why you should give your students automatic process feedback on their collaboration: Evidence from a randomized experiment. European Conference on Technology Enhanced Learning; 2023; Springer Nature Switzerland: pp. 198-212.
Molloy, E; Boud, D; Henderson, M. Developing a learning-centred framework for feedback literacy. Assessment and Evaluation in Higher Education; 2020; 45,
Motz, BA; Carvalho, PF; de Leeuw, JR; Goldstone, RL. Embedding experiments: Staking causal inference in authentic educational contexts. Journal of Learning Analytics; 2018; 5,
Mousavi, A; Schmidt, M; Squires, V; Wilson, K. Assessing the effectiveness of student advice recommender agent (SARA): The case of automated personalized feedback. International Journal of Artificial Intelligence in Education; 2021; 31, pp. 603-621.
Narciss, S. Jonassen, D; Spector, MJ; Driscoll, M; Merrill, MD; Merrienboer, J; Driscoll, MP. Feedback strategies for interactive learning tasks. Handbook of research on educational communications and technology; 2008; 3rd ed. Routledge: pp. 125-144.
Nicol, DJ; Macfarlane-Dick, D. Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education; 2006; 31,
Nieminen, JH; Carless, D. Feedback literacy: A critical review of an emerging concept. Higher Education; 2023; 85,
Panadero, E., Lipnevich, A., Broadbent, J. (2019). Turning self-assessment into self-feedback. The impact of feedback in higher education: Improving assessment outcomes for learners, 147–163.
Panadero, E. Toward a paradigm shift in feedback research: Five further steps influenced by self-regulated learning theory. Educational Psychologist; 2023; 58,
Pardo, A; Jovanovic, J; Dawson, S; Gašević, D; Mirriahi, N. Using learning analytics to scale the provision of personalised feedback. British Journal of Educational Technology; 2019; 50,
Pecaric, M; Boutis, K; Beckstead, J; Pusic, M. A big data and learning analytics approach to process-level feedback in cognitive simulations. Academic Medicine; 2017; 92,
Praetorius, AK; Klieme, E; Herbert, B; Pinger, P. Generic dimensions of teaching quality: The German framework of three basic dimensions. ZDM; 2018; 50, pp. 407-426.
Radkowitsch, A; Vogel, F; Fischer, F. Good for learning, bad for motivation? A meta-analysis on the effects of computer-supported collaboration scripts. International Journal of Computer-Supported Collaborative Learning; 2020; 15, pp. 5-47.
Rienties, B; Køhler Simonsen, H; Herodotou, C. Defining the boundaries between artificial intelligence in education, computer-supported collaborative learning, educational data mining, and learning analytics: A need for coherence. Frontiers in Education; 2020; [DOI: https://dx.doi.org/10.3389/feduc.2020.00128]
Ross, SM; Morrison, GR; Lowther, DL. Educational technology research past and present: Balancing rigor and relevance to impact school learning. Contemporary educational technology; 2010; 1,
Ryan, RM; Deci, EL. Intrinsic and extrinsic motivations: Classic definitions and new directions. Contemporary Educational Psychology; 2000; 25,
Ryan, T; Gašević, D; Henderson, M. Identifying the impact of feedback over time and at scale: Opportunities for learning analytics. The impact of feedback in higher education: Improving assessment outcomes for learners; 2019; Springer International Publishing: pp. 207-223.
Sadler, DR. Formative assessment and the design of instructional systems. Instructional Science; 1989; 18, pp. 119-144.
Saqr, M; López-Pernas, S. How CSCL roles emerge, persist, transition, and evolve over time: A four-year longitudinal study. Computers and Education; 2022; 189, 104581.
Song, BK. Bifactor modelling of the psychological constructs of learner feedback literacy: Conceptions of feedback, feedback trust and self-efficacy. Assessment and Evaluation in Higher Education; 2022; 47,
Strijbos, JW; Pat-El, R; Narciss, S. Structural validity and invariance of the feedback perceptions questionnaire. Studies in Educational Evaluation; 2021; 68, 100980.
Strijbos, JW; Weinberger, A. Emerging and scripted roles in computer-supported collaborative learning. Computers in Human Behavior; 2010; 26,
Susnjak, T; Ramaswami, GS; Mathrani, A. Learning analytics dashboard: A tool for providing actionable insights to learners. International Journal of Educational Technology in Higher Education; 2022; 19,
Sutton, P. Conceptualizing feedback literacy: Knowing, being, and acting. Innovations in Education and Teaching International; 2012; 49,
Tempelaar, D; Rienties, B; Giesbers, B. Dispositional learning analytics and formative assessment: An inseparable twinship. International Journal of Educational Technology in Higher Education; 2024; 21,
Theobald, M. Self-regulated learning training programs enhance university students' academic performance, self-regulated learning strategies, and motivation: A meta-analysis. Contemporary Educational Psychology; 2021; 66, 101976.
Tsai, YS. Why feedback literacy matters for learning analytics. International Conference of the Learning Sciences 2022; 2022; International Society of the Learning Sciences: pp. 27-34.
Van der Kleij, FM; Feskens, RC; Eggen, TJ. Effects of feedback in a computer-based learning environment on students’ learning outcomes: A meta-analysis. Review of Educational Research; 2015; 85,
Vogel, F; Weinberger, A; Fischer, F. Collaboration scripts: Guiding, internalizing, and adapting. International handbook of computer-supported collaborative learning; 2021; Springer International Publishing: pp. 335-352.
Weaver, MR. Do students value feedback? Student perceptions of tutors’ written responses. Assessment and Evaluation in Higher Education; 2006; 31, pp. 379-394.
Weidlich, J; Fink, A; Jivet, I; Yau, J; Giorgashvili, T; Drachsler, H; Frey, A. Emotional and motivational effects of automated and personalized formative feedback: The role of reference frames. Journal of Computer Assisted Learning; 2024; 40,
Weidlich, J; Gašević, D; Drachsler, H. Causal inference and bias in learning analytics: A primer on pitfalls using directed acyclic graphs. Journal of Learning Analytics; 2022; 9,
Weidlich, J; Jivet, I; Woitt, S; Orhan Göksün, D; Kraus, J; Drachsler, H. The student feedback literacy instrument (SFLI): Multilingual validation and introduction of a short-form version. Assessment and Evaluation in Higher Education; 2025; [DOI: https://dx.doi.org/10.1080/02602938.2025.2451729]
Wiliam, D. Feedback: At the heart of-but definitely not all of-formative assessment. The Cambridge handbook of instructional feedback; 2018; Cambridge University Press: pp. 3-28.
Wilson, A; Watson, C; Thompson, TL; Drew, V; Doyle, S. Learning analytics: Challenges and limitations. Teaching in Higher Education; 2017; 22,
Winne, PH; Hadwin, AF. Hacker, D; Dunlosky, J; Graesser, A. Studying as self-regulated learning. Metacognition in Educational Theory and Practice; 1998; Erlbaum: pp. 277-304.
Winstone, NE; Boud, D. The need to disentangle assessment and feedback in higher education. Studies in Higher Education; 2022; 47,
Winstone, NE; Hepper, EG; Nash, RA. Individual differences in self-reported use of assessment feedback: The mediating role of feedback beliefs. Educational Psychology; 2021; 41,
Winstone, NE; Mathlin, G; Nash, RA. Building feedback literacy: Students’ perceptions of the developing engagement with feedback toolkit. Frontiers in Education; 2019; [DOI: https://dx.doi.org/10.3389/feduc.2019.00039]
Winstone, NE; Nash, RA; Parker, M; Rowntree, J. Supporting learners' agentic engagement with feedback: A systematic review and a taxonomy of recipience processes. Educational Psychologist; 2017; 52,
Winstone, NE; Nash, RA. Toward a cohesive psychological science of effective feedback. Educational Psychologist; 2023; 58,
Wisniewski, B; Zierer, K; Hattie, J. The power of feedback revisited: A meta-analysis of educational feedback research. Frontiers in Psychology; 2020; 10, 487662.
Woitt, S; Weidlich, J; Jivet, I; Orhan Göksün, D; Drachsler, H; Kalz, M. Students’ feedback literacy in higher education: An initial scale validation study. Teaching in Higher Education; 2025; 30,
Yildiz, H; Bozpolat, E; Hazar, E. Feedback literacy scale: A study of validation and reliability. International Journal of Eurasian Education and Culture; 2022; 7,
Zawacki-Richter, O; Marín, VI; Bond, M; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education–where are the educators?. International Journal of Educational Technology in Higher Education; 2019; 16,
Zhan, Y. Developing and validating a student feedback literacy scale. Assessment and Evaluation in Higher Education; 2022; 47,
Zimbardi, K; Colthorpe, K; Dekker, A; Engstrom, C; Bugarcic, A; Worthy, P; Long, P. Are they using my feedback? The extent of students’ feedback use has a large impact on subsequent academic performance. Assessment and Evaluation in Higher Education; 2017; 42,
Zimmerman, BJ. Attaining self-regulation: A social cognitive perspective. Handbook of self-regulation; 2000; Academic press: pp. 13-39.
© The Author(s) 2025. This work is published under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/).