Student feedback literacy is essential to academic writing in English as a foreign language (EFL) contexts. However, valid and reliable measures of student English writing feedback literacy (SEWFL) remain scarce, especially in the context of peer assessment. To address this gap, this study developed and validated a SEWFL scale for peer assessment with 937 Cambodian university students. The scale comprises two key components: feedback competencies and feedback dispositions. Feedback competencies encompass students' skills in eliciting, processing, giving, and enacting peer feedback, while feedback dispositions involve students' appreciation for giving and receiving feedback, readiness to engage, and commitment to change. Confirmatory factor analysis (CFA) and multi-group CFA confirmed the scale's validity and demonstrated its structural consistency across students of different genders and academic majors. Additionally, Pearson correlation analysis revealed significant correlations between the eight subdimensions of SEWFL and students' English writing goals. The SEWFL scale provides a discipline- and context-specific measure for researchers and teachers to better understand EFL students' SEWFL.
Introduction
Feedback has been recognized as a crucial component in improving the quality of students’ English academic writing (Hyland & Hyland, 2019). However, how much students benefit from feedback depends not only on how teachers provide it but, more importantly, on how students respond to it (Zhang & Mao, 2023). Therefore, there has been a growing emphasis on students taking an active role in the feedback process. This paradigm shift in feedback has led to heated theoretical discussion and empirical investigation into what feedback competencies and dispositions students need in order to exercise their agency in the feedback process, which has been conceptualized as student feedback literacy (SFL) (Zhan, 2022; Carless & Boud, 2018). However, most of these discussions and explorations have been confined to general learning contexts in higher education (e.g., Zhan, 2022, 2023; Dawson et al., 2024; Han & Xu, 2021; Molloy et al., 2020; Nieminen & Carless, 2023).
Recently, discipline-specific SFL has been highlighted because disciplinary tools and practices not only influence the form and content of feedback but also shape the way individuals understand and apply feedback (Winstone et al., 2022). Although certain feedback competencies and dispositions may be universal, adaptable, and relevant across various fields, much of SFL must be contextualized and aligned with discipline-specific demands to be fully comprehended and applied effectively (Malecka et al., 2022). In EFL academic writing contexts, corrective feedback (Lee, 2013; Sarré et al., 2021; Van Beuningen, 2010) and a process-oriented approach to writing (Conijn et al., 2022; Teng, 2022) are distinct disciplinary characteristics that impose specific demands on SFL.
SFL was initially proposed by scholars considering students’ agency in processing the feedback provided by teachers (e.g., Carless & Boud, 2018; Molloy et al., 2020; Dawson et al., 2024). Peer assessment is a particular learner-centred feedback practice which requires students to take active roles in both giving and receiving feedback (Zhan, 2024; Dong et al., 2023). In academic writing, peer assessment has increasingly been utilized by scholars and teachers as an effective method to support the drafting and revising phases of academic writing (Yu & Lee, 2016). However, SFL in peer assessment remains relatively underexamined. This is noteworthy, as peer assessment often demands specific attitudes and feedback-giving skills from students that may not be as essential in teacher-led feedback scenarios (Dong et al., 2023).
We argue that SFL needs to be specified considering the characteristics of English academic writing discipline and be situated in peer assessment contexts. The existing scales of SFL are more applicable for general learning and teacher-led feedback contexts and cannot fulfil such research needs. Therefore, this study sought to specify the framework of student English writing feedback literacy (SEWFL) in peer assessment and then develop and validate a SEWFL scale with Cambodian university students.
Literature review
SFL and SEWFL
A meta-analysis of student engagement in feedback processes, spanning studies from 1973 to 2019, reveals a shift from traditional information-delivery models toward more student-centered practices (Van der Kleij et al., 2019). This shift toward learner-centered feedback has spurred growing academic interest in the role of SFL (Carless & Boud, 2018; Molloy et al., 2020; Nieminen & Carless, 2023). The majority of research has conceptualized SFL as the combination of competencies and dispositions necessary for university students to effectively engage with and apply feedback to enhance their learning (e.g., Zhan, 2022; Carless & Boud, 2018; Malecka et al., 2022; Molloy et al., 2020; Song, 2022).
SFL was first introduced by Sutton (2012), who viewed it as an integral component of academic literacies, consisting of an epistemological dimension (understanding feedback), an ontological dimension (self-confidence), and a practical dimension (acting upon feedback). Carless and Boud (2018) expanded this idea and defined SFL as the “understanding, capacities, and dispositions required to interpret information and utilize it to enhance work or learning strategies” (p. 1315). They identified four core characteristics of feedback-literate students: “appreciating feedback processes,” “making judgments,” “managing affect,” and “taking action” (p. 1319). Molloy et al. (2020, p. 528) focused on “students’ ability to understand, utilise and benefit from feedback processes” to frame SFL. They developed a learning-centered framework that extended Carless and Boud’s (2018) conceptualization of SFL by emphasizing students’ feedback-seeking behaviors, reciprocal engagement, active roles, and preparation for lifelong learning.
Yu and Liu (2021) situate SFL within the context of academic writing, emphasizing that feedback-literate students are capable of interpreting context-specific writing feedback, regulating their emotional responses to critical and complex feedback, and applying reasoning and critical thinking to assess and utilize feedback effectively. Han and Xu (2020, p. 682) conceptualize SFL in English academic writing for peer assessment as “students’ cognitive and social-affective readiness to provide and use feedback”. Cognitive readiness involves the mental skills and understanding required to participate in peer assessment activities, while social-affective readiness encompasses the attitudes, emotions, and abilities necessary for fully engaging in these activities.
The existing SFL and SEWFL scales
Several studies suggest that feedback-literate students are psychologically capable individuals who can properly utilize feedback to prepare for a better life and career (Dawson et al., 2021; Winstone et al., 2022). Therefore, feedback literacy can be considered a psychological construct “that can be measured, tracked and developed” (Nieminen & Carless, 2023, p. 1390). We systematically searched EBSCO, Scopus and Web of Science using the keywords (i.e., feedback literacy, measure, assess, instrument, scale) in March 2025 and identified 11 SFL scales. Among the existing scales, only four address SFL in writing contexts; the rest concern SFL in general learning contexts, except for the scale developed by Chen et al. (2024) for mathematics discourse. Table 1 summarizes the existing SFL scales.
Table 1. Summary of the existing SFL scales
| Study | Scale | Dimensions |
|---|---|---|
| Song (2022) | Learner feedback literacy | • Conceptions of feedback • Feedback trust • Self-efficacy |
| Yildiz et al. (2022) | Feedback literacy | • Appreciation • Positive attitude • Awareness towards effective feedback • Openness to use feedback |
| Yu et al. (2022) | L2 student writing feedback literacy | • Appreciating feedback • Acknowledging different feedback sources • Making judgments • Managing affect • Taking action |
| Zhan (2022) | Student feedback literacy | • Capacities in eliciting feedback • Capacities in processing feedback • Capacities in enacting feedback • Appreciation of feedback • Readiness to engage • Commitment to change |
| Dong et al. (2023) | Peer feedback literacy in writing | • Feedback-related knowledge and abilities • Willingness to participate • Appreciation of peer feedback • Cooperative learning ability |
| Woitt et al. (2023) | Student feedback literacy in higher education | • Feedback attitudes • Feedback practices |
| Zhang et al. (2023) | L2 secondary student writing feedback literacy | • Using feedback • Evaluating feedback |
| Chen et al. (2024) | Mathematics discourse feedback skills | • Comparative analysis • Expressing communication • Mathematical reasoning • Monitor and adjust • Diagnostic evaluation • Implementation capacity |
| Dawson et al. (2024) | Feedback literacy behaviour | • Seeking feedback information • Making sense of information • Using feedback information • Providing feedback information • Managing affect |
| Teng & Ma (2024) | Metacognition-based student feedback literacy | • Knowing • Being • Doing |
| Weidlich et al. (2025) | Multilingual student feedback literacy instrument | • Feedback attitudes • Feedback practices |
In general learning contexts, Song (2022) and Yildiz et al. (2022) focused on students’ feedback disposition, trust and understanding, while Dawson et al. (2024) emphasized students’ feedback behaviour. Zhan (2022), Woitt et al. (2023) and Weidlich et al. (2025) took a balanced view of SFL, encompassing both feedback dispositions and feedback competencies. Unlike the scales of Woitt et al. (2023) and Weidlich et al. (2025), Zhan’s (2022) scale outlines feedback competencies aligned with the different stages of the feedback process (i.e., eliciting, processing and enacting), based on the belief that each stage of the feedback process requires distinct feedback competencies from students to achieve its purposes (Malecka et al., 2022).
In the writing contexts, four scales have been validated so far. Yu et al. (2022) created an L2 writing feedback literacy scale with five dimensions (appreciating feedback, acknowledging sources, making judgments, managing affect, and acting), which is invariant across genders, grades, and disciplines. Zhang et al. (2023) adapted this scale for secondary students, identifying two factors: using feedback and evaluating feedback, with the latter showing lower factor loadings due to students’ reliance on teacher authority. Teng and Ma (2024) focused on metacognition-based SFL for academic writing and developed a scale that highlighted metacognitive awareness and skills.
The scale developed by Dong et al. (2023) is highly relevant to this validation study, as it focuses on peer feedback literacy in academic writing and includes four key dimensions: feedback-related knowledge and abilities, cooperative learning, appreciation, and willingness to participate. However, there are certain aspects of the scale that may not fully capture the defining features of SEWFL in peer assessment contexts. For instance, combining feedback knowledge and abilities into a single dimension could lead to some ambiguity. Additionally, while the scale addresses students’ abilities to provide feedback to peers, it does not thoroughly explore their capacity to process and act on feedback. Furthermore, although the scale incorporates willingness to participate and appreciation of peer feedback, it does not explicitly consider students’ commitment to making changes based on feedback or distinguish between the value of giving and receiving feedback. These considerations highlight the need for a more comprehensive and refined scale to better assess SEWFL in peer assessment settings.
Conceptual framework of SEWFL in peer assessment
A conceptual framework of SEWFL in peer assessment was developed mainly based on Zhan's (2022) framework of SFL, with reference to other SFL frameworks, especially those developed for English academic writing contexts (e.g., Dawson et al., 2024; Dong et al., 2023; Han & Xu, 2021; Teng & Ma, 2024; Yu & Liu, 2021; Yu et al., 2022). It has two major components, namely feedback competencies and dispositions. Figure 1 outlines the framework of SEWFL in peer assessment.
[See PDF for image]
Fig. 1
The framework of SEWFL in peer assessment
The feedback competencies are categorized according to the specific feedback stages of eliciting, processing/giving and enacting. In the eliciting stage, students often use two main strategies to seek feedback: inquiry and monitoring. Inquiry involves directly asking others, such as peers, for their opinions on their work. Monitoring happens when students look for clues or cues, such as assessment criteria, examples, or conversations, to judge their progress, especially when they do not receive direct feedback. Unlike the framework proposed by Zhan (2022), which focuses on students receiving feedback, this framework operates within a peer assessment context where feedback is a reciprocal process. In this setting, students actively receive feedback from peers and also generate feedback for others (Li et al., 2012). Consequently, students engage in both processing and giving feedback. In the processing stage, students need to understand and evaluate the feedback they receive to decide what actions to take. This requires good sense-making skills and the ability to judge the quality of feedback. Meanwhile, they also need the corresponding competencies of evaluative judgement and giving actionable suggestions to their peers. In the enacting stage, students need to apply feedback to improve their work, which completes the feedback cycle. To do this, they should develop skills such as self-regulation, allowing them to set or change goals based on their reflections and others’ suggestions.
The feedback dispositions encompass appreciation for feedback in both giving and receiving, readiness to engage, and commitment to change. Zhan (2024) found that students showed emotional resistance to peer assessment while providing feedback to their peers. This resistance was partly due to their limited awareness of the educational benefits associated with giving feedback to peers. Thus, within this framework, students' appreciation for feedback is further divided into appreciation for giving feedback to peers and appreciation for receiving peer feedback. Regarding readiness to engage in the peer assessment process, students should control their emotions to handle difficult or complex feedback and be ready to admit mistakes, learn from peers, and accept helpful advice (Han & Xu, 2021; Yu & Liu, 2021). Therefore, they need to be open to critical comments and respect others’ opinions even if they do not agree with their peers. Students’ commitment to making changes represents their volition and investment of time and energy to follow up on the received peer feedback to refine their work.
This framework has distinctive disciplinary features of academic writing. In the context of EFL academic writing, corrective feedback is commonly used (Sarré et al., 2021; Van Beuningen, 2010). Corrective feedback concerns which errors to correct. Lee (2013) classified errors into global errors (i.e., errors relating to writing ideas and organization) and local errors (i.e., errors relating to language accuracy). It was found that students tended to revise local errors in corrective feedback, which is regarded as superficial feedback engagement (Zheng & Yu, 2018). To enhance feedback engagement, students need to deeply process global errors and make revisions, which poses specific requirements on students’ processing and enacting competencies. Additionally, English academic writing involves a series of goal-oriented, iterative processes that necessitate students to plan, monitor, and assess their writing activities (Teng & Ma, 2024). Metacognitive regulation processes are perceived as ‘potent catalysts for developing competence and promoting performance in writing’ (Harris et al., 2010, p. 231). Therefore, when students enact peer feedback, they need to employ metacognitive skills such as establishing realistic writing goals, making a feasible revision plan, monitoring their revision progress, and evaluating and reflecting on their revisions.
Methods
Participants
Participants for the scale validation were selected using convenience sampling (Etikan et al., 2016). This study was conducted in college English writing courses at a Cambodian university. With assistance from the university administrators, an email survey invitation was distributed to all enrolled students. A total of 1020 students agreed to participate, and ultimately 937 valid responses were collected. Among these participants, 338 were men (36.4%) and 591 were women (63.6%). Of the total students, 341 (36.7%) were business majors, while 587 (63.3%) pursued other majors.
Development procedure
Figure 2 outlines the sequential steps for scale development, as suggested by DeVellis and Thorpe (2021). The process began with a critical review of the existing literature on SFL. Discussions of SFL, particularly those developed for English academic writing contexts (e.g., Zhan, 2022; Dawson et al., 2024; Dong et al., 2023; Han & Xu, 2021; Teng & Ma, 2024; Yu et al., 2022; Yu & Liu, 2021), were scrutinized and synthesized. After that, a systematic review of SFL scales was conducted, as reported above. These reviews helped to determine the essential components of SEWFL, as shown in Fig. 1 and the discussion of the conceptual framework of SEWFL. Under each component, items were written with reference to related scales, such as those of Zhan (2022), Dong et al. (2023), Teng and Ma (2024), Yu et al. (2022), and Zhang et al. (2023). A total of 48 items were generated to form the item pool of the scale. The initial scale was improved through expert evaluations and students' think-aloud activities. Two feedback specialists were asked to assess the items for face validity, content validity, readability, relevance, and potential bias. Based on their feedback, six items were deleted due to redundancy, for example, the item “I am good at discussing with others about English writing assignments”. In addition, the wording of some items was revised to strengthen their content validity. For example, the item “I am good at judging whether the received comments are reasonable” was changed to “I am good at judging against English writing criteria whether the received comments are reasonable” to make the meaning clearer. Following that, the English version of the scale was translated into Khmer. Eight volunteer Cambodian university students were asked to complete the scale and participate in a think-aloud activity, following the methodology outlined by Koskey (2016), to evaluate item relevance, clarity, and wording consistency.
The scale was then further refined based on the insights gathered from the think-aloud activities.
[See PDF for image]
Fig. 2
The process of developing the SEWFL scale
Construct validation approach for SEWFL scale
A construct validation approach including within-network and between-network examinations was used in this study. Within-network examination uses reliability and confirmatory factor analyses (CFA) to explore the dimensional structure of the scale. This within-network examination can guarantee the construct validity and internal consistency of the scale (Marsh, 2002). Between-network examination performs correlation analyses to investigate the correlations between the scale and other constructs theoretically related to it (Martin, 2007). In this study, we explored the correlation between SEWFL and writing motivation, which helped to ensure the external validity of the SEWFL scale. Learning motivation is closely related to SFL (Han & Xu, 2021). A few studies have demonstrated the correlation between SFL and learning motivation. For example, Leenknecht et al. (2019) found that mastery goals had a stronger association with feedback-seeking behaviour (i.e., feedback competencies in SFL) compared to performance goals. Research by Winstone et al. (2021) revealed that students with mastery-approach goals were more likely to recognise the value of feedback in enhancing their learning (i.e., feedback disposition in SFL). In a recent study, Zhan (2022) demonstrated that students’ intrinsic motivation had a stronger relationship with SFL than extrinsic motivation. Similarly, Zhang et al. (2024) found that intrinsic motivation facilitated SFL development.
Measures
A 42-item scale, organized into eight subdimensions across two dimensions, was developed to investigate SEWFL in peer assessment contexts. Among the eight subdimensions, four relate to students’ feedback competencies for a) eliciting (EL) (6 items), b) processing (PC) (6 items), c) giving (GC) (5 items), and d) enacting (EC) (7 items). The other four concern students’ feedback dispositions in terms of e) appreciation for receiving feedback (AFR) (4 items), f) appreciation for giving feedback (AFG) (4 items), g) readiness to engage (RE) (6 items), and h) commitment to change (CC) (4 items). Sample items are listed in Table 2. The survey items were rated on a 6-point positively packed Likert-type response scale (1 = ‘strongly disagree’, 2 = ‘mostly disagree’, 3 = ‘slightly agree’, 4 = ‘moderately agree’, 5 = ‘mostly agree’ and 6 = ‘strongly agree’). This kind of scale can help reduce the influence of positive conformity and capture a broader range of participant responses than a conventional Likert-type scale (Brown, 2004).
Table 2. Sample items of each subdimension of SEWFL
| SEWFL | Subdimensions | Sample item |
|---|---|---|
| Feedback competencies | Eliciting | I am good at seeking out good English writing exemplars to make sense of the standards of English writing |
| | Processing | I am good at interpreting the comments regarding my writing content and structure |
| | Giving | I am good at judging the quality of others’ writing according to writing criteria |
| | Enacting | I am good at modifying my writing content and structure if necessary |
| Feedback dispositions | Appreciation for giving feedback | I have realized that giving feedback on others’ writing can enable me to learn from their writing strengths and avoid their writing mistakes |
| | Appreciation for receiving feedback | I have realized that others’ feedback can help me to improve my English writing performance |
| | Readiness to engage | I am always ready to take the comments that directly point out my writing mistakes |
| | Commitment to change | I am always willing to spend spare time finding additional English writing resources to refine my writing tasks |
The subdimensions vary in item count (4–7 items) to align with their theoretical scope and operational demands. For instance, ‘Enacting’ (7 items) encompasses broader metacognitive and behavioral processes, whereas ‘Giving’ (5 items) targets specific evaluative skills. All subdimensions exhibited high internal consistency (α > .80), confirming reliability despite differing item numbers. This approach ensures parsimony while comprehensively measuring each construct (DeVellis & Thorpe, 2021).
This study also assessed three dimensions of students’ English writing motivation for the between-network examination: mastery, performance-approach and performance-avoidance goals. The motivation scale was designed and adapted based on the work of Ling et al. (2021). Each dimension consists of 4 items. The mastery goal refers to students’ desire to develop and improve their English writing competence and skills (e.g., “When I am writing, I am trying to master writing strategies as thoroughly as possible”). The performance-approach goal indicates students’ focus on demonstrating their English writing ability and outperforming others (e.g., “When I am writing, I am trying to have my classmates believe I can write well”). The performance-avoidance goal involves students’ intentions to avoid appearing incompetent or performing worse than their peers in English writing (e.g., “When I am writing, I am trying to avoid doing poorly compared to other students in my English writing classes”). Similar to the SEWFL scale, all items were rated on a 6-point positively packed response scale.
Statistical analyses
Prior to conducting the statistical analyses, the following steps were taken to ensure the quality and integrity of the data. First, we examined the dataset for missing values and found that less than 1% of responses were incomplete. These missing values were handled using listwise deletion, as the amount was negligible and unlikely to bias the results. Second, we screened for univariate and multivariate outliers using standardized z-scores (threshold: ±3.29) and Mahalanobis distance (p < .001), respectively. No extreme outliers were identified. Finally, we checked for random or careless responses by analyzing response patterns (e.g., straight-lining or inconsistent answers). No participants were removed, as no such patterns were detected. To assess the normality of the data, we conducted skewness and kurtosis analyses. The absolute values of skewness (< 2) and kurtosis (< 7) for all items fell within acceptable ranges, indicating no severe deviations from normality.
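The screening steps above can be sketched in Python. This is an illustrative sketch on simulated Likert-style responses (the data and variable names are our own, not the study's dataset): univariate outliers are flagged at |z| > 3.29, multivariate outliers via Mahalanobis distance against a chi-square cutoff at p < .001, and normality is screened with the |skewness| < 2 and |kurtosis| < 7 thresholds.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated responses: 500 participants, 5 items on a 1-6 scale (illustrative only)
data = rng.integers(1, 7, size=(500, 5)).astype(float)

# Univariate outliers: |z| > 3.29 (two-tailed p < .001)
z = np.abs(stats.zscore(data, axis=0))
univariate_flags = (z > 3.29).any(axis=1)

# Multivariate outliers: squared Mahalanobis distance vs. chi-square cutoff (p < .001)
diff = data - data.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
cutoff = stats.chi2.ppf(0.999, df=data.shape[1])
multivariate_flags = d2 > cutoff

# Normality screen: |skewness| < 2 and |kurtosis| < 7 for every item
# (note: scipy reports excess kurtosis, i.e. normal distribution = 0)
skew_ok = np.all(np.abs(stats.skew(data, axis=0)) < 2)
kurt_ok = np.all(np.abs(stats.kurtosis(data, axis=0)) < 7)
```

Flagged cases would normally be inspected individually before any deletion decision.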
To check the within-network validity of the SEWFL scale, Cronbach’s α coefficients were first calculated for the eight subdimensions. Confirmatory factor analysis (CFA) of the second-order model was then conducted based on feedback competencies and dispositions to examine the dimensional structure of the scale. Multi-group second-order factor analysis tested whether the eight subdimensions held the same structure across student genders and majors. First, a baseline model without equality constraints was examined. Then, increasingly strict constraints were applied across six additional models. These analyses were conducted separately for gender and major groups.
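As an illustrative sketch of the reliability step, Cronbach's α can be computed directly from an item-response matrix; the data below are simulated and the function name is our own:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix:
    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated 6-item subdimension driven by one common factor plus noise,
# so high internal consistency is expected
rng = np.random.default_rng(1)
factor = rng.normal(size=(300, 1))
items = factor + rng.normal(scale=0.5, size=(300, 6))
alpha = cronbach_alpha(items)
```

A dedicated SEM package (e.g., lavaan in R or semopy in Python) would then be used for the second-order CFA and multi-group invariance models themselves.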
To examine the between-network validity, the entire sample was used to compute the correlations between the eight subdimensions of SEWFL and the students’ writing goals, including their mastery goals, performance-approach goals, and performance-avoidance goals. The uncorrected correlation coefficients may be weakened by measurement error; therefore, the disattenuated correlation coefficients were also calculated in this study, assuming that the correlated constructs were assessed with perfect reliability (Muchinsky, 1996).
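The disattenuation correction divides the observed correlation by the square root of the product of the two measures' reliabilities (Muchinsky, 1996). A minimal sketch, with made-up reliability values for illustration:

```python
import math

def disattenuate(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Correct an observed correlation for measurement unreliability:
    r_corrected = r_observed / sqrt(rel_x * rel_y)."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Hypothetical example: observed r = .60, reliabilities .85 and .80
corrected = disattenuate(0.60, 0.85, 0.80)  # ≈ .73
```

Because the denominator is at most 1, the corrected coefficient is always at least as large as the observed one, matching the pattern in Table 5.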
Results
Descriptive statistics
The descriptive statistics and reliability coefficients for the eight subdimensions of the SEWFL scale are presented in Table 3. All of the α coefficients were above .80, suggesting high reliability for each subdimension. Table 3 also shows the Pearson correlation coefficients between each pair of subdimensions. The correlations range from .522 to .777, indicating moderate to strong positive relationships. Table 3 reveals that the participants’ feedback dispositions were generally higher than their feedback competencies, with the subdimension of readiness to engage having the highest mean (M = 4.032) and the subdimension of giving feedback competencies having the lowest mean (M = 3.418).
Table 3. Descriptive and correlational statistics of the SEWFL scale
| | Alpha | Mean | SD | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. EL | .852 | 3.695 | .943 | - | | | | | | | |
| 2. PC | .857 | 3.494 | .911 | .752** | - | | | | | | |
| 3. GC | .841 | 3.418 | .953 | .652** | .777** | - | | | | | |
| 4. EC | .880 | 3.500 | .903 | .704** | .773** | .776** | - | | | | |
| 5. AFR | .837 | 4.023 | .995 | .639** | .618** | .576** | .653** | - | | | |
| 6. AFG | .830 | 3.989 | .990 | .637** | .613** | .571** | .637** | .805** | - | | |
| 7. RE | .854 | 4.032 | .941 | .596** | .583** | .522** | .612** | .739** | .724** | - | |
| 8. CC | .843 | 3.884 | .991 | .629** | .622** | .565** | .664** | .698** | .681** | .720** | - |
**p < .01
Within-network construct validity
The results supported the validity of the SEWFL scale (χ² = 2452.419, df = 810; χ²/df = 3.028; p < .001; RMSEA = .047; CFI = .927; PNFI = .842) (Fig. 3). The CFI was above .90, the RMSEA was below .08, and the PNFI was above .50, indicating that the second-order CFA supported the scale’s eight-subdimension structure.
[See PDF for image]
Fig. 3
CFA of 42 items of the SEWFL scale
Multi-group CFAs were conducted for male and female students and for students in different majors. As Cheung and Rensvold (2002) suggested, the chi-square difference test may be overly stringent, and a decrease in CFI of more than .01 is an appropriate indicator of a lack of invariance across multiple groups in SEM. Given the large sample size of this study, we used changes in CFI as an indicator of the differences between models.
The first multi-group CFA was designed to examine whether the structure of the SEWFL scale was consistent across male and female participants (see Table 4). The results indicated that all seven models demonstrated a good fit, with RMSEA values less than .08, PNFI values greater than .50, and CFI values greater than .90. First, no equality constraints were imposed in the baseline model (M1). Equality constraints were then imposed on the measurement weights (M2), resulting in no change in CFI (ΔCFI = .000), indicating the consistency of the measurement weights across male and female students. Further constraints were imposed on the measurement weights and measurement intercepts (M3), with a minimal change in CFI (ΔCFI = .002), which is less than .01. Subsequently, equality constraints were imposed on the measurement weights, measurement intercepts, and structural covariances (M4), with the difference in CFI between M3 and M4 being .000. Additional constraints were applied in models M5, M6, and M7, with changes in CFI all less than .01. In conclusion, these results suggest the consistency of the structure of the SEWFL across male and female students.
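The ΔCFI decision rule can be expressed as a small check over the CFI values of successive nested models; the CFI values below are taken from the gender rows of Table 4, while the function itself is our own illustration:

```python
def invariance_supported(cfi_values, threshold=0.01):
    """Cheung & Rensvold's criterion: measurement invariance is supported
    when no pair of successive nested models differs in CFI by more than
    the threshold (default .01). Returns the decision and the deltas."""
    deltas = [round(abs(a - b), 3) for a, b in zip(cfi_values, cfi_values[1:])]
    return all(d <= threshold for d in deltas), deltas

# CFI values for models M1-M7 (gender invariance, Table 4)
gender_cfi = [0.911, 0.911, 0.909, 0.909, 0.908, 0.908, 0.907]
supported, deltas = invariance_supported(gender_cfi)
# deltas: [0.0, 0.002, 0.0, 0.001, 0.0, 0.001] -> invariance supported
```

Every successive ΔCFI stays at or below .01, matching the conclusion of structural consistency across genders.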
Table 4. Invariance tests across students differing in gender and major
| Model | χ² | df | χ²/df | p | RMSEA | PNFI | CFI | ΔCFI | Δχ² | Δdf |
|---|---|---|---|---|---|---|---|---|---|---|
| Invariance across males and females | | | | | | | | | | |
| M1 Baseline model (no constraints imposed) | 3641.669 | 1620 | 2.248 | < .001 | .037 | .801 | .911 | - | - | - |
| M2 Invariant measurement weights | 3687.719 | 1654 | 2.230 | < .001 | .036 | .816 | .911 | .000 | 46.050 | 34 |
| M3 Invariant measurement intercepts | 3776.151 | 1696 | 2.227 | < .001 | .036 | .833 | .909 | .002 | 88.432 | 42 |
| M4 Invariant structural weights | 3776.686 | 1702 | 2.219 | < .001 | .036 | .836 | .909 | .000 | 0.535 | 6 |
| M5 Invariant structural covariances | 3789.416 | 1705 | 2.223 | < .001 | .036 | .837 | .908 | .001 | 12.730 | 3 |
| M6 Invariant structural residuals | 3817.447 | 1713 | 2.229 | < .001 | .036 | .840 | .908 | .000 | 28.031 | 8 |
| M7 Invariant measurement residuals | 3865.926 | 1755 | 2.203 | < .001 | .036 | .858 | .907 | .001 | 48.479 | 42 |
| Invariance across business major and other majors | | | | | | | | | | |
| M1 Baseline model (no constraints imposed) | 3730.775 | 1620 | 2.303 | < .001 | .038 | .799 | .908 | - | - | - |
| M2 Invariant measurement weights | 3780.473 | 1654 | 2.286 | < .001 | .037 | .814 | .907 | .001 | 49.698 | 34 |
| M3 Invariant measurement intercepts | 3853.597 | 1696 | 2.272 | < .001 | .037 | .831 | .906 | .001 | 73.124 | 42 |
| M4 Invariant structural weights | 3857.087 | 1702 | 2.266 | < .001 | .037 | .834 | .906 | .000 | 3.490 | 6 |
| M5 Invariant structural covariances | 3865.496 | 1705 | 2.267 | < .001 | .037 | .835 | .906 | .000 | 8.409 | 3 |
| M6 Invariant structural residuals | 3879.392 | 1713 | 2.265 | < .001 | .037 | .839 | .906 | .000 | 13.896 | 8 |
| M7 Invariant measurement residuals | 3965.697 | 1755 | 2.260 | < .001 | .037 | .856 | .904 | .002 | 86.305 | 42 |
The second multi-group CFA compared business majors with non-business majors (see Table 4). The results indicated that all seven models demonstrated a good fit, with RMSEA values below .08, PNFI values above .50, and CFI values above .90. Additionally, the changes in CFI between each pair of contiguous models were all below .01. These results suggest that the structure of the SEWFL scale is consistent across business and non-business majors.
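The ΔCFI decision rule applied in the invariance tests above (Cheung & Rensvold, 2002) can be sketched in a few lines. The CFI values are those reported for the gender comparison in Table 4; the helper name `cfi_drops` is illustrative, not part of any statistical package:

```python
# CFI values for models M1-M7 (gender comparison, Table 4).
cfi_gender = [0.911, 0.911, 0.909, 0.909, 0.908, 0.908, 0.907]

def cfi_drops(cfi_values):
    """Change in CFI between each pair of successively constrained models."""
    return [prev - curr for prev, curr in zip(cfi_values, cfi_values[1:])]

# Invariance is retained when every step loses less than .01 CFI.
drops = cfi_drops(cfi_gender)
invariance_holds = all(d < 0.01 for d in drops)
print(invariance_holds)  # → True: every change in CFI falls below the .01 criterion
```

Because each successive model is nested in the previous one, a drop below .01 at every step supports treating the more constrained model as equivalent across groups.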
Between-network construct validity
To establish between-network construct validity, this study investigated the relationships between the eight subdimensions of the SEWFL scale and three dimensions of students’ English writing goals. All eight subdimensions were significantly positively correlated with the three types of writing goals (see Table 5). The Pearson correlation coefficients ranged from .439 to .715, while the disattenuated correlation coefficients ranged from .550 to .890. As indicated in Table 5, SEWFL correlated more strongly with students’ mastery goal (average Pearson’s r = .637; average disattenuated r = .760) than with their performance-approach goal (average Pearson’s r = .540; average disattenuated r = .645) and performance-avoidance goal (average Pearson’s r = .492; average disattenuated r = .582).
Table 5. Zero-order correlations among SEWFL and students’ English writing goals
| Subdimension | Mastery: Pearson r | Mastery: disattenuated r | Performance-approach: Pearson r | Performance-approach: disattenuated r | Performance-avoidance: Pearson r | Performance-avoidance: disattenuated r | Mean: Pearson r | Mean: disattenuated r |
|---|---|---|---|---|---|---|---|---|
| 1. EL | .619** | .772** | .502** | .625** | .474** | .569** | .532** | .655** |
| 2. PC | .623** | .703** | .551** | .666** | .495** | .621** | .553** | .663** |
| 3. GC | .583** | .746** | .552** | .660** | .525** | .587** | .556** | .664** |
| 4. EC | .653** | .758** | .579** | .588** | .552** | .510** | .531** | .619** |
| 5. AFR | .650** | .771** | .504** | .684** | .439** | .650** | .595** | .702** |
| 6. AFG | .656** | .780** | .531** | .633** | .491** | .585** | .559** | .666** |
| 7. RE | .659** | .774** | .532** | .629** | .455** | .537** | .547** | .647** |
| 8. CC | .715** | .777** | .571** | .676** | .504** | .597** | .577** | .683** |
| Mean | .637** | .760** | .540** | .645** | .492** | .582** | .556** | .662** |

** p < .01
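The disattenuated coefficients in Table 5 follow the standard correction for attenuation (Muchinsky, 1996), in which an observed correlation is divided by the square root of the product of the two measures’ reliabilities. A minimal sketch, using illustrative reliability values of .81 rather than the study’s actual reliabilities:

```python
import math

def disattenuate(r_xy, rel_x, rel_y):
    """Correct an observed correlation for measurement unreliability."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Observed r = .619 (EL with mastery goals, Table 5); reliabilities of .81
# are assumed for illustration only.
r_true = disattenuate(0.619, 0.81, 0.81)
print(round(r_true, 3))  # → 0.764 under these assumed reliabilities
```

Because reliabilities are at most 1, the corrected coefficient is always at least as large in magnitude as the observed one; the actual corrected values depend on the scales’ reliability estimates.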
Discussion
The SEWFL scale was validated across students of different genders and majors. The Cronbach’s α coefficients for all eight subdimensions exceeded .80, indicating good reliability. The CFA results supported the proposed second-order, eight-dimension structure of the scale. Between-network testing showed that the eight subdimensions were significantly linked to students’ writing goals, with the strongest connection to mastery goals. This aligns with studies that have demonstrated a positive relationship between SFL and learning motivation (e.g., Zhan, 2022; Leenknecht et al., 2019; Winstone et al., 2021; Zhang et al., 2024).
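The reliability criterion invoked above (Cronbach’s α above .80 for every subdimension) can be illustrated with a minimal computation; the item ratings below are hypothetical, not the study’s data:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """item_scores: one list of respondent ratings per item (equal lengths)."""
    k = len(item_scores)
    total_scores = [sum(ratings) for ratings in zip(*item_scores)]
    item_var_sum = sum(variance(ratings) for ratings in item_scores)
    return k / (k - 1) * (1 - item_var_sum / variance(total_scores))

# Hypothetical 5-point ratings: four items answered by six respondents.
items = [
    [5, 4, 4, 3, 5, 4],
    [5, 4, 5, 3, 4, 4],
    [4, 4, 4, 2, 5, 4],
    [5, 3, 4, 3, 5, 5],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # → 0.88, above the .80 benchmark for good reliability
```

Alpha rises as items covary more strongly relative to their individual variances, which is why it is read as internal consistency within each subdimension.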
While most existing SFL scales measure feedback competencies and dispositions in general education contexts (e.g., Zhan, 2022; Dawson et al., 2024; Song, 2022), the SEWFL scale stands out by focusing specifically on the English academic writing context. It was designed to address corrective feedback practices in English academic writing and the metacognitive skills required for a process-oriented writing approach. It emphasizes students’ ability to deeply process global feedback, such as feedback on content and organization, as well as their ability to provide such feedback. These corrective-feedback-specific competencies are not explicitly addressed in existing scales for English writing (e.g., Dong et al., 2023; Teng & Ma, 2024; Yu et al., 2022; Zhang et al., 2023). The scale also highlights metacognitive skills, aligning with the metacognition-based SFL scale developed by Teng and Ma (2024). Additionally, it incorporates management skills, such as managing writing anxiety and time, which are closely linked to metacognitive strategies.
Contextualized within peer assessment, the SEWFL scale recognizes students’ reciprocal roles, requiring both giving and receiving feedback. While Dong et al. (2023) included skills for giving feedback, their scale did not sufficiently address students’ ability to process received feedback. The SEWFL scale bridges this gap by equally emphasizing both skills. It also highlights students’ dispositions, such as appreciation for giving feedback and commitment to change, which were overlooked by Dong et al. (2023).
The SEWFL survey revealed that Cambodian university students demonstrated stronger feedback dispositions than competencies, a finding consistent with Zhan et al.’s (2025) results. Specifically, participants’ appreciation for giving feedback was lower than their appreciation for receiving feedback. Some studies have shown that students welcome receiving peer feedback to refine their work but are less aware of the benefits of giving feedback to peers (Zhan, 2024; Van Popta et al., 2017). Students’ readiness to engage in peer assessment was the highest among the disposition subdimensions, suggesting that peers’ equal status in learning helped overcome the emotional resistance that often arises within the hierarchical teacher-student structure (Zhan, 2019). In terms of competencies, participants’ ability to give feedback was the weakest, likely due to teacher-centered feedback practices in Cambodian universities, which limit opportunities for students to provide feedback (Zhan et al., 2025; Ngoun, 2013). In addition, traditional Cambodian culture emphasizes hierarchy, reciprocity, and social harmony, heavily influenced by Buddhist values (Berkvens, 2017). Students may therefore feel uncomfortable critiquing others, which can suppress their role as feedback givers and further hinder their development of the ability to provide constructive feedback.
Conclusion and implications
The development and validation of the SEWFL scale significantly advance research and practice in peer assessment and SFL in English academic writing. Peer assessment is a widely used strategy in academic writing, but its effectiveness depends on students’ ability to give and receive feedback (Ketonen et al., 2020). The SEWFL scale provides a measurable framework to study how SEWFL influences peer assessment quality, such as whether higher SEWFL leads to more constructive feedback or better revision outcomes. It also enables researchers to explore the relationship between SEWFL and factors like writing motivation, self-regulation, and proficiency, as well as conduct cross-cultural and longitudinal studies. For educators and students, the SEWFL scale serves as a diagnostic tool to identify deficiencies in SEWFL. For example, this study found that participants’ ability to give feedback was the weakest, suggesting a need for targeted interventions. Students can also use the scale for self-assessment, fostering a deeper understanding of their SEWFL and promoting continuous improvement in academic writing.
However, the scale has limitations. Designed for the Cambodian context, it requires careful adaptation before being applied in other cultural settings. Additionally, it may not fully capture the theoretical complexity of SEWFL, such as students’ knowledge of academic writing. Moreover, the scale measures perceived feedback competencies rather than actual behaviors; future studies could develop behavioral observation schemes to complement self-reported data. Finally, although this study examined how the SEWFL scale relates to different writing goals, it did not examine how SEWFL connects to actual writing processes or outcomes, such as feedback quality, revisions, or scores. Future research should explore these areas to further validate the SEWFL scale and assess its usefulness in predicting and improving specific writing behaviors and outcomes.
In summary, the SEWFL scale underscores the importance of discipline and context in framing SFL. It provides a foundation for advancing empirical research on SEWFL and informing evidence-based interventions in peer assessment for English academic writing.
Authors’ contributions
Ying Zhan: conceptualisation, methodology, data analysis, writing (original draft), review and editing; Zhi Hong Wan: methodology, data analysis, review and editing; Nangsamith Each: data collection and analysis.
Funding
Not applicable.
Data availability
No datasets were generated or analysed during the current study.
Declarations
Ethics approval and consent to participate
This study was conducted in accordance with the Declaration of Helsinki and approved by the Human Research Ethics Committee, The Education University of Hong Kong (No.2023–2024-0560). All participants gave informed consent to participate in this study.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Abbreviations
AFG: Appreciation for giving feedback
AFR: Appreciation for receiving feedback
CC: Commitment to change
CFA: Confirmatory factor analysis
EFL: English as a foreign language
EC: Enacting
EL: Eliciting
GC: Giving
PC: Processing
RE: Readiness to engage
SEWFL: Student English writing feedback literacy
SFL: Student feedback literacy
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
Berkvens, JB. The importance of understanding culture when improving education: Learning from Cambodia. International Education Studies; 2017; 10,
Brown, GT. Measuring attitude with positively packed self-report ratings: Comparison of agreement and frequency scales. Psychological Reports; 2004; 94,
Carless, D; Boud, D. The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education; 2018; 43,
Chen, H; Tang, S; Zhang, S; Xu, J; Wang, G. Development and validation of the high school students’ mathematics discourse feedback skills scale (MDFSS). Current Psychology; 2024; 43,
Cheung, GW; Rensvold, RB. Evaluating goodness-of-fit indexes for testing measurement invariance. Structural Equation Modeling; 2002; 9,
Conijn, R; Speltz, ED; Zaanen, MV; Waes, LV; Chukharev-Hudilainen, E. A product-and process-oriented tagset for revisions in writing. Written Communication; 2022; 39, pp. 97-128. [DOI: https://dx.doi.org/10.1177/07410883211052104]
Dawson, P; Carless, D; Lee, PPW. Authentic feedback: Supporting learners to engage in disciplinary feedback practices. Assessment & Evaluation in Higher Education; 2021; 46,
Dawson, P; Yan, Z; Lipnevich, A; Tai, J; Boud, D; Mahoney, P. Measuring what learners do in feedback: The feedback literacy behaviour scale. Assessment & Evaluation in Higher Education; 2024; 49,
DeVellis, R. F., & Thorpe, C. T. (2021). Scale development: Theory and applications. Sage publications.
Dong, Z; Gao, Y; Schunn, CD. Assessing students’ peer feedback literacy in writing: Scale development and validation. Assessment & Evaluation in Higher Education; 2023; 48,
Etikan, I; Musa, SA; Alkassim, RS. Comparison of convenience sampling and purposive sampling. American Journal of Theoretical and Applied Statistics; 2016; 5,
Han, Y., & Xu, Y. (2020). The development of student feedback literacy: the influences of teacher feedback on peer feedback. Assessment & Evaluation in Higher Education, 45(5), 680-696.
Han, Y; Xu, Y. Student feedback literacy and engagement with feedback: A case study of Chinese undergraduate students. Teaching in Higher Education; 2021; 26,
Harris, K; Santangelo, T; Graham, S. Waters, HS; Schneider, W. Metacognition and strategies instruction in writing. Metacognition, strategy use, and instruction; 2010; Guilford: pp. 226-256.
Hyland, K., & Hyland, F. (2019). Contexts and issues in feedback on L2 writing. In K. Hyland & F. Hyland (Eds.), Feedback in second language writing: Contexts and issues (pp.1–22). Cambridge University Press. https://doi.org/10.1017/9781108635547.003
Ketonen, L; Nieminen, P; Hähkiöniemi, M. The development of secondary students’ feedback literacy: Peer assessment as an intervention. The Journal of Educational Research; 2020; 113,
Koskey, KL. Using the cognitive pretesting method to gain insight into participants’ experiences: An illustration and methodological reflection. International Journal of Qualitative Methods; 2016; 15,
Lee, I. Research into practice: Written corrective feedback. Language Teaching; 2013; 46,
Leenknecht, M; Hompus, P; van der Schaaf, M. Feedback seeking behaviour in higher education: The association with students’ goal orientation and deep learning approach. Assessment & Evaluation in Higher Education; 2019; 44,
Li, L; Liu, X; Zhou, Y. Give and take: A re-analysis of assessor and assessee's roles in technology-facilitated peer assessment. British Journal of Educational Technology; 2012; 43,
Ling, G; Elliot, N; Burstein, JC; McCaffrey, DF; MacArthur, CA; Holtzman, S. Writing motivation: A validation study of self-judgment and performance. Assessing Writing; 2021; 48, 100509. [DOI: https://dx.doi.org/10.1016/j.asw.2020.100509]
Malecka, B; Boud, D; Carless, D. Eliciting, processing and enacting feedback: Mechanisms for embedding student feedback literacy within the curriculum. Teaching in Higher Education; 2022; 27,
Marsh, H. W. (2002). A multidimensional physical self-concept: a construct validity approach to theory, measurement, and research. Psychology: The Journal of the Hellenic Psychological Society, 9, 459-493
Martin, AJ. Examining a multidimensional model of student motivation and engagement using a construct validation approach. British Journal of Educational Psychology; 2007; 77, pp. 413-440. [DOI: https://dx.doi.org/10.1348/000709906X118036]
Molloy, E; Boud, D; Henderson, M. Developing a learning-centred framework for feedback literacy. Assessment & Evaluation in Higher Education; 2020; 45,
Muchinsky, PM. The correction for attenuation. Educational and Psychological Measurement; 1996; 56,
Ngoun, S. (2013). Assessment practices in a Cambodian university: Through the lens of lecturers and students [Master’s thesis, The Victoria University of Wellington]. https://researcharchive.vuw.ac.nz/xmlui/bitstream/handle/10063/2786/thesis.pdf?sequence=2
Nieminen, JH; Carless, D. Feedback literacy: A critical review of an emerging concept. Higher Education; 2023; 85,
Sarré, C; Grosbois, M; Brudermann, C. Fostering accuracy in L2 writing: Impact of different types of corrective feedback in an experimental blended learning EFL course. Computer Assisted Language Learning; 2021; 34,
Song, BK. Bifactor modelling of the psychological constructs of learner feedback literacy: Conceptions of feedback, feedback trust and self-efficacy. Assessment & Evaluation in Higher Education; 2022; 47,
Sutton, P. Conceptualizing feedback literacy: Knowing, being, and acting. Innovations in Education and Teaching International; 2012; 49,
Teng, MF; Ma, M. Assessing metacognition-based student feedback literacy for academic writing. Assessing Writing; 2024; 59, 100811. [DOI: https://dx.doi.org/10.1016/j.asw.2024.100811]
Teng, L. S. (2022). Self-regulated learning and second language writing: Fostering strategic language learners. Springer Nature.
Van Beuningen, C. Corrective feedback in L2 writing: Theoretical perspectives, empirical insights, and future directions. International Journal of English Studies; 2010; 10,
Van Popta, E; Kral, M; Camp, G; Martens, RL; Simons, PRJ. Exploring the value of peer feedback in online learning for the provider. Educational Research Review; 2017; 20, pp. 24-34. [DOI: https://dx.doi.org/10.1016/j.edurev.2016.10.003]
Van der Kleij, FM; Adie, LE; Cumming, JJ. A meta-review of the student role in feedback. International Journal of Educational Research; 2019; 98, pp. 303-323. [DOI: https://dx.doi.org/10.1016/j.ijer.2019.09.005]
Weidlich, J., Jivet, I., Woitt, S., Orhan Göksün, D., Kraus, J., & Drachsler, H. (2025). The student feedback literacy instrument (SFLI): multilingual validation and introduction of a short-form version. Assessment & Evaluation in Higher Education, 1–17. https://doi.org/10.1080/02602938.2025.2451729
Winstone, NE; Hepper, EG; Nash, RA. Individual differences in self-reported use of assessment feedback: The mediating role of feedback beliefs. Educational Psychology; 2021; 41,
Winstone, NE; Balloo, K; Carless, D. Discipline-specific feedback literacies: A framework for curriculum design. Higher Education; 2022; 83,
Woitt, S; Weidlich, J; Jivet, I; OrhanGöksün, D; Drachsler, H; Kalz, M. Students’ feedback literacy in higher education: An initial scale validation study. Teaching in Higher Education; 2023; 30,
Yildiz, H; Bozpolat, E; Hazar, E. Feedback literacy scale: A study of validation and reliability. International Journal of Eurasian Education and Culture; 2022; 7,
Yu, S; Lee, I. Peer feedback in second language writing (2005–2014). Language Teaching; 2016; 49, pp. 461-493. [DOI: https://dx.doi.org/10.1017/S0261444816000161]
Yu, S; Liu, C. Improving student feedback literacy in academic writing: An evidence-based framework. Assessing Writing; 2021; 48, 100525. [DOI: https://dx.doi.org/10.1016/j.asw.2021.100525]
Yu, S; Zhang, DE; Liu, C. Assessing L2 student writing feedback literacy: A scale development and validation study. Assessing Writing; 2022; 53, 100643. [DOI: https://dx.doi.org/10.1016/j.asw.2022.100643]
Zhan, Y. (2019). Conventional or sustainable? Chinese university students’ thinking about feedback used in their English lessons. Assessment & Evaluation in Higher Education, 44(7), 973–86. https://doi.org/10.1080/02602938.2018.1557105
Zhan, Y. (2022). Developing and validating a student feedback literacy scale. Assessment & Evaluation in Higher Education, 47(7), 1087-1100. https://doi.org/10.1080/02602938.2021.2001430
Zhan, Y. (2023). What do college students think of feedback literacy? An ecological interpretation of Hong Kong students’ perspectives. Assessment & Evaluation in Higher Education, 48(5), 686–700. https://doi.org/10.1080/02602938.2022.2121380
Zhan, Y. (2024). Are they ready? An investigation of university students’ difficulties in peer assessment from dual perspectives. Teaching in Higher Education, 29(4), 823-840. https://doi.org/10.1080/13562517.2021.2021393
Zhan, Y., Wan, Z. H., & Khon, M. (2025). What predicts undergraduates’ student feedback literacy? Impacts of epistemic beliefs and mediation of critical thinking. Teaching in Higher Education, 30(4), 843-861. https://doi.org/10.1080/13562517.2023.2280268
Zhang, T; Mao, Z. Exploring the development of student feedback literacy in the second language writing classroom. Assessing Writing; 2023; 55, 100697. [DOI: https://dx.doi.org/10.1016/j.asw.2023.100697]
Zhang, ED; Zhou, N; Yu, S. Assessing L2 secondary student writing feedback literacy and its predictive effect on their L2 writing performance. Language Teaching Research; 2023; [DOI: https://dx.doi.org/10.1177/13621688231217665]
Zhang, S; Xu, J; Chen, H; Jiang, L; Yi, X. Influence of teacher autonomy support in feedback on high school students’ feedback literacy: The multiple mediating effects of basic psychological needs and intrinsic motivation. Frontiers in Psychology; 2024; 15, 1411082. [DOI: https://dx.doi.org/10.3389/fpsyg.2024.1411082]
Zheng, Y; Yu, S. Student engagement with teacher written corrective feedback in EFL writing: A case study of Chinese lower-proficiency students. Assessing Writing; 2018; 37, pp. 13-24. [DOI: https://dx.doi.org/10.1016/j.asw.2018.03.001]
© The Author(s) 2025. This work is published under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (http://creativecommons.org/licenses/by-nc-nd/4.0/).