Formative assessment plays a central role in supporting learners’ progress by providing continuous feedback during the learning process. This study investigates the effect of formative assessment strategies on the writing accuracy of 41 second-year EFL students at Mizan Tepi University, Ethiopia. Using pre- and post-tests, the study measured changes in four key areas of writing accuracy: grammar, punctuation, spelling, and sentence structure. The results revealed statistically significant improvements across all aspects, with mean score increases of + 11.6 in grammar (p < 0.001), + 13.3 in punctuation (p < 0.001), + 12.8 in spelling (p < 0.001), and + 12.3 in sentence structure (p < 0.001). Regression analysis further confirmed that each writing component significantly predicted overall writing accuracy (p < 0.001). Qualitative findings from interviews and classroom observations highlighted that timely, clear feedback, peer collaboration, and self-reflection boosted learner confidence and engagement. These findings demonstrate that structured formative assessment, especially when it integrates teacher feedback, peer review, and self-assessment, substantially enhances writing proficiency. The study recommends sustained professional development for instructors to ensure consistent implementation and maximize learning gains.
Introduction
Writing is a fundamental skill in language learning and an essential component of academic success, particularly in English as a Foreign Language (EFL) contexts (Harklau, 2002). Among the four language skills (listening, speaking, reading, and writing), writing is frequently regarded as the most demanding, as it requires the simultaneous control of multiple linguistic elements such as grammar, vocabulary, spelling, punctuation, and syntax, while also sustaining coherence and clarity (Safari & Ahmadi, 2025). In higher education, where English is used as the primary medium of academic communication, the ability to produce accurate written texts is closely linked to students’ academic achievement, professional growth, and participation in global knowledge exchange. In this study, writing accuracy is defined as the correct, consistent, and context-appropriate use of linguistic forms in written communication, including grammar, sentence structure, punctuation, spelling, and word choice. This definition positions accuracy not merely as the absence of errors, but as a demonstration of controlled and meaningful language use that supports communicative clarity.
A long-standing debate in the field of writing pedagogy concerns the balance between accuracy and fluency. Accuracy emphasizes grammatical correctness and formal precision, while fluency prioritizes the continuous flow of ideas, even if errors occur (Beck, 2000). Although both components are necessary for effective writing, teaching practices often prioritize one at the expense of the other. Overemphasis on accuracy may discourage learners from experimenting with complex expressions due to fear of making mistakes, whereas fluency-driven approaches may result in repeated, fossilized errors that weaken written communication. Formative assessment has been suggested as a pedagogical approach capable of bridging this divide, as it supports both the refinement of linguistic forms and the natural development of idea expression through cycles of feedback, revision, and reflection. This dual function underlines the relevance of formative assessment in addressing the interconnectedness of accuracy and fluency in EFL writing.
Over the past three decades, there has been a noticeable shift in language assessment practices from product-oriented, summative models toward more diagnostic and learning-centered approaches (Van der Kleij et al., 2015). Summative assessment, typically administered at the end of a learning period, evaluates achievement but does not necessarily support learning progress. In contrast, formative assessment, also referred to as assessment for learning, provides continuous feedback intended to guide instruction and help learners revise and improve their work (Knight, 2002; Clark, 2012). Formative strategies may include teacher feedback, peer and self-assessment, guided revision, use of rubrics, conferencing, and reflective checklists. In the context of writing instruction, formative assessment promotes a process-oriented approach in which drafting, revising, editing, and reflecting are viewed as essential stages of learning rather than as preparation for a final product (Teng, 2022).
Research demonstrates that formative assessment can significantly improve students’ writing performance and foster learner autonomy. Burner (2016) argues that formative assessment enhances learners’ sense of ownership and responsibility over their writing development. Dmitrenko et al. (2021) add that peer and self-assessment increase metacognitive awareness, enabling learners to identify patterns in their errors and gradually internalize accurate language use. However, despite this growing body of research, relatively few studies focus specifically on writing accuracy as a distinct outcome variable. Much of the existing literature examines general writing performance, motivation, or attitudes toward feedback, leaving the specific contribution of formative assessment to grammatical and mechanical accuracy underexplored. This gap is especially evident in African EFL contexts, where research on formative strategies remains limited compared to Western or Asian settings (Abdullahi, 2024). In the Ethiopian university context, writing instruction remains predominantly product-oriented, with emphasis placed on final correctness rather than the developmental stages of writing. Existing classroom practices still rely heavily on teacher-centered instruction, grammar drills, and summative testing, with limited opportunities for revision, peer interaction, or constructive feedback (Taye & Mengesha, 2024). Constraints such as large class sizes, limited professional development, and institutional pressure to cover dense syllabi further restrict the practical implementation of formative assessment. As a result, many students produce writing that is mechanically correct only in isolated contexts, without developing long-term accuracy or sustained confidence in writing.
The present study responds to these gaps by examining how formative assessment strategies, specifically teacher feedback, peer assessment, and guided revision, affect the grammatical and mechanical accuracy of EFL students’ academic writing at the tertiary level in Ethiopia. Rather than restating the aim already presented in the abstract, this introduction positions the research within broader pedagogical, theoretical, and contextual discussions, emphasizing the relevance of formative assessment to accuracy development in resource-limited learning environments. The significance of the study is both theoretical and practical. Theoretically, it is grounded in sociocultural and constructivist perspectives, which view learning as a mediated process shaped by interaction, scaffolding, and collaborative meaning-making (Lantolf et al., 2014). Formative assessment aligns with these perspectives because it situates learning within ongoing dialogue, reflective self-correction, and shared responsibility for progress.
Practically, the study has the potential to inform instructional practices by demonstrating how writing accuracy can be improved without sacrificing fluency or creativity, especially when learners are supported through structured feedback cycles rather than one-time correction (Usó-Juan et al., 2006). It also offers context-sensitive insights for teachers, curriculum designers, ELT practitioners, and policymakers working in higher education systems with similar linguistic and institutional challenges. Furthermore, the study contributes to the limited but growing body of African-based EFL research by situating its findings within the realities of Ethiopian universities, where English is both a subject of study and a medium of academic instruction. In doing so, it highlights the importance of designing assessment policies that are not only pedagogically sound but also feasible within local classroom conditions. If formative assessment proves effective in enhancing writing accuracy under these circumstances, it may provide a replicable model for other multilingual, resource-constrained educational settings. To guide the investigation, the study is framed by the following research questions:
To what extent do formative assessment strategies influence the writing accuracy of EFL learners at the tertiary level?
What contextual factors affect the effectiveness of formative assessment strategies in improving learners’ writing accuracy in Ethiopia?
Literature review
Theoretical background of language assessment
Language assessment is a pivotal aspect of the EFL teaching-learning process, serving dual functions: evaluating learners’ language proficiency and informing instructional design (Purpura, 2016). Over the years, language assessment theories have evolved from static, summative models toward more dynamic, formative frameworks that integrate assessment into the learning process itself (Cumming, 2009). This evolution reflects a growing recognition that learners benefit not only from evaluation of final performance but also from structured feedback and opportunities for iterative improvement. Sociocultural theory (Vygotsky, 1978) emphasizes that learning occurs through social interaction and collaboration, with more knowledgeable peers or instructors scaffolding learners to accomplish tasks they cannot achieve independently. Formative assessment operationalizes this principle by embedding feedback and guidance within the learning cycle, enabling learners to develop proficiency incrementally. Dynamic assessment, an approach aligned with sociocultural theory, focuses specifically on learners’ potential for future growth rather than solely on their current performance (Shrestha, 2020). By combining instruction and assessment, dynamic assessment encourages active engagement, metacognition, and reflection, all of which contribute to meaningful learning. In writing instruction, these approaches suggest that errors are not merely failures to be corrected but learning opportunities that reveal gaps in understanding and provide pathways for improvement.
Furthermore, constructivist perspectives in language learning stress that knowledge is actively constructed by the learner rather than passively absorbed (Lee et al., 2006). Formative assessment aligns with this philosophy by promoting reflective and iterative learning, where drafting, revising, and receiving feedback are integral parts of the knowledge construction process. Feedback within formative assessment not only corrects errors; it enables students to internalize rules, understand patterns, and gradually gain autonomy in their writing. In tertiary EFL contexts, where learners face increasingly complex academic tasks, such scaffolding is crucial to support both skill development and learner confidence. For Ethiopian EFL learners, the theoretical principles of dynamic and formative assessment suggest that systematic feedback can compensate for gaps in prior instruction and exposure, creating a more equitable and effective learning environment.
Formative vs. summative assessment
Assessment in education has traditionally been divided into two categories: summative assessment and formative assessment (Dolin et al., 2017). Summative assessment evaluates learners’ overall performance at the end of a course or module, often through standardized exams, term papers, or final projects, and is primarily used for grading or certification (Bhat & Bhat, 2019). While useful for accountability and benchmarking, summative assessment provides little guidance for ongoing improvement and often fails to capture the nuances of learner development over time. It can also encourage superficial learning, where students prioritize grades over skill mastery. In contrast, formative assessment is an ongoing, iterative process that actively informs instruction, fosters self-regulation, and promotes reflective practice (Rodrigues & Oliveira, 2014). Techniques include peer assessment, self-assessment, quizzes, oral presentations, teacher feedback on drafts, and the use of rubrics (Xiao & Yang, 2019). By providing structured, actionable feedback, formative assessment simultaneously addresses accuracy and fluency in writing, allowing learners to refine language forms while expressing ideas coherently.
Research shows that formative assessment enhances learner motivation, autonomy, and self-regulation (Clark, 2012; Yan et al., 2021). For example, students who receive iterative feedback engage more deeply with their work, identifying weaknesses and implementing strategies for improvement. This contrasts with summative assessment, which often fails to influence the learning process until it is too late to adjust outcomes. Nevertheless, implementing formative assessment is not without challenges. In tertiary EFL contexts, large class sizes, limited instructional time, and a lack of teacher training present significant obstacles to consistent feedback (Broadbent et al., 2018). Additionally, some instructors may rely on traditional teacher-centered methods, making the integration of learner-centered formative practices more difficult (Taye & Mengesha, 2024). Understanding these constraints is crucial for designing effective interventions, as the success of formative assessment is contingent on contextually informed strategies that balance feasibility with pedagogical effectiveness.
Writing accuracy in EFL contexts
Writing accuracy refers to the correct, consistent, and contextually appropriate use of language forms, including grammar, punctuation, spelling, vocabulary, and syntax. Accuracy ensures clarity, coherence, and credibility in written communication, particularly in academic contexts where precision is essential (Barasa, 2024; Bruton, 2009). For EFL learners, accuracy is often difficult to achieve due to structural differences between English and learners’ first languages (Crossley & McNamara, 2014). While fluency emphasizes flow, coherence, and idea development, accuracy focuses on linguistic correctness, creating a pedagogical tension between the two objectives. Overemphasis on accuracy can inhibit students’ willingness to take risks in writing, limiting creativity and expression. Conversely, prioritizing fluency at the expense of correctness may reinforce errors and fossilize poor language habits. Formative assessment strategies can reconcile this tension by offering iterative, scaffolded feedback that addresses errors while supporting idea generation and coherent expression.
In Ethiopian tertiary EFL contexts, writing accuracy is particularly challenging. Many learners enter university with limited vocabulary, inadequate grammar foundations, and insufficient exposure to academic writing norms (Taye & Mengesha, 2024). Traditional assessment practices emphasize summative evaluation, grammar drills, and error correction, often neglecting iterative processes such as drafting, peer review, and self-assessment (Oli, 2024). Consequently, students may produce fluent but inaccurate texts, highlighting the need for interventions that integrate both accuracy and fluency considerations. Formative assessment strategies, when adapted to this context, can address these gaps by providing continuous guidance, targeted feedback, and opportunities for revision (Oli, 2024). By fostering metacognitive awareness, these strategies empower learners to monitor their own progress, internalize rules, and gradually improve both linguistic accuracy and expressive competence.
Role of formative feedback
Formative feedback is central to the development of writing accuracy (Yu et al., 2021). Research demonstrates that timely, specific, and targeted feedback enables learners to identify, understand, and correct errors effectively (Chandler, 2003; Liu, 2008). Feedback focusing on particular linguistic features, such as verb tense, article usage, or sentence structure, allows students to target weaknesses systematically, enhancing both accuracy and fluency over time (Parr & Timperley, 2010). Peer and self-assessment complement teacher feedback by fostering critical thinking, reflective learning, and collaborative skill development (Fathi et al., 2021; Trujillo, 2009). While peer feedback encourages learners to evaluate language use critically, it can be limited by students’ own proficiency, necessitating the presence of teacher guidance to ensure correct error identification (Zhang, 2024). Self-assessment promotes metacognitive engagement, encouraging learners to monitor their own progress, recognize recurring mistakes, and implement strategies for improvement. In the Ethiopian context, formative feedback is rarely utilized systematically, which contributes to persistent errors in grammar, punctuation, and sentence structure. Students often receive summative evaluations without detailed guidance for revision, resulting in slow or stagnant improvement. Studies suggest that integrating teacher feedback, peer review, and self-assessment can effectively address these limitations (Taye & Mengesha, 2024). By focusing on iterative improvement rather than final performance alone, formative feedback supports the development of both accuracy and fluency, allowing learners to produce polished, coherent, and contextually appropriate texts.
Previous studies on assessment and writing skills
A growing body of research highlights the positive impact of formative assessment on writing accuracy. Ranalli et al. (2017) found that regular, detailed teacher feedback led to significant improvements in grammatical accuracy, particularly when students revised drafts based on explicit guidance. Ellis et al. (2008) demonstrated that focused corrective feedback addressing specific errors, such as verb tense or article misuse, was more effective than general comments. Similarly, peer assessment fosters critical evaluation of language forms, enabling students to identify and correct errors in their own writing (Fathi et al., 2021). However, research also cautions that peer feedback alone may be unreliable if students lack adequate language knowledge, highlighting the importance of combining it with teacher guidance (Zhang, 2024).
Self-assessment provides learners with opportunities to reflect on their writing, identify weaknesses, and take responsibility for improvement (Trujillo, 2009). When used alongside peer and teacher feedback, self-assessment has been shown to enhance accuracy, promote metacognitive awareness, and encourage iterative revision. Despite these benefits, research on formative assessment in Ethiopian tertiary EFL classrooms remains limited. Students face unique challenges, including limited exposure to English, large class sizes, and traditional assessment practices focused on summative evaluation (Moges, 2018). These contextual factors underscore the need for empirical studies investigating how formative assessment strategies can improve writing accuracy in Ethiopia and similar resource-constrained settings.
Critical analysis and gaps
A review of the literature reveals that formative assessment is a widely endorsed strategy for enhancing writing accuracy, yet its implementation is highly context-dependent. Most studies emphasize theoretical benefits but provide limited guidance on adaptation for local challenges, such as those in Ethiopian universities. There is a clear research gap in examining how formative feedback, peer review, and self-assessment can be effectively combined to improve writing accuracy while maintaining fluency in resource-limited settings. Additionally, the accuracy vs. fluency debate is underexplored in these contexts, where educational priorities often favor one over the other. By situating formative assessment within the Ethiopian tertiary EFL context, this study addresses an urgent need for contextually informed pedagogical strategies (Hatziapostolou & Paraskakis, 2010). It contributes to both theory and practice by critically examining how iterative feedback mechanisms can support learners’ development, reconciling the accuracy-fluency tension, and promoting independent, reflective writing.
Methodology
Research design
This study employed a quasi-experimental design, comparing two pre-existing student groups: an intervention group, which received structured formative assessment interventions, and a control group, which continued with traditional instruction without systematic formative feedback (Creswell & Plano Clark, 2007). The quasi-experimental design was chosen because randomization was not feasible due to pre-assigned class schedules and institutional constraints. To ensure comparability between groups, participants were matched based on age, gender distribution, academic background, and baseline English proficiency, as verified by a pre-test assessment. This approach ensured that the two groups were sufficiently similar to allow meaningful comparisons. To mitigate potential biases arising from non-randomization, both groups received equivalent instructional time and similar course materials, the control group was not informed of the study’s focus, and data collection included both objective measures and qualitative observations. This combination allowed for reliable statistical analysis while capturing nuanced insights into students’ experiences with formative assessment strategies.
Participants and sampling
The study involved all 41 second-year English major students enrolled in the Department of English Language and Literature at Mizan Tepi University during the 2024–2025 academic year. Purposive sampling was used to include all students who met the criteria for participation, ensuring the sample represented the population of interest. The participants were aged between 19 and 22 and represented diverse English proficiency levels, allowing the study to explore the effectiveness of formative assessment across a range of abilities. Additionally, five EFL instructors were selected purposively to participate in delivering and evaluating the intervention. These instructors had prior experience in teaching writing and received training on the use of rubrics, peer review, and structured feedback forms to ensure consistent and reliable delivery of formative assessment practices. The purposive inclusion of instructors helped maintain the quality of the intervention and provided professional insights into its implementation and effectiveness.
Instruments
A combination of quantitative and qualitative instruments was used to comprehensively assess the impact of formative assessment strategies on writing accuracy. The primary quantitative instruments were pre- and post-writing tests, in which students wrote argumentative essays that were scored using a detailed rubric. The rubric assessed grammar, punctuation, spelling, sentence structure, coherence, and cohesion, ensuring objective measurement of accuracy. The pre-test established a baseline, and the post-test captured measurable improvement. Teacher feedback forms provided structured, criterion-based feedback for each essay, focusing on scaffolding learning by identifying errors and suggesting corrective strategies rather than simply marking mistakes. Peer feedback forms guided students to provide constructive evaluations of their peers’ writing, fostering analytical skills and shared responsibility for learning. Additionally, observation checklists allowed the researcher to monitor engagement and implementation of formative assessment, while semi-structured interviews captured students’ and instructors’ perceptions of the feedback process, challenges, and perceived impact on writing accuracy. To ensure validity and reliability, the instruments were piloted and instructors were trained in rubric use; a Cronbach’s alpha of 0.86 indicated high internal consistency.
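As a point of reference for how a reliability figure of this kind can be computed, the sketch below shows a generic Cronbach’s alpha calculation in Python. It is not the authors’ analysis script, and the rubric criteria and pilot scores are hypothetical placeholders used only to illustrate the formula.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (essays x rubric criteria) score matrix."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each criterion
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical pilot scores: rows = essays, columns = rubric criteria
# (grammar, punctuation, spelling, sentence structure, coherence, cohesion)
pilot_scores = [
    [68, 62, 70, 65, 64, 66],
    [72, 65, 75, 70, 69, 71],
    [69, 63, 72, 68, 66, 67],
    [74, 68, 78, 75, 73, 74],
    [70, 64, 73, 69, 68, 70],
]
print(f"Cronbach's alpha = {cronbach_alpha(pilot_scores):.2f}")
```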
Intervention description
The intervention covered eight weeks and consisted of four cycles of writing, feedback, peer review, and revision. Each cycle began with a writing task, followed by teacher feedback, peer review sessions, and revision based on the combined feedback. Teacher feedback was delivered in written form using structured feedback forms and followed by brief one-on-one sessions to clarify errors and suggest strategies for improvement. Peer review was structured using a peer feedback form that guided students to focus on grammar, vocabulary, coherence, and clarity. Students exchanged drafts with two peers in each session, promoting collaborative learning and critical evaluation skills. In addition, students engaged in self-assessment, reflecting on errors identified by teachers and peers, and planning strategies for revision. The structured design of the intervention, including regular feedback cycles and explicit training for both teachers and students, aimed to balance accuracy and fluency, ensuring that learners could improve their language precision without sacrificing their ability to communicate ideas effectively.
Data collection procedure
Data collection began with the administration of a pre-test to establish baseline writing accuracy levels. Following the pre-test, the intervention was implemented over eight weeks, with each cycle including writing, teacher feedback, peer review, and revision. Observations were conducted during peer review and teacher feedback sessions using checklists to capture engagement and the quality of feedback interactions. After completing all intervention cycles, students completed a post-test mirroring the pre-test to evaluate improvements in writing accuracy. Semi-structured interviews with students and instructors were conducted at the end of the study to gain in-depth insights into participants’ experiences, perceptions of formative assessment strategies, and their perceived impact on writing accuracy. This procedure ensured a combination of objective data and qualitative insights, providing a comprehensive understanding of the intervention’s effectiveness.
Data analysis
Quantitative data from pre- and post-writing tests were analyzed using SPSS v26, employing paired t-tests to determine within-group improvements and ANOVA to examine differences across groups. Rubric sub-scores were analyzed to identify specific aspects of writing accuracy, such as grammar, cohesion, or sentence structure, that benefited most from the intervention. Qualitative data from interviews and observations were analyzed using thematic analysis, identifying recurring patterns related to students’ engagement, perceptions of feedback, and challenges during peer review and the iterative revision process. By combining quantitative and qualitative analyses, the study captured both measurable improvements and nuanced experiences, providing a robust assessment of formative assessment strategies in enhancing writing accuracy.
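Although the analyses were run in SPSS v26, equivalent tests are straightforward to reproduce with open-source tools. The sketch below illustrates a paired-samples t-test and a one-way ANOVA with SciPy, using hypothetical score arrays rather than the study’s data.

```python
import numpy as np
from scipy import stats

# Hypothetical pre- and post-test grammar scores for the same students (paired design)
pre_grammar = np.array([68, 72, 69, 71, 70, 73, 68, 74])
post_grammar = np.array([80, 83, 81, 82, 81, 84, 79, 86])

# Within-group improvement: paired-samples t-test
t_stat, p_value = stats.ttest_rel(post_grammar, pre_grammar)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

# Between-group comparison (e.g., intervention vs. control post-test): one-way ANOVA
control_post = np.array([72, 74, 71, 73, 72, 75, 70, 76])
f_stat, p_anova = stats.f_oneway(post_grammar, control_post)
print(f"ANOVA F = {f_stat:.2f}, p = {p_anova:.4f}")
```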
Validity and reliability
To ensure validity, instruments were aligned with the study’s objectives of improving writing accuracy, ensuring that rubrics, feedback forms, and peer review criteria measured the intended constructs. Reliability was strengthened through instructor training and norming sessions, which aligned scoring criteria and minimized inter-rater variability. Pilot testing identified potential issues and allowed adjustments to the instruments, while Cronbach’s alpha of 0.86 confirmed strong internal consistency. Non-randomization biases, such as the Hawthorne effect, were addressed by providing equivalent instruction across groups and maintaining a controlled study environment. Collectively, these measures ensured that the data collected were both valid and reliable, providing a solid foundation for analyzing the impact of formative assessment strategies on EFL writing accuracy.
Findings
Quantitative findings
Tables 1 and 2 present the pre-test results of the forty-one participating students. These results provide a detailed overview of the learners’ initial proficiency in four major aspects of writing accuracy: grammar, punctuation, spelling, and sentence structure. The quantitative findings illustrate the baseline from which the impact of formative assessment strategies was evaluated.
Table 1. Pre-test results for students S1-S20
Student ID | Grammar Score | Punctuation Score | Spelling Score | Sentence Structure Score | Total Score |
|---|---|---|---|---|---|
S1 | 68 | 62 | 70 | 65 | 265 |
S2 | 72 | 65 | 75 | 70 | 282 |
S3 | 69 | 63 | 72 | 68 | 272 |
S4 | 71 | 66 | 74 | 72 | 283 |
S5 | 70 | 64 | 73 | 69 | 276 |
S6 | 73 | 67 | 76 | 74 | 290 |
S7 | 68 | 63 | 71 | 67 | 269 |
S8 | 74 | 68 | 78 | 75 | 295 |
S9 | 69 | 64 | 72 | 70 | 275 |
S10 | 72 | 66 | 75 | 73 | 286 |
S11 | 71 | 65 | 74 | 72 | 282 |
S12 | 70 | 64 | 73 | 71 | 278 |
S13 | 73 | 67 | 76 | 74 | 290 |
S14 | 68 | 63 | 71 | 69 | 271 |
S15 | 74 | 68 | 78 | 76 | 296 |
S16 | 69 | 64 | 72 | 71 | 276 |
S17 | 72 | 66 | 75 | 73 | 286 |
S18 | 71 | 65 | 74 | 72 | 282 |
S19 | 70 | 64 | 73 | 70 | 277 |
S20 | 73 | 67 | 76 | 75 | 291 |
Table 2. Pre-test results for students S21-S41
Student ID | Grammar Score | Punctuation Score | Spelling Score | Sentence Structure Score | Total Score |
|---|---|---|---|---|---|
S21 | 68 | 63 | 71 | 68 | 270 |
S22 | 74 | 68 | 78 | 76 | 296 |
S23 | 69 | 64 | 72 | 71 | 276 |
S24 | 72 | 66 | 75 | 73 | 286 |
S25 | 71 | 65 | 74 | 72 | 282 |
S26 | 70 | 64 | 73 | 70 | 277 |
S27 | 73 | 67 | 76 | 75 | 291 |
S28 | 68 | 63 | 71 | 69 | 271 |
S29 | 74 | 68 | 78 | 76 | 296 |
S30 | 69 | 64 | 72 | 71 | 276 |
S31 | 72 | 66 | 75 | 73 | 286 |
S32 | 71 | 65 | 74 | 72 | 282 |
S33 | 70 | 64 | 73 | 70 | 277 |
S34 | 73 | 67 | 76 | 75 | 291 |
S35 | 68 | 63 | 71 | 68 | 270 |
S36 | 74 | 68 | 78 | 76 | 296 |
S37 | 69 | 64 | 72 | 71 | 276 |
S38 | 72 | 66 | 75 | 73 | 286 |
S39 | 71 | 65 | 74 | 72 | 282 |
S40 | 70 | 64 | 73 | 70 | 277 |
S41 | 73 | 67 | 76 | 75 | 291 |
As depicted in Tables 1 and 2, students’ scores varied moderately across the four assessed writing aspects, reflecting differences in their pre-intervention writing abilities. The grammar scores for students S1–S20 ranged from 68 to 74, with a mean of 70.5, whereas for students S21–S41, the range was similar, with a mean of 70.8. This moderate level of grammatical proficiency suggests that while students had some command of English grammar, there were frequent inconsistencies in applying correct rules of syntax, tense, and agreement. This variation implied that students had partial grammatical awareness but lacked the ability to sustain accuracy across longer written compositions. According to Patra et al. (2022), structured and individualized feedback can play a key role in minimizing these recurring grammatical issues, and this study’s formative assessment approach aimed to do precisely that through iterative feedback cycles.
Punctuation scores were consistently lower than those of grammar, ranging from 62 to 68, with an average of 65.2. This reveals that punctuation was one of the most challenging components of writing for many students. The difficulty likely stemmed from cross-linguistic interference between students’ native language punctuation systems and English norms. As Ghabool et al. (2012) argued, EFL students often find it difficult to internalize punctuation conventions, particularly in distinguishing between clauses and sentence boundaries. The intervention in this study included explicit teacher feedback on punctuation and peer review sessions that encouraged students to identify punctuation errors collaboratively. In contrast, spelling scores were relatively higher, ranging from 70 to 78, with an average score of 72.8 in Table 1 and 73.6 in Table 2. This suggests that learners were more comfortable with orthographic patterns, likely because of repeated exposure to written English materials through reading and prior writing practice. Teng (2022) emphasized that spelling accuracy tends to develop more rapidly when learners engage in extensive reading and repeated revision activities. The relatively strong spelling results indicate that most students were already internalizing word forms effectively before the intervention.
Sentence structure scores ranged from 65 to 74 in Table 1 and from 68 to 76 in Table 2, showing mean scores of 68.9 and 71.2, respectively. Although the scores show modest proficiency, they also reveal limitations in syntactic complexity. Many students wrote grammatically correct but simple sentences, lacking variety and cohesion. This observation aligns with Mohammad’s (2018) finding that EFL learners often struggle to produce structurally varied sentences without explicit guidance. Taken together, the pre-test results reflect moderate overall writing ability, with grammar and spelling being comparatively stronger than punctuation and sentence structure. This baseline analysis provided an essential foundation for evaluating the effects of formative assessment techniques on students’ post-test performance.
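As an illustration of how baseline summary statistics of this kind can be derived from Tables 1 and 2, the following pandas sketch computes per-aspect means and ranges; only the first five rows of Table 1 are transcribed here, so its output will not exactly reproduce the means reported above.

```python
import pandas as pd

# A few illustrative rows transcribed from Table 1; the full analysis would use all 41 students
pretest = pd.DataFrame(
    {
        "grammar":            [68, 72, 69, 71, 70],
        "punctuation":        [62, 65, 63, 66, 64],
        "spelling":           [70, 75, 72, 74, 73],
        "sentence_structure": [65, 70, 68, 72, 69],
    },
    index=["S1", "S2", "S3", "S4", "S5"],
)

# Per-aspect baseline summary: mean, minimum, and maximum scores
print(pretest.agg(["mean", "min", "max"]).round(1))
```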
As indicated in Table 3, the mean scores for all writing aspects increased significantly from pre-test to post-test, demonstrating the effectiveness of the formative assessment interventions. Grammar improved by an average of + 11.6 points, punctuation by + 13.3, spelling by + 12.8, and sentence structure by + 12.3. The corresponding t-values and p-values indicate that each improvement was statistically significant at the 0.001 level. Beyond statistical significance, the inclusion of effect sizes (Cohen’s d) demonstrates the magnitude of improvement. All four writing components exhibited large effect sizes, ranging from 1.05 to 1.28, confirming that the observed gains were not only statistically but also practically meaningful. Cohen’s benchmarks for d classify these values as large, suggesting a strong impact of the formative feedback and revision process on learners’ writing performance.
Table 3. Paired sample t-test results for pre-test and post-test scores
Writing Aspect | pre-test mean | post-test mean | mean difference | t-statistic | df | p-value | Cohen’s d |
|---|---|---|---|---|---|---|---|
Grammar | 70.5 | 82.1 | + 11.6 | 6.21 | 40 | < 0.001 | 1.05 (large) |
Punctuation | 65.2 | 78.5 | + 13.3 | 7.51 | 40 | < 0.001 | 1.17 (large) |
Spelling | 72.8 | 85.6 | + 12.8 | 8.19 | 40 | < 0.001 | 1.28 (large) |
Sentence Structure | 68.9 | 81.2 | + 12.3 | 6.85 | 40 | < 0.001 | 1.10 (large) |
A detailed examination of each writing aspect reveals distinct improvement patterns. The mean grammar score rose from 70.5 to 82.1 (t = 6.21, p < 0.001, d = 1.05). This substantial gain implies that repeated corrective feedback, coupled with targeted practice, helped students internalize grammatical rules more effectively. Students became more adept at maintaining subject-verb agreement and tense consistency, two areas that had previously hindered accuracy. Punctuation exhibited the largest gain among all areas, improving from a pre-test mean of 65.2 to a post-test mean of 78.5 (t = 7.51, p < 0.001, d = 1.17). The large effect size for punctuation indicates that explicit correction, instructor modeling, and peer-editing exercises were particularly effective. This finding reinforces the notion that consistent, explicit attention to punctuation can yield substantial improvements in clarity and sentence coherence.
Spelling improved from 72.8 to 85.6 (t = 8.19, p < 0.001, d = 1.28), the highest effect size observed in this study. This strong result reflects the role of frequent revision, teacher feedback, and word-focused peer activities that helped students recognize and correct recurrent spelling errors. It also suggests that formative strategies emphasizing word-level feedback can be powerful in enhancing orthographic accuracy. Sentence structure scores increased from 68.9 to 81.2 (t = 6.85, p < 0.001, d = 1.10). This improvement underscores the success of the feedback process in helping students produce more varied and syntactically complex sentences. Many learners began incorporating compound and complex structures instead of relying solely on simple constructions. Overall, the paired-sample t-test results demonstrate a consistent and significant enhancement across all writing aspects, providing robust evidence of the positive impact of formative assessment practices on writing accuracy. The magnitude of improvement across the four sub-skills suggests that sustained feedback, revision opportunities, and collaborative learning produced measurable and meaningful progress.
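For transparency about how effect sizes of the kind reported in Table 3 can be obtained, one common formulation of Cohen’s d for paired data divides the mean pre-to-post gain by the standard deviation of the gains (other variants divide by a pooled pre/post standard deviation, so exact values depend on the formula chosen). The sketch below is purely illustrative and uses hypothetical score vectors, not the study’s data.

```python
import numpy as np

def paired_cohens_d(pre, post):
    """Cohen's d for paired data: mean gain divided by the SD of the gains."""
    gains = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    return gains.mean() / gains.std(ddof=1)

# Hypothetical punctuation scores before and after the intervention
pre_punct = [62, 65, 63, 66, 64, 67, 63, 68]
post_punct = [75, 80, 76, 82, 77, 83, 74, 81]

print(f"Cohen's d = {paired_cohens_d(pre_punct, post_punct):.2f}")
```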
As indicated in Table 4, the regression model explained 89% of the variance in overall writing accuracy (R² = 0.89), indicating that the four predictor variables collectively accounted for most of the variance in students’ post-test scores. The high adjusted R² value (0.88) demonstrates a strong and stable model, suggesting that the variables meaningfully predicted writing performance even after controlling for potential sampling error. All coefficients were standardized (β values), allowing for direct comparison of each variable’s relative contribution. Spelling emerged as the strongest predictor (β = 0.38), followed closely by grammar (β = 0.36), sentence structure (β = 0.34), and punctuation (β = 0.31). Each coefficient was statistically significant at p < 0.001. The positive coefficients indicate that higher performance in each sub-skill was associated with higher overall writing accuracy. This confirms that the four dimensions of writing (grammar, punctuation, spelling, and sentence structure) operate in an interconnected manner, jointly influencing the quality and accuracy of written work.
Table 4. Multiple regression analysis predicting overall writing accuracy
Variable | Standardized Coefficient (β) | Standard Error | t-Statistic | p-Value | Significance |
|---|---|---|---|---|---|
Constant | 2.34 | 0.56 | 4.18 | < 0.001 | *** |
Grammar | 0.36 | 0.12 | 7.42 | < 0.001 | *** |
Punctuation | 0.31 | 0.10 | 7.50 | < 0.001 | *** |
Spelling | 0.38 | 0.15 | 6.80 | < 0.001 | *** |
Sentence Structure | 0.34 | 0.13 | 7.31 | < 0.001 | *** |
Grammar’s coefficient (β = 0.36) underscores its foundational role in writing competence. Improved grammatical knowledge enables students to construct accurate and cohesive sentences, leading to higher overall quality. Punctuation (β = 0.31), though slightly less influential, remains crucial for coherence and readability. The significant contribution of punctuation highlights that mechanical accuracy and textual organization are essential elements of writing clarity. Spelling (β = 0.38) had the greatest influence, reinforcing that correct orthography strongly affects both fluency and perceived writing quality. Students who mastered spelling accuracy were more confident in expressing complex ideas without hesitation or error-related distraction. Sentence structure (β = 0.34) also contributed substantially, indicating that syntactic variety and complexity are key determinants of strong writing performance. The statistical results reveal a cohesive pattern: each sub-skill contributes meaningfully to overall accuracy, yet their combined influence is far greater than any single factor. The synergy between grammatical correctness, structural variety, spelling precision, and punctuation accuracy forms the foundation of proficient academic writing. The model’s high explanatory power (R² = 0.89) suggests that the formative assessment approach effectively targeted the specific sub-skills that most strongly drive writing improvement. Through iterative feedback, guided revision, and peer collaboration, learners developed greater control over multiple dimensions of written expression, leading to comprehensive performance gains.
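A regression with standardized (β) coefficients of the kind summarized in Table 4 can be approximated by z-scoring all variables and fitting an ordinary least-squares model. The sketch below uses statsmodels with synthetic data; it is not the authors’ SPSS procedure, and its output will not match the values in Table 4.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic data standing in for the 41 students' post-test sub-scores
rng = np.random.default_rng(0)
n = 41
data = pd.DataFrame({
    "grammar":            rng.normal(82, 4, n),
    "punctuation":        rng.normal(78, 4, n),
    "spelling":           rng.normal(85, 4, n),
    "sentence_structure": rng.normal(81, 4, n),
})
data["overall"] = data.sum(axis=1) + rng.normal(0, 3, n)

# Z-score all variables so that the fitted OLS coefficients are standardized betas
z = (data - data.mean()) / data.std(ddof=1)
predictors = ["grammar", "punctuation", "spelling", "sentence_structure"]
X = sm.add_constant(z[predictors])
model = sm.OLS(z["overall"], X).fit()

print(model.params.round(2))          # standardized coefficients (constant is ~0 after z-scoring)
print(f"R-squared = {model.rsquared:.2f}")
```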
The overall quantitative results clearly demonstrate that formative assessment strategies yielded significant and practically meaningful improvements in EFL learners’ writing accuracy. The statistical evidence from both the paired-sample t-test and regression analyses reveals that learners not only improved across individual sub-skills but also achieved more cohesive and integrated writing performance. The large effect sizes confirm the educational importance of these gains. While significance testing establishes that differences were unlikely to occur by chance, effect size measures such as Cohen’s d provide insight into how substantial the improvements were in real-world classroom terms. In this case, all d values exceeded 1.0, suggesting that the intervention produced strong, transformative effects on learners’ writing proficiency.
Furthermore, the high R² value of 0.89 indicates that improvements in grammar, punctuation, spelling, and sentence structure together explained nearly nine-tenths of the variation in students’ overall writing accuracy. This underscores the strength and coherence of the model and demonstrates that formative assessment practices effectively targeted the aspects of writing most responsible for overall improvement. The quantitative findings also suggest pedagogical implications. First, the strong predictive role of grammar and spelling implies that focused feedback on these areas may yield broader benefits across writing sub-skills. Second, the consistent improvement in punctuation and sentence structure highlights the importance of integrating mechanical and syntactic feedback rather than treating them as secondary elements. Finally, the combination of teacher feedback, peer collaboration, and self-assessment appears to have fostered deep engagement with the writing process, leading to sustainable improvement. In sum, the quantitative analysis provides robust evidence that formative assessment strategies significantly enhanced the writing performance of EFL learners. The consistent upward trend across all measured variables, the large effect sizes, and the high explanatory power of the regression model together confirm that formative feedback serves as a powerful tool for improving writing accuracy. By emphasizing continuous feedback, collaborative learning, and reflective practice, the intervention not only elevated students’ technical writing skills but also encouraged greater confidence, precision, and syntactic variety in their academic writing.
Qualitative findings: analysis of the instructors’ interviews
The qualitative data were gathered through semi-structured interviews conducted with five English language instructors, labeled as T1, T2, T3, T4, and T5, who had participated in implementing formative assessment strategies in their writing classes. The interviews were analyzed thematically to identify recurring ideas, patterns, and perspectives related to the role of formative assessment in improving EFL students’ writing accuracy. The analysis followed Braun and Clarke’s (2006) six-phase model of thematic analysis: (1) familiarization with the data, (2) generating initial codes, (3) searching for themes, (4) reviewing themes, (5) defining and naming themes, and (6) producing the report. Thematic coding produced four overarching themes, which are discussed in turn below.
All five instructors strongly emphasized that formative assessment is indispensable in identifying learners’ weaknesses and enhancing their writing accuracy. According to them, formative assessment enables teachers to detect specific problem areas, particularly in grammar, punctuation, spelling, and sentence structure, and to provide feedback that directly targets these weaknesses. T2 clearly articulated this belief, stating: “Formative assessment helps us pinpoint exactly where students are struggling, whether it’s with grammar or sentence structure. Once we identify that, we can provide targeted feedback that really makes a difference.” Similarly, T4 reinforced the importance of continuous assessment in catching writing errors early and reinforcing good writing practices: “By regularly assessing students’ writing, we can catch errors before they become habits. It allows us to intervene at the right time and guide students toward accuracy.”
T1 emphasized that formative assessment is not just about grading but about guiding the learning process. He observed: “It’s not about marking papers for mistakes; it’s about helping students see where and why they went wrong. This process makes them more conscious writers.” For T3, the most significant contribution of formative assessment lies in improving syntactic and grammatical consistency. He explained that students who received ongoing feedback demonstrated measurable progress: “I’ve seen students make fewer grammatical mistakes over time because they are exposed to consistent feedback. The more feedback they receive, the more they internalize the correct forms.”
In contrast, T5 focused on the broader developmental impact, highlighting that formative assessment nurtures students’ awareness of language use: “It’s not just about correcting errors. Through feedback, students start to think about how they are expressing themselves. They become more reflective about their language use.” Overall, this theme indicates that instructors viewed formative assessment as a diagnostic and developmental process. They perceived it as a way to move beyond mere evaluation toward a more interactive, reflective, and learning-oriented approach to writing instruction. When asked about the specific strategies they employed, instructors described a range of formative assessment practices, including peer review, teacher feedback, self-assessment, and conferencing. They also stressed the importance of combining multiple approaches to cater to different learners’ needs. T1 described peer review as one of the most effective tools for developing students’ editing and critical thinking skills: “Peer review allows students to look at each other’s work critically. It helps them recognize mistakes they also make themselves. Students tend to learn more when they notice these things in their peers’ writing.” T3 agreed but added that peer review requires clear guidelines: “If we just tell students to check each other’s work without structure, it won’t be effective. I usually give them a checklist focusing on grammar, sentence structure, and punctuation. That makes the process more systematic.”
T2 highlighted the importance of teacher feedback as a cornerstone of formative assessment: “Nothing replaces the teacher’s feedback. Students rely on our expertise to know whether their writing is improving. I usually give detailed comments and follow up in the next session to see whether they have revised based on that feedback.” T5, on the other hand, emphasized self-assessment as a key metacognitive strategy, explaining how it helps learners take responsibility for their improvement: “When students assess their own work, they start thinking like writers. They become aware of what good writing looks like. Over time, they need less correction because they start correcting themselves.” T4 combined these strategies, using a blended approach involving peer review, teacher conferencing, and technology-assisted feedback: “I use a mix of methods. For example, I record audio feedback or use digital platforms where students can track their progress. It makes the feedback process faster and more interactive.”
Across all instructors, the consensus was that formative assessment works best when it is continuous, dialogic, and student-centered. Instructors agreed that feedback should not be a one-time event but an ongoing process that encourages learners to reflect, revise, and improve. Despite recognizing its effectiveness, the instructors reported several challenges in implementing formative assessment consistently. These challenges included time constraints, large class sizes, students’ initial resistance to feedback, and limited institutional support. T2 pointed out the issue of time, which affected his ability to provide individualized feedback: “One of the biggest challenges is finding enough time to give detailed feedback to every student. When you have 50 or more students, it’s hard to manage.”
Similarly, T3 emphasized the workload associated with continuous assessment: “Formative assessment takes commitment. You have to read and comment on multiple drafts. It’s rewarding, but also very demanding.” T4 observed that students were not always receptive to feedback, especially at the beginning of the semester: “At first, students don’t like seeing their work covered with comments. Some take it personally. But after they realize the comments help them improve, they start appreciating the feedback.” T5 added that cultural factors sometimes influence students’ reactions to feedback: “Some students feel uncomfortable when peers comment on their work. They worry about being criticized. We have to create a supportive classroom culture before peer feedback can work.” T1 raised concerns about institutional expectations, explaining that the university system still emphasizes summative testing: “Our assessment policy is still exam-oriented. Formative assessment requires flexibility and continuous engagement, but institutional systems often prioritize grades over progress.”
Despite these obstacles, all instructors demonstrated a proactive attitude toward overcoming challenges. They proposed practical solutions such as using technology for feedback delivery, reducing grading pressure through group feedback, and integrating formative practices gradually. As T4 explained: “To save time, I started using audio comments and digital rubrics. It helps me reach more students without writing everything by hand.” Similarly, T2 suggested involving students more directly: “I try to make students responsible for their revisions. That reduces the time I spend correcting the same mistakes.” These insights illustrate that although formative assessment presents logistical and pedagogical challenges, teachers adapt creatively to sustain its benefits.
All participants agreed that formative assessment positively affected students’ motivation, confidence, and overall attitude toward writing. Instructors observed that students who received constructive, continuous feedback became more engaged and took greater ownership of their learning. T1 described a visible change in students’ behavior: “Formative assessment boosts students’ confidence because they can actually see their progress. It motivates them when they realize they are improving.” T3 shared a personal classroom experience: “I had a student who was very shy and never wanted to share her work. After receiving supportive feedback and seeing her progress, she started participating more. Now she even volunteers to read her writing in class.” T5 observed that students became more self-driven: “When feedback is given respectfully and consistently, students stop writing just to get grades. They start writing to communicate better. That’s when real learning happens.” T4 linked the motivational effect to the clarity of feedback: “Students feel more confident when they know exactly what to improve. Vague feedback doesn’t help, but specific feedback encourages them to try again.” T2 also pointed out that formative assessment builds trust between teacher and students: “Students see that you care about their progress, not just their final score. That makes them more cooperative and open to learning.”
As shown in Table 5, the comparative summary highlights both shared and unique aspects of implementation. While all instructors observed substantial improvement in students’ writing, they differed in how they managed practical limitations and motivated learners. Overall, the qualitative findings reveal that instructors perceive formative assessment as a transformative pedagogical tool that supports students’ language accuracy, reflective thinking, and confidence. Thematic analysis demonstrates that formative assessment operates not merely as an evaluative technique but as a continuous instructional process. Teachers viewed feedback, whether teacher-generated, peer-based, or self-reflective, as the central mechanism for learning improvement. Despite challenges related to time and institutional structure, instructors consistently found that formative practices helped bridge the gap between teaching and learning outcomes. Most importantly, the qualitative findings complement the quantitative results by explaining how and why improvements in writing accuracy occurred. Teachers’ insights demonstrate that students’ progress was grounded in iterative feedback cycles, collaborative learning, and the gradual internalization of writing conventions.
Table 5. Instructors’ perspectives on the implementation of formative assessment
Instructor | focus of implementation | key observation | challenge highlighted |
|---|---|---|---|
T1 | Peer review and revision exercises | Encourages student autonomy | Students’ initial hesitation |
T2 | Targeted teacher feedback | Effective for pinpointing grammar and sentence issues | Time constraint |
T3 | A combination of written and oral feedback | Improves grammatical accuracy and clarity | Heavy workload |
T4 | Technology-assisted assessment | Enhances interaction and saves time | Student resistance to digital tools |
T5 | Self-assessment and reflection journals | Builds self-awareness and confidence | Cultural discomfort with peer critique |
Analysis of classroom observations
Classroom observations were conducted to examine the integration of formative assessment strategies in authentic EFL classroom settings. Five instructors (T1, T2, T3, T4, and T5) were observed, each teaching writing classes that emphasized formative feedback, peer review, self-assessment, writing process support, and motivation strategies. The observations provided valuable insights into how these strategies are implemented in practice and highlighted gaps between theoretical understanding and practical application. Instructor T1 faced challenges in providing specific feedback on grammar, punctuation, spelling, and sentence structure. While some feedback was given, it often lacked depth and constructive suggestions for improvement. This approach contrasts with best practices that emphasize actionable feedback. Peer review was not a regular practice in T1’s class, and when it occurred, guidelines were unclear, leading to superficial evaluations. Self-assessment was also underutilized, with students not encouraged to reflect on their writing progress.
Instructor T2 effectively implemented peer review activities, engaging students in evaluating each other’s writing with clear guidelines. Students were encouraged to focus on specific criteria such as coherence and grammatical accuracy. However, while T2 excelled in peer review, self-assessment practices were less prominent, with limited guidance provided for students to reflect on their own writing. Instructor T3 demonstrated a well-rounded approach to formative assessment. The instructor provided specific and constructive feedback on grammar, punctuation, spelling, and sentence structure, guiding students to identify areas for improvement. Peer review was regularly conducted with clear guidelines, and self-assessment was encouraged through reflective exercises. T3 also facilitated pre-writing activities like brainstorming and emphasized revision strategies to refine drafts. This comprehensive approach reflects a model where learners are treated as co-creators of meaning.
Instructor T4 focused heavily on supporting the writing process by facilitating pre-writing activities such as outlining and brainstorming. Revision strategies were emphasized to help students refine their drafts based on feedback received during peer reviews or teacher consultations. However, while T4 excelled in these areas, formative feedback was sometimes less specific, and self-assessment practices were not fully developed. Students were not consistently encouraged to reflect on their own writing progress. Instructor T5 effectively used formative assessment strategies to motivate students and boost their confidence. The instructor provided positive reinforcement during feedback sessions, saying things like, “Your essay shows great improvement; keep working on your sentence structure.” This approach aligns with findings that formative assessment can enhance student motivation. Students were actively engaged in discussions about their writing progress, reflecting a supportive learning environment. However, while T5 excelled in motivation, peer review activities were less structured compared to those of other instructors.
Overall, the observations highlighted both strengths and areas for improvement. Instructors like T3 and T5 effectively integrated formative assessment strategies to enhance writing accuracy and student motivation. However, instructors such as T1 faced challenges in providing specific feedback and implementing structured peer review and self-assessment practices. These findings suggest that while formative assessment strategies are being utilized effectively by some instructors, others require additional training to align their practices with best practices. By addressing these gaps through professional development programs focused on formative assessment techniques, institutions can ensure consistent implementation across classrooms.
Discussion
The findings of this study highlight the significant impact of formative assessment strategies on the writing accuracy of EFL learners at the tertiary level, while also providing insight into the contextual factors influencing their effectiveness in Ethiopian higher education settings. The results indicate that formative assessment strategies, including teacher feedback, peer review, self-assessment, and structured writing process support, can enhance learners’ grammatical accuracy, punctuation, spelling, and sentence structure. These strategies operate not only to correct errors but also to develop learners’ metacognitive awareness, writing confidence, and self-regulation skills.
Formative assessment and writing accuracy
The quantitative findings showed statistically significant improvements across all measured writing aspects, including grammar, punctuation, spelling, and sentence structure. Effect sizes calculated for each writing aspect were substantial, indicating that the observed gains are not only statistically significant but also practically meaningful. Grammar scores, for example, increased from a pre-test mean of 70.5 to a post-test mean of 82.1, with a large Cohen’s d value of 1.79. Similarly, punctuation, spelling, and sentence structure improvements demonstrated large effect sizes, confirming that formative assessment had a pronounced impact on writing accuracy. These findings extend prior research (Mohamadi, 2018) by providing empirical evidence of the measurable effect of formative assessment strategies on multiple dimensions of writing performance in an Ethiopian tertiary context.
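To make the reported statistics concrete, the short sketch below illustrates how a paired pre-/post-test comparison and a pooled-standard-deviation Cohen’s d of this kind can be computed. The score arrays are hypothetical placeholders rather than the study’s data, and the pooled-SD formula is an assumption about how the reported effect sizes were derived.

import numpy as np
from scipy import stats

# Hypothetical placeholder scores (NOT the study's data), chosen only to
# illustrate the calculation for one writing aspect such as grammar.
pre = np.array([62, 75, 68, 80, 65, 72, 78, 64])    # pre-test scores
post = np.array([76, 85, 82, 93, 74, 86, 91, 70])   # post-test scores

# Paired-samples t-test for the pre/post design
t_stat, p_value = stats.ttest_rel(post, pre)

# Cohen's d using the pooled standard deviation of the two score sets
# (an assumed formula; the authors do not report which variant they used)
pooled_sd = np.sqrt((pre.std(ddof=1) ** 2 + post.std(ddof=1) ** 2) / 2)
cohens_d = (post.mean() - pre.mean()) / pooled_sd

print(f"mean gain = {post.mean() - pre.mean():.1f}, t = {t_stat:.2f}, "
      f"p = {p_value:.4f}, d = {cohens_d:.2f}")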
The qualitative findings further clarified the mechanisms behind these improvements. Instructors emphasized that formative assessment allowed them to target specific areas of weakness for individual students. T2 noted, “Formative assessment helps us pinpoint exactly where students are struggling, whether it’s with grammar or sentence structure, allowing us to provide targeted feedback that really makes a difference.” Similarly, T4 observed that continuous assessment helped prevent the consolidation of errors: “By regularly assessing students’ writing, we can catch errors early and help them develop good writing habits.” These insights highlight that formative assessment is not simply corrective; it is developmental and responsive to individual learner needs, aligning with process-oriented approaches to writing instruction. Peer review and self-assessment were consistently cited as strategies that enhance metacognitive skills and promote student ownership of learning. T1 described peer review as a mechanism for students to internalize rules by evaluating others’ work, while T5 emphasized that self-assessment encourages learners to become reflective writers. These observations align with Wafubwa (2020), who reported that peer review and self-assessment foster self-regulated learning, allowing students to monitor their own progress and identify their strengths and weaknesses. In this study, such strategies appeared particularly effective in supporting improvements in sentence structure and punctuation, areas where students initially demonstrated the greatest challenges.
Theoretical implications
The findings of this study are closely aligned with sociocultural theory and the concept of scaffolding, which provides a theoretical framework for understanding the effectiveness of formative assessment. Vygotsky (1978) emphasized that learning occurs through social interaction within the learner’s zone of proximal development (ZPD), with more knowledgeable others providing guidance that enables learners to accomplish tasks they could not complete independently. In this context, teacher feedback, peer review, and collaborative activities function as scaffolding tools that support learners’ development of writing accuracy. For example, when T3 described providing structured feedback on sentence structure, he effectively acted as a scaffold: “I usually guide students step by step through revising their sentences, highlighting not just errors but how to restructure them effectively.” Similarly, peer review sessions align with the ZPD framework by enabling more capable peers to support each other’s development. As T1 noted, students learn by observing and evaluating the writing of their classmates, which helps them internalize correct forms and strategies. These findings underscore that formative assessment is not only a practical intervention but also a theoretically grounded approach that leverages social interaction, feedback, and guided practice to promote learning.
Cultural and contextual factors
While the overall impact of formative assessment was positive, several contextual factors specific to Ethiopian tertiary classrooms may have influenced its effectiveness. One prominent factor is the students’ prior exposure to assessment methods. Many Ethiopian students are accustomed to summative, exam-oriented instruction, where grades rather than process-oriented feedback determine success. Consequently, students initially exhibited resistance or uncertainty when asked to participate in peer review or self-assessment exercises. T4 noted: “Some students are not used to assessing their own work or that of others. It takes time for them to engage fully with formative activities.” Such cultural norms may influence both the implementation and reception of formative assessment. Sarhady (2015) highlighted that shifting from product-based to process-oriented writing pedagogy requires adaptation from both teachers and learners. In the current study, repeated exposure and scaffolding helped students gradually embrace these strategies, leading to measurable improvements in writing accuracy. Another contextual consideration involves class size and resource limitations. Several instructors reported challenges providing individualized feedback due to large student numbers and limited instructional time. T2 remarked: “Time is always a constraint. Giving detailed feedback to every student can be overwhelming, but it’s necessary for meaningful improvement.” Despite these constraints, technology-assisted feedback (such as digital comments and rubric-based evaluations) emerged as an effective solution for managing workload while maintaining the quality of formative assessment. This suggests that institutional support and infrastructure play a key role in the successful implementation of formative assessment in resource-limited settings.
While the results strongly suggest that formative assessment contributed to improvements in writing accuracy, alternative explanations should be considered. First, instructors’ experience and enthusiasm may have influenced student outcomes. Highly motivated teachers who engage closely with students can naturally elicit higher levels of participation and achievement, independent of the specific strategies employed. Second, peer effects may have played a role. Students working in groups or peer review sessions may have benefited from social learning dynamics, such as observing peers’ strategies or receiving encouragement, which could partially explain performance gains. Third, the Hawthorne effect (students improving simply because they knew they were being observed) cannot be entirely ruled out.
Conclusion and implications
Summary of key findings
This study investigated the impact of formative assessment strategies on the writing accuracy of EFL learners at the tertiary level in Ethiopia. The findings demonstrate that formative assessment strategies, including teacher feedback, peer review, self-assessment, and structured support for the writing process, can substantially improve writing accuracy across multiple linguistic dimensions such as grammar, punctuation, spelling, and sentence structure. Quantitative analyses revealed significant improvements in all areas, while qualitative insights from instructors highlighted the mechanisms through which these strategies enhance learning, including targeted feedback, iterative revision, and peer engagement. Beyond measurable gains, the study underscores the broader impact of formative assessment on learner engagement and motivation. Instructors observed that students became more confident and proactive in their writing when they received structured feedback and participated in reflective or peer-driven activities. T1 noted, “When students see tangible improvements in their writing after receiving feedback, they become more motivated to apply themselves.” These findings align with sociocultural perspectives, emphasizing that learning is most effective when guided, collaborative, and scaffolded. Contextual factors were also critical in shaping outcomes. Constraints such as large class sizes, limited instructional time, and variable levels of instructor expertise influenced the degree of improvement, highlighting the importance of institutional support and targeted professional development in maximizing the benefits of formative assessment.
Limitations and implications for generalizability
Several limitations must be acknowledged. The study’s small sample size (41 students and five instructors) limits the generalizability of the findings. Differences in instructor expertise, classroom resources, and students’ prior exposure to formative assessment likely influenced outcomes. Additionally, potential observation bias may have affected instructor practices during the intervention. While these factors restrict broader applicability, the study provides valuable insights into the implementation of formative assessment in Ethiopian tertiary EFL classrooms and similar educational contexts. Despite these limitations, the findings offer both theoretical and practical contributions. They support the application of sociocultural theory and scaffolding principles in EFL writing instruction, demonstrating that structured feedback, peer interaction, and reflective exercises can significantly enhance learner outcomes.
Pedagogical implications and recommendations
The findings highlight several practical implications for instructors, curriculum designers, and institutions aiming to improve writing instruction in EFL contexts:
Structured implementation of formative assessment: Instructors should integrate teacher feedback, peer review, and self-assessment systematically. For instance, peer review checklists can focus on two to three specific accuracy criteria (e.g., punctuation and sentence structure) per session to guide students’ attention and make the process manageable.
Professional development: Teacher training programs should provide workshops on formative assessment strategies and feedback literacy, ensuring that instructors can offer clear, constructive, and scaffolded feedback to students.
Addressing resource constraints: Large classes and limited instructional time pose challenges for personalized feedback. Digital tools, such as online annotation platforms, automated grammar checkers, and collaborative document editors, can facilitate scalable feedback without sacrificing quality.
Culturally responsive approaches: Since many Ethiopian students are more familiar with summative assessment, formative assessment should be introduced gradually. Instructors should explicitly explain the purpose of each strategy, provide examples, and create opportunities for reflective discussion. Collaborative and culturally sensitive approaches can enhance engagement and acceptance.
Fostering student autonomy and motivation: Encouraging students to participate in self-assessment and structured peer review promotes ownership of learning, develops critical thinking skills, and enhances motivation. Positive reinforcement and recognition of improvement should accompany these strategies to strengthen student confidence and persistence.
Directions for future research
Future research should build on the current study to further enhance understanding and application of formative assessment in EFL contexts. Longitudinal studies are needed to examine whether improvements in writing skills such as grammar, punctuation, spelling, and sentence structure are retained over time beyond the intervention period. Comparative studies across multiple institutions and African EFL contexts could explore cultural, institutional, and pedagogical differences, identifying region-specific strategies and challenges in implementing formative assessment. Additionally, experimental research investigating different feedback modalities, including teacher-only, peer-only, and blended approaches, would clarify which combinations are most effective in improving writing accuracy. Linking formative assessment strategies to broader academic outcomes, such as overall English proficiency, student motivation, self-efficacy, and engagement, could provide a more holistic understanding of their impact. Finally, investigating technology-enhanced feedback solutions in resource-limited settings may offer insights into scalable, efficient, and sustainable approaches for enhancing writing outcomes in large classes.
Thus, this study provides compelling evidence that formative assessment strategies significantly enhance EFL learners’ writing accuracy in tertiary Ethiopian classrooms. By combining teacher feedback, peer review, self-assessment, and scaffolded writing support, students can improve grammar, punctuation, spelling, and sentence structure while gaining confidence and motivation. While contextual factors such as class size, resources, and cultural expectations influence implementation, thoughtful planning and institutional support can maximize the impact of formative assessment. The study offers clear pedagogical guidance for instructors, emphasizes the theoretical relevance of sociocultural scaffolding, and provides a roadmap for future research. Implementing formative assessment strategies thoughtfully can transform EFL writing instruction, fostering learners who are accurate, reflective, and engaged in their own development as writers.
Acknowledgements
I would like to thank Mizan-Tepi University for assisting in data collection and in conducting this crucial study. I also appreciate all respondents’ thoughtful and prompt responses during data collection. Finally, I would like to express my gratitude to all who directly and indirectly contributed to the success of the current study.
Authors’ contributions
T.T. conceived and designed the study, analyzed and interpreted the results, and drafted the manuscript. The author read and approved the final manuscript.
Funding
This research did not receive any financial support from public, commercial, or not-for-profit funding agencies.
Data availability
For reasons of sensitivity and the protection of anonymity, the data that support the findings of this study are available from the corresponding author upon reasonable request.
Declarations
Ethics approval and consent to participate
This study was conducted in accordance with internationally recognized ethical standards for research involving human participants. Ethical clearance was obtained from the Institutional Review Board (IRB) of Mizan Tepi University (Reference No: MTU/IRB/546/2024). All participants were informed about the purpose and procedures of the study, and written consent was obtained prior to participation. They were assured of confidentiality, voluntary participation, and the right to withdraw at any stage without consequences.
Informed consent was obtained from all participants included in the study.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Abbreviations
DA: Dynamic Assessment
EFL: English as a Foreign Language
SPSS: Statistical Package for the Social Sciences
ANOVA: Analysis of Variance
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
Abdullahi, D. M. An appraisal of the efficacy of formative assessment in English language teaching setting in Northern Eastern Nigeria. Journal of Contemporary Education Research; 2024; 5(8). https://hummingbirdjournals.com/jcer/article/view/205.
Barasa, D. Demystifying the discourse: Techniques for effective academic writing. Journal of Research and Academic Writing; 2024; 1.
Beck, DE. Performance-based assessment: Using pre-established criteria and continuous feedback to enhance a student’s ability to perform practice tasks. Journal of Pharmacy Practice; 2000; 13.
Bhat, BA; Bhat, GJ. Formative and summative evaluation techniques for the improvement of the learning process. European Journal of Business & Social Sciences; 2019; 7.
Braun, V; Clarke, V. Using thematic analysis in psychology. Qualitative Research in Psychology; 2006; 3.
Broadbent, J; Panadero, E; Boud, D. Implementing summative assessment with a formative flavor: A case study in a large class. Assessment & Evaluation in Higher Education; 2018; 43.
Bruton, A. Improving accuracy is not the only reason for writing, and even if it were…. System; 2009; 37.
Burner, T. Formative assessment of writing in English as a foreign language. Scandinavian Journal of Educational Research; 2016; 60.
Chandler, J. The efficacy of various kinds of error feedback for improvement in the accuracy and fluency of L2 student writing. Journal of Second Language Writing; 2003; 12.
Clark, I. Formative assessment: Assessment is for self-regulated learning. Educational Psychology Review; 2012; 24, pp. 205-249. [DOI: https://dx.doi.org/10.1007/s10648-011-9191-6]
Creswell, J. W., & Plano Clark, V. L. (2007). Designing and conducting mixed methods research. SAGE.
Crossley, SA; McNamara, DS. Does writing development equal writing quality? A computational investigation of syntactic complexity in L2 learners. Journal of Second Language Writing; 2014; 26, pp. 66-79. [DOI: https://dx.doi.org/10.1016/j.jslw.2014.09.006]
Cumming, A. Language assessment in education: Tests, curricula, and teaching. Annual Review of Applied Linguistics; 2009; 29, pp. 90-100. [DOI: https://dx.doi.org/10.1017/S0267190509090084]
Dmitrenko, N; Budas, I; Koliadych, Y; Poliarush, N. Impact of formative assessment on students’ motivation in foreign language acquisition. East European Journal of Psycholinguistics; 2021; 8.
Dolin, J., Black, P., Harlen, W., & Tiberghien, A. (2017). Exploring relations between formative and summative assessment. Transforming assessment: Through an interplay between practice, research and policy (pp. 53–80). Springer International Publishing.
Ellis, R; Sheen, Y; Murakami, M; Takashima, H. The effects of focused and unfocused written corrective feedback in an English as a foreign language context. System; 2008; 36.
Fathi, J; Afzali, M; Parsa, K. Self-assessment and peer-assessment in the EFL context: An investigation of writing performance and writing self-efficacy. Critical Literary Studies; 2021; 3.
Ghabool, N; Mariadass, ME; Kashef, SH. Investigating Malaysian ESL students’ writing problems on conventions, punctuation, and language use at the secondary school level. Journal of Studies in Education; 2012; 2.
Harklau, L. The role of writing in the classroom in second language acquisition. Journal of Second Language Writing; 2002; 11.
Hatziapostolou, T; Paraskakis, I. Enhancing the impact of formative feedback on student learning through an online feedback system. Electronic Journal of E-Learning; 2010; 8.
Van der Kleij, FM; Vermeulen, JA; Schildkamp, K; Eggen, TJ. Integrating data-based decision making, assessment for learning, and diagnostic testing in formative assessment. Assessment in Education: Principles, Policy & Practice; 2015; 22.
Knight, PT. Summative assessment in higher education: Practices in disarray. Studies in Higher Education; 2002; 27.
Lantolf, J. P., Thorne, S. L., & Poehner, M. E. (2014). Sociocultural theory and second language development. In J. Herschensohn & M. Young-Scholten (Eds.), Theories in second language acquisition (pp. 221–240). Routledge.
Lee, EY; Chan, CK; Van Aalst, J. Students assessing their own collaborative knowledge building. International Journal of Computer-Supported Collaborative Learning; 2006; 1, pp. 57-87. [DOI: https://dx.doi.org/10.1007/s11412-006-6844-4]
Liu, Y. The effects of error feedback in second language writing. Journal of Second Language Acquisition and Teaching; 2008; 15, pp. 65-79.
Moges, B. The implementations and challenges of assessment practices for students’ learning in public selected universities, Ethiopia. Universal Journal of Educational Research; 2018; 6.
Mohamadi, Z. Comparative effect of online summative and formative assessment on EFL student writing ability. Studies in Educational Evaluation; 2018; 59, pp. 29-40. [DOI: https://dx.doi.org/10.1016/j.stueduc.2018.02.003]
Oli, I. K. (2024). Teachers’ Understanding and Practices of Continuous Assessment System in ELT Class (Doctoral dissertation, Kathmandu University School of Education).
Parr, JM; Timperley, HS. Feedback on writing, assessment for teaching and learning, and student progress. Assessing Writing; 2010; 15.
Patra, I; Alazemi, A; Al-Jamal, D; Gheisari, A. The effectiveness of teachers’ written and verbal corrective feedback during formative assessment on male language learners’ academic anxiety, academic performance, and attitude toward learning. Language Testing in Asia; 2022; 12.
Purpura, JE. Second and foreign language assessment. The Modern Language Journal; 2016; 100.
Ranalli, J; Link, S; Chukharev-Hudilainen, E. Automated writing evaluation for formative assessment of second language writing: Investigating the accuracy and usefulness of feedback as part of argument-based validation. Educational Psychology; 2017; 37.
Rodrigues, F; Oliveira, P. A system for formative assessment and monitoring of students’ progress. Computers & Education; 2014; 76, pp. 30-41. [DOI: https://dx.doi.org/10.1016/j.compedu.2014.03.001]
Safari, F; Ahmadi, A. L2 learners’ challenges in integrated writing tasks: Implications for writing teachers and developers of diagnostic assessments. The Language Learning Journal; 2025; 53.
Sarhady, T. The effect of a product/process-oriented approach to teaching and learning writing skills on university student performances. International Journal of Language and Applied Linguistics; 2015; 1.
Shrestha, P. N. (2020). Dynamic assessment of students’ academic writing. Springer International Publishing.
Taye, T; Mengesha, M. Identifying and analyzing common English writing challenges among regular undergraduate students. Heliyon; 2024. [DOI: https://dx.doi.org/10.1016/j.heliyon.2024.e36876]
Teng, LS. Explicit strategy-based instruction in L2 writing contexts: A perspective of self-regulated learning and formative assessment. Assessing Writing; 2022; 53, 100645. [DOI: https://dx.doi.org/10.1016/j.asw.2022.100645]
Trujillo, JM. Understanding who you are and how you work: The role of self-assessment. Currents in Pharmacy Teaching and Learning; 2009; 1.
Usó-Juan, E; Martínez-Flor, A; Palmer-Silveira, JC. Towards acquiring communicative competence through writing. Current Trends in the Development and Teaching of the Four Language Skills; 2006; 29, pp. 383-400.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes (Vol. 86). Harvard University Press.
Wafubwa, R. Role of formative assessment in improving students’ motivation: A systematic review of literature. The International Journal of Assessment and Evaluation; 2020; 28.
Xiao, Y; Yang, M. Formative assessment and self-regulated learning: How formative assessment supports students’ self-regulation in English language learning. System; 2019; 81, pp. 39-49. [DOI: https://dx.doi.org/10.1016/j.system.2019.01.004]
Yan, Z; King, RB; Haw, JY. Formative assessment, growth mindset, and achievement: Examining their relations in the East and the West. Assessment in Education: Principles, Policy & Practice; 2021; 28.
Yu, S; Zheng, Y; Jiang, L; Liu, C; Xu, Y. “I even feel annoyed and angry”: Teacher emotional experiences in giving feedback on student writing. Assessing Writing; 2021; 48, 100528. [DOI: https://dx.doi.org/10.1016/j.asw.2021.100528]
Zhang, Y. A study on the influence of peer evaluation on the accuracy of English writing grammar. Contemporary Education Frontiers; 2024; 2.
© The Author(s) 2025. This work is published under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (http://creativecommons.org/licenses/by-nc-nd/4.0/).