Abstract
Recent advancements in educational technology have enabled teachers to use learning analytics (LA) and flipped classrooms. The present study investigated the impact of an LA-based feedback system on students’ academic achievement and self-regulated learning (SRL) in a flipped learning (FL) environment. The study used a pretest-posttest control group quasi-experimental design with 71 pre-service teachers in the experimental group and 56 pre-service teachers in the control group, both enrolled in an information technology course. The experimental group received LA-based feedback during a 4-week training program in the FL classroom, while the control group did not receive this feedback. Data were collected using an achievement test, an online SRL questionnaire, and a student opinion form. The study found that the students’ SRL and academic achievement were not significantly affected by the LA-based feedback system in FL classrooms. In contrast, according to the qualitative research findings, students claimed the LA-based feedback helped them learn because it allowed them to monitor their learning processes.
Learning Analytics and Personalised Feedback
In online learning environments, students’ interactions are typically recorded and stored, generating digital traces known as log data. These data can be mined and analysed to uncover patterns in learning behaviour, offering valuable insights into educational practice. This approach is referred to as learning analytics (LA). According to Siemens and Gašević (2012), learning analytics involves the measurement, collection, analysis, and reporting of data about learners and their contexts, aimed at understanding and optimising both learning and the environments where it takes place. Due to its potential to enhance teaching and learning outcomes, LA has drawn considerable interest from academics and practitioners (Kovanovic et al., 2021; Šereš et al., 2022). To better understand and support the learning process, there has been an increasing emphasis on analysing students’ online learning data (Kew & Tasir, 2021; Wong et al., 2022). Although still in its early stages of development, LA offers a promising method for comprehending, optimising, and enhancing the learning process (Klašnja-Milićević et al., 2020).
LA uses data gathered by educational tools and platforms to enable data-driven decisions that enhance student learning (Kovanovic et al., 2021). One of the main objectives of LA practice is to create learning experiences, such as offering personalised feedback, giving advice, and providing learning resources to meet students’ needs (Wong et al., 2022). LA systems offer various features, including learning suggestions, visualisations, reminders, grading, and self-assessment options (Klašnja-Milićević et al., 2020). LA empowers educators to monitor each student’s learning obstacles and the development of their success while offering customised feedback based on individual progress (Karaoglan Yilmaz & Yilmaz, 2020). As a result, LA has become increasingly prevalent in the production of proactive feedback in online and blended learning environments (Lu et al., 2017; Pardo et al., 2019; Sedrakyan et al., 2020; Uzir et al., 2020).
Personalised digital learning systems enable teachers to tailor their instruction to students’ individual needs and learner characteristics (Hwang et al., 2020). Feedback plays a critical role in personalised learning scenarios. Scholars argue that adaptive and personalised feedback has the potential to raise student academic achievement (Ustun et al., 2022), reflect students’ developmental and motivational needs (Koenka & Anderman, 2019), and empower them for self-regulated learning (SRL; Ouyang & Jiao, 2021; Ustun et al., 2022).
LA in Flipped Learning
Flipped learning (FL) is a student-centered approach where teachers’ lectures are moved to pre-class time, allowing for more in-class practice and discussion to enhance students’ deep learning and address their learning challenges (Bergmann & Sams, 2012). The most comprehensive definition comes from the Flipped Learning Network (2014):
Flipped learning is a pedagogical approach in which direct instruction moves from the group learning space to the individual learning space, and the resulting group space is transformed into a dynamic, interactive learning environment where the educator guides students as they apply concepts and engage creatively in the subject matter (“What is flipped learning?” section).
In FL environments, learners are accountable for their own learning and are able to customise it to fit their needs in terms of time, level, and pace (O’Flaherty & Phillips, 2015; Staker & Horn, 2012). Because the FL model requires learners to prepare for class by studying particular topics outside the classroom, success can be explained in part by frequent and regular access to course materials and outside content, and by how students navigate between out-of-class materials and in-class content (Davies et al., 2021; Montgomery et al., 2019). Identifying the factors influencing students’ academic achievement is one of the ways LA is used in the FL model (Algayres & Triantafyllou, 2020; Lin & Hwang, 2018). Moreover, LA makes it simpler to identify students at risk (Bayazit et al., 2022). The design of FL environments frequently incorporates LA to reveal students’ learning styles (Dooley & Makasis, 2020; Jovanović et al., 2017; Lin & Hwang, 2018; Rubio‐Fernández et al., 2019; Silva et al., 2018).
SRL, LA, and FL
According to Pintrich (2000), SRL is a proactive and beneficial process in which students actively set their learning objectives, make an effort to manage, regulate, and check on their cognition, motivation, and behaviour, and are constrained by their objectives and the contextual elements of their environment.
The link between SRL, improved performance, and desirable learning outcomes has been established (Schunk & Greene, 2017). It has been hypothesised that guiding students, who independently acquire the metacognitive, motivational, and behavioural attitudes needed to plan, monitor, and evaluate their own learning processes (Zimmerman, 2002), can promote the use of SRL strategies and thereby enhance learning outcomes (Guo, 2022).
SRL is particularly important in FL because students need to actively prepare before coming to class in order to benefit from face-to-face activities (Omarchevska et al., 2024). Learners can develop strategies to improve their SRL and academic achievement in FL (Silva et al., 2018; Ustun et al., 2022). LA is also critical to measuring SRL skills by tracking and archiving students’ strategies (You, 2015). It enables instructors to understand how students interact with learning tasks, tools, and materials in their academic endeavours (Tempelaar et al., 2024). More specifically, as Gašević et al. (2016) pointed out, LA enables instructors to identify the topics students struggle with and provide personalised instruction or process-oriented feedback accordingly. Therefore, LA has great potential to directly impact students’ SRL and academic achievement.
Participating in out-of-class activities to prepare for in-class activities in flipped classrooms is a challenging aspect of this model and is associated with students’ low self-regulation (Akçayır & Akçayır, 2018). Additionally, students complain about not receiving enough feedback out-of-class in the FL model (Birgili & Demir, 2022). Given the difficulty of participating in out-of-class activities in FL, students’ interaction behaviors can be assessed through LA obtained from the learning management system (LMS; Yang et al., 2021). Learning analytics dashboards (LAD) are successful in giving students feedback, but sometimes the complexity of LAD designs may be overwhelming for students (Ramaswami et al., 2023). Therefore, LA-based personalized feedback can be provided out-of-class in FL, and these factors constitute the rationale for this study.
Studies in this context have examined the impact of LA on student achievement and SRL in online learning environments (Çebi & Güyer, 2020; Kim et al., 2018; Li & Tsai, 2017; Lim et al., 2023). However, very few empirical studies have investigated the impact of LA on students’ SRL (Silva et al., 2018) or evaluated student achievement afterwards (Ustun et al., 2022). Moreover, although many LA studies highlight its potential to improve learning practices, there is little evidence that this potential has been successfully transferred to higher education practice (Šereš et al., 2022).
Based on the limited literature, we decided to investigate the impact of a flipped learning based learning analytics (FLLA) feedback system on students’ SRL and academic achievement.
There were three research questions:
- Are there significant differences between students in a traditional FL environment and those supported with FLLA-based feedback in terms of their academic achievement?
- Are there significant differences between students in a traditional FL environment and those supported with FLLA-based feedback in terms of their SRL?
- What are the pre-service teachers’ perspectives on the FLLA-based feedback?
Design of the Study
The research followed a quasi-experimental design. In the nonequivalent control-group design, a popular approach to quasi-experiments, an experimental group and a control group are selected without random assignment; both groups take a pretest, the experimental group receives a treatment, and then both groups take a posttest (Creswell, 2009).
In our study, we examined whether there was a difference between experimental and control groups formed without random assignment. The students in the control group received traditional FL training, while the students in the experimental group additionally received FLLA-based feedback. Before and after the 4-week training, pretests and posttests were administered to measure the students’ academic achievement and SRL. The academic achievement test was developed for this specific study, and the Online Self-Regulated Learning Questionnaire (OSLQ) was used to measure students’ SRL. The OSLQ was developed by Lan et al. (2004), shortened by Barnard et al. (2008), and adapted into Turkish by Kilis and Yıldırım (2018). The validity and reliability of these data collection tools were tested, and both were found to be valid and reliable. Table 1 shows the study design.
Table 1
Design of the Research Model
| Group | Assignment | Pretest | Treatment | Posttest |
| Experimental | R | Achievement test; OSLQ | FLLA-based feedback | Achievement test; OSLQ; student opinion form |
| Control | R | Achievement test; OSLQ | FL | Achievement test; OSLQ |
Note. R = random selection of intact clusters (see Data Analysis); OSLQ = Online Self-Regulated Learning Questionnaire; FLLA = flipped learning based learning analytics; FL = flipped learning.
Content and the Procedure
The necessary learning environments for the experimental and control groups were created to conduct the training. A total of 3 hours of instruction were given in the FL environment to both groups, including 1 hour of face-to-face instruction and 2 hours of online instruction. Before the experimental process began, students in both groups received training on how to use the LMS (Moodle) and the FL model, and guidance on the roles and responsibilities in this model. Table 2 provides a general description of the activities in this FL environment.
Table 2
Weekly Activities in the FL Environment
| Week | Topic | At-home activities | In-class activities |
| 1 | Introduction; pretest | Readiness training; OSLQ; achievement test | |
| 2 | What is Microsoft Excel? Table data entry; formatting | Weeks 2–5: documents (PDF); Microsoft Excel videos | Weeks 2–5: quizzes; Kahoot games; in-class applications |
| 3 | Data formatting; adding a chart | | |
| 4 | Formula usage; functions | | |
| 5 | Formulas and functions; grade calculation | | |
| 6 | Posttest | OSLQ; achievement test; student opinion form | |
Note. OSLQ = Online Self-Regulated Learning Questionnaire.
The various learning activities shown in Table 2 (resource document, video, quiz, classroom practice) were created within the parameters of the study.
For the experimental group trained with FL, an instructional design based on LA was created. Learners were supported with LA feedback. Figure 1 shows an overview of the learning activities of the experimental and control groups.
Figure 1
Features of the Learning Environments for the Experimental and Control Groups
[Image omitted]
Support/Intervention Strategies With LA Feedback
Providing one-to-one advice in LA practice (in areas such as learning paths, resources, feedback, and workspaces) gives learners control of their learning processes and progress (Wong et al., 2022). The necessity of including individualised feedback in LA is consistently suggested and applied in research (Guo, 2022; Ustun et al., 2022). In this study, the instructors monitored each experimental group student’s participation and sent an Individual Learning Analytics Feedback Report by e-mail at the end of each week. They also reminded students during class time to check their e-mail and review the reports. Examples from some of the feedback reports, in the original Turkish, are shown in Figure 2.
Figure 2
Samples of the Individual Learning Analytics Feedback Report
[Image omitted]
Note. Panel A: Analysis of the students’ individual data of general performance and engagement. Panel B: Analysis of LMS/Moodle interaction and in-class applications (Microsoft Excel). Panel C: Analysis of in-class activities (Quiz and Kahoot). Panel D: Personalised textual feedback based on recent performance and engagement. All feedback was given in Turkish. Data shown in the panels is anonymous.
The Individual Learning Analytics Feedback Reports featured analyses of student performance and engagement with Kahoot, in-class practice, quizzes, Moodle interaction counts and times, and a general evaluation. There were three types of feedback, illustrated in the sketch that follows the list:
- Statistical feedback from an analysis of the student’s individual data of performance and engagement.
- Feedback based on descriptive statistics and graphs from analysis of individual and class data to facilitate students’ understanding and help them compare their online activities with those of their classmates.
- Textual feedback including sending each student a personalised message to assess their recent performance.
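To make the three feedback types concrete, the following is a minimal sketch of how such a report could be assembled from weekly engagement data. All names here (columns, thresholds, message text) are illustrative assumptions, not the authors’ actual reporting pipeline, which was prepared manually by the instructors.

```python
# Illustrative sketch: assembling the three feedback types from a weekly
# engagement table (rows = students, columns = engagement metrics).
import pandas as pd

def build_feedback(week: pd.DataFrame, student_id: str) -> str:
    row = week.loc[student_id]
    class_mean = week.mean(numeric_only=True)
    lines = []

    # 1. Statistical feedback: the student's own numbers for the week
    lines.append(f"Quiz: {row['quiz']}, Kahoot: {row['kahoot']}, "
                 f"Moodle accesses: {row['moodle_views']}")

    # 2. Comparative feedback: the student's values vs. class averages
    for metric in ["quiz", "kahoot", "moodle_views"]:
        lines.append(f"{metric}: you {row[metric]} vs. class average "
                     f"{class_mean[metric]:.1f}")

    # 3. Textual feedback: a personalised message on recent performance
    if row["quiz"] >= class_mean["quiz"]:
        lines.append("Your quiz performance is at or above the class "
                     "average; keep it up!")
    else:
        lines.append("Your quiz score is below the class average; "
                     "revisiting this week's videos may help.")
    return "\n".join(lines)
```

In the study itself, the instructors composed and e-mailed these reports by hand; automating this step is discussed in the Conclusion.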
Participants
The 127 participants of the study were undergraduate students who took the Information Technologies course in the fall semester of the 2021–2022 academic year at a foundation university in Ankara, Turkey. All participants were pre-service teachers: first-year students in the Faculty of Education. Initially, there were 130 participants, but three were excluded because they did not participate in the survey and process evaluation. There were 56 participants (52 females and 4 males) in the control group and 71 (61 females and 10 males) in the experimental group.
Data Collection Tools
LMS Interaction Analytics
Students in the experimental group received the Individual Learning Analytics Feedback Report for 4 weeks. Data were obtained from the LMS (Moodle) and analysed. For out-of-class interaction behaviours, each student’s frequency of access to the course, course resources, and activities (videos, documents, and Microsoft Excel worksheets) was examined. For interaction behaviours during face-to-face instruction, in-class applications (score and completion status), a mini-exam (score and completion time), and a Kahoot activity score were examined. Class averages of these interaction behaviours were also calculated.
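As a hedged illustration, per-student access counts of the kind described above could be aggregated from a standard Moodle log export as sketched below. The column names follow Moodle’s typical log export format, but the file name and the component filter are assumptions, not details reported in the study.

```python
# Illustrative sketch: aggregating a Moodle log export into per-student
# access counts and class averages.
import pandas as pd

logs = pd.read_csv("moodle_log_export.csv")   # hypothetical export file

# Keep only course-content events (e.g., documents, pages, links)
content = logs[logs["Component"].isin(["File", "Page", "URL"])]

# Frequency of access per student and per resource type
access_counts = (content
                 .groupby(["User full name", "Component"])
                 .size()
                 .unstack(fill_value=0))

# Class averages used for the comparative feedback
class_avg = access_counts.mean()
print(access_counts.head(), class_avg, sep="\n\n")
```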
Achievement Test
The number of achievement test items per topic was determined according to the time allocated to each topic in the course, and 35 multiple-choice questions were created. A measurement and evaluation expert as well as three subject-matter specialists evaluated these questions to ensure content validity (Sireci & Faulkner-Bond, 2014). Based on the experts’ feedback, three items were rephrased and five items were eliminated. Additionally, in accordance with the measurement and evaluation expert’s feedback, items testing the same subject were grouped together and listed sequentially.
In the end, the resulting achievement test of 30 multiple-choice questions was administered, separately from the study groups, to 83 students enrolled in the Information Technologies course, and item analysis was performed. For the item analysis, student scores were ranked from high to low, and two groups were identified based on the rankings: the lower 27% and the upper 27%. Item difficulty and discrimination indices were then computed from these lower and upper groups. The analysis yielded a 30-item test with difficulty indices ranging from 0.34 to 0.98 and discrimination indices ranging from 0.29 to 0.84. The item with the lowest discrimination index (0.29) was retained in the test after revision. The Kuder-Richardson Formula 20 (KR-20) was applied to measure the internal reliability of the test, and the reliability coefficient was 0.76. As this was within the acceptable range (Kline, 2013), the test was deemed to have sufficient internal reliability. The test was administered to the students as a pretest before the intervention and as a posttest after the intervention.
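For readers who wish to replicate the item analysis, the following is a minimal sketch of the 27% upper/lower group indices and the KR-20 coefficient described above. It assumes dichotomously scored (0/1) responses in a pandas DataFrame; names are illustrative, not the authors’ actual code.

```python
# Illustrative sketch: classical item analysis and KR-20 reliability.
import pandas as pd

def item_analysis(responses: pd.DataFrame) -> pd.DataFrame:
    totals = responses.sum(axis=1)
    n = int(len(responses) * 0.27)            # size of the 27% groups
    ranked = responses.loc[totals.sort_values(ascending=False).index]
    upper, lower = ranked.iloc[:n], ranked.iloc[-n:]
    p = responses.mean()                       # difficulty: proportion correct
    d = upper.mean() - lower.mean()            # discrimination: p_upper - p_lower
    return pd.DataFrame({"difficulty": p, "discrimination": d})

def kr20(responses: pd.DataFrame) -> float:
    k = responses.shape[1]                     # number of items
    p = responses.mean()
    q = 1 - p
    var_total = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - (p * q).sum() / var_total)
```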
Online Self-Regulated Learning Questionnaire (OSLQ)
The OSLQ was used to gather data on learners’ perceptions of SRL in the online learning environment. Lan et al. (2004) originally created the OSLQ, which Barnard et al. (2008) later condensed to 24 Likert-type items. The questionnaire was adapted into Turkish by Kilis and Yıldırım (2018) and validated with data from 321 students. Cronbach’s alpha coefficients for the scale’s reliability ranged from .67 to .87 across the subdimensions, with .95 for the entire scale. Goal setting, environment structuring, task strategies, time management, help-seeking, and self-evaluation are the six subdimensions of the 5-point Likert-type scale.
In this study, the OSLQ factor structure was tested with confirmatory factor analysis (CFA) by administering the questionnaire to 130 students. For construct validity, fit indices were computed for the six-factor model, which matches the original scale. The chi-square value, accepted as the initial fit index, was examined first and found to be significant (χ2 = 448.31, df = 237, p = .00). The χ2/df ratio was 1.89, normed fit index (NFI) = 0.92, non-normed fit index (NNFI) = 0.95, comparative fit index (CFI) = 0.78, goodness-of-fit index (GFI) = 0.78, adjusted goodness-of-fit index (AGFI) = 0.72, standardised root mean square residual (SRMR) = 0.07, and root mean square error of approximation (RMSEA) = .05. Based on the fit indices, the observed values were mostly within acceptable ranges (Schermelleh-Engel et al., 2003). The GFI ranges from zero to one and is affected by sample size, with larger samples yielding more appropriate values (Schumacker & Lomax, 2004). The factor loadings of the items were investigated using the scale’s path diagram and ranged from 0.35 (item 20) to 0.83 (item 18).
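As an illustration, a six-factor CFA of this kind could be reproduced in Python with the third-party semopy package (assumed installed). The item-to-factor assignment below is an assumption for illustration, not the questionnaire’s actual item numbering.

```python
# Hedged sketch: six-factor CFA of 24 items with semopy.
import pandas as pd
import semopy

model_desc = """
goal_setting =~ i1 + i2 + i3 + i4 + i5
environment_structuring =~ i6 + i7 + i8 + i9
task_strategies =~ i10 + i11 + i12 + i13
time_management =~ i14 + i15 + i16
help_seeking =~ i17 + i18 + i19 + i20
self_evaluation =~ i21 + i22 + i23 + i24
"""

data = pd.read_csv("oslq_responses.csv")   # hypothetical file, columns i1..i24
model = semopy.Model(model_desc)
model.fit(data)

# Reports chi-square, CFI, GFI, AGFI, NFI, TLI (NNFI), RMSEA, and more
print(semopy.calc_stats(model).T)
```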
Student Opinion Form
After the experimental research process, student opinions were collected from the experimental group. A semi-structured opinion form was employed so that additional data could be gathered depending on the nature of the responses. Opinions and suggestions regarding the effectiveness of the weekly feedback reports were solicited via the form. Sample questions were: How did the FLLA-based feedback contribute to your learning? Were there any disadvantages? A total of 56 pre-service teachers in the experimental group answered these questions.
Data Analysis
In this study, cluster sampling, a probability sampling method, was used: the population was divided into clusters, and clusters were selected randomly using simple random sampling to form the experimental and control groups (Alvi, 2016). To ascertain whether there was a significant difference between the groups in the academic achievement and SRL subdimension pretest scores, an independent-samples t-test was conducted.
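A minimal sketch of this pretest comparison is shown below; it assumes the scores sit in a CSV with a group column, and all file and column names are illustrative.

```python
# Illustrative sketch: independent-samples t-test on pretest scores.
import pandas as pd
from scipy import stats

df = pd.read_csv("pretest_scores.csv")          # hypothetical file
exp = df.loc[df["group"] == "experimental", "achievement"]
ctl = df.loc[df["group"] == "control", "achievement"]

t, p = stats.ttest_ind(exp, ctl)                # equal variances assumed
print(f"t({len(exp) + len(ctl) - 2}) = {t:.2f}, p = {p:.3f}")
```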
Table 3
Independent Samples t-Test Results for Achievement and SRL Pretest Scores of the Experimental and Control Groups
| Scale | Group | x̄ | SD | df | t | p |
| Academic achievement | Experimental | 14.22 | 3.81 | 125 | -1.21 | .23 |
| | Control | 13.21 | 4.99 | | | |
| Goal setting | Experimental | 3.84 | 0.59 | 125 | 0.50 | .62 |
| | Control | 3.89 | 0.54 | | | |
| Environment structuring | Experimental | 4.30 | 0.55 | 125 | 0.73 | .46 |
| | Control | 4.37 | 0.54 | | | |
| Task strategies | Experimental | 3.51 | 0.72 | 125 | -0.56 | .57 |
| | Control | 3.44 | 0.65 | | | |
| Time management | Experimental | 3.44 | 0.80 | 125 | 0.25 | .80 |
| | Control | 3.47 | 0.69 | | | |
| Help-seeking | Experimental | 3.86 | 0.60 | 125 | -0.73 | .47 |
| | Control | 3.78 | 0.73 | | | |
| Self-evaluation | Experimental | 3.75 | 0.65 | 125 | -0.13 | .90 |
| | Control | 3.73 | 0.60 | | | |
Note. Experimental group n = 71. Control group n = 56.
Table 3 shows that the differences in the achievement and SRL pretest mean scores of the experimental and control groups were not significant at the .05 level. This result indicates that the groups were comparable at the start of the experiment.
A descriptive analysis of the distribution of the pretest and posttest scores of the research’s dependent variables, by group, was carried out before the analyses were selected. The normality of the data was assessed using skewness and kurtosis coefficients as well as graphical evaluations. According to George and Mallery (2019), a distribution with kurtosis and skewness values between -2 and +2 can be considered normal. Analysis of covariance (ANCOVA) was used to assess the significance of the difference in posttest scores between the groups; experimental studies frequently employ one-way ANCOVA with pretest scores controlled as a covariate (Büyüköztürk, 2021). Before performing the ANCOVA, it was verified that the variance of the dependent variable was equal across groups, that the covariate had a linear relationship with the dependent variable in all groups, and that the slopes of the regression lines predicting the dependent variable from the covariate were equal across groups.
The pretest and posttest skewness values of the academic achievement variable ranged between -0.69 and -0.33 (SE = 0.21), and the kurtosis values ranged between 0.38 and 1.10 (SE = 0.42). Examining the ANCOVA assumptions revealed that Levene’s test indicated equal posttest score variances (F = 0.529, p = .468), and the scatter plot displayed linear relationships. Additionally, the slopes of the regression lines predicting academic achievement posttest scores from pretest scores did not differ by group (F = 0.218, p = .641).
Skewness and kurtosis were also evaluated for the OSLQ subdimensions of goal setting, environment structuring, task strategies, time management, help-seeking, and self-evaluation. Data from three participants with extreme values, based on z-scores, were deleted. The skewness values of the subdimensions in the pretest and posttest varied between -0.79 and 0.15 (SE = 0.21), and the kurtosis values ranged from 0.26 to 2.00 (SE = 0.42). The assumptions were tested for all scale subdimensions prior to ANCOVA. Levene’s test indicated equal variances for the posttest scores of the six OSLQ subdimensions; respectively, the values were F = 1.074, p = .302; F = 0.945, p = .333; F = 0.921, p = .339; F = 1.075, p = .302; F = 2.866, p = .093; and F = 0.556, p = .457. The scatter plots revealed linear relationships. In addition, the slopes of the regression lines predicting posttest scores from pretest scores were equal across groups; for each subdimension respectively, the values were F = 0.439, p = .509; F = 2.133, p = .147; F = 0.973, p = .326; F = 0.073, p = .787; F = 0.737, p = .392; and F = 0.023, p = .879. These analyses showed that the academic achievement test and the subdimensions of the online SRL scale met the ANCOVA assumptions.
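The assumption checks and the ANCOVA itself could be carried out as in the hedged sketch below, using pandas, scipy, and statsmodels; file and column names are illustrative assumptions, not the study’s actual data files.

```python
# Hedged sketch: normality screen, ANCOVA assumption checks, and ANCOVA.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("achievement_scores.csv")      # hypothetical file

# Normality screen: skewness and kurtosis should fall within -2..+2
print(df[["pretest", "posttest"]].skew())
print(df[["pretest", "posttest"]].kurt())

# Levene's test for homogeneity of posttest variances across groups
exp = df.loc[df["group"] == "experimental", "posttest"]
ctl = df.loc[df["group"] == "control", "posttest"]
print(stats.levene(exp, ctl))

# Homogeneity of regression slopes: the group x pretest interaction
# should be non-significant
slopes = smf.ols("posttest ~ group * pretest", data=df).fit()
print(anova_lm(slopes, typ=3))

# ANCOVA: group effect on the posttest, controlling for the pretest
ancova = smf.ols("posttest ~ group + pretest", data=df).fit()
print(anova_lm(ancova, typ=3))
```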
Results
Findings Related to Academic Achievement
To determine whether the academic achievement posttest scores differed between the groups receiving the different teaching strategies, an ANCOVA was carried out. Because the academic achievement pretest scores influence the posttest scores, they were used as a covariate, and the corrected means of the posttest scores were calculated first. The corrected academic achievement posttest mean of the experimental group was slightly higher than that of the control group. Results are shown in Table 4.
Table 4
Corrected Means of Academic Achievement Posttest Scores
| Group | n | x̄ | Corrected x̄ |
| Experimental | 71 | 22.96 | 22.90 |
| Control | 56 | 22.04 | 22.11 |
Table 5 provides the findings of the ANCOVA of the posttest scores. The academic achievement posttest scores of the students in the experimental and control groups did not differ significantly when the effect of the academic achievement pretest scores was controlled (F = 2.229, p = .138).
Table 5
Results of ANCOVA on Academic Achievement Posttest Scores
| Source | SS | df | MS | F | p |
| Pretest | 50.47 | 1 | 50.47 | 5.90 | .017 |
| Group | 19.06 | 1 | 19.06 | 2.23 | .138 |
| Error | 1,060.34 | 124 | 8.55 | ||
| Total | 65,724.00 | 127 |
Note. ANCOVA = analysis of covariance.
Findings Related to SRL
To control for the effect of the pretest scores on the SRL subdimensions, the corrected means of the posttest scores were calculated; these are given in Table 6. Taking the corrected means into consideration, the experimental group’s posttest scores on the self-regulation subdimensions were similar to those of the control group, with the exceptions of time management and help-seeking. The corrected mean scores of the experimental group for time management (x̄ = 3.75) and help-seeking (x̄ = 4.02) were higher than the control group’s scores on the same variables (x̄ = 3.65 and x̄ = 3.91, respectively).
Table 6
Corrected Means of SRL Posttest Scores
| Factor | Group | x̄ | Corrected x̄ |
| Goal setting | Experimental | 4.00 | 4.01 |
| Control | 4.04 | 4.03 | |
| Environment structuring | Experimental | 4.21 | 4.23 |
| Control | 4.30 | 4.28 | |
| Task strategies | Experimental | 3.69 | 3.68 |
| Control | 3.67 | 3.69 | |
| Time management | Experimental | 3.74 | 3.75 |
| Control | 3.66 | 3.65 | |
| Help-seeking | Experimental | 4.04 | 4.02 |
| Control | 3.89 | 3.91 | |
| Self-evaluation | Experimental | 3.94 | 3.94 |
| Control | 3.99 | 3.99 |
Note. Experimental group n = 71. Control group n = 56.
The SRL subdimensions of the corrected posttest mean scores for the experimental and control groups were compared using ANCOVA to determine whether there was a statistically significant difference. The analysis showed that there was no statistically significant difference between the experimental and control groups. Table 7 presents these findings.
Table 7
Results of ANCOVA on SRL Posttest Scores
| Factor | Source | SS | df | MS | F | p |
| Goal setting | Pretest | 45.236 | 1 | 45.236 | 22.947 | .000 |
| | Group | 0.01 | 1 | 0.01 | 0.03 | .863 |
| | Error | 33.018 | 124 | 0.27 | | |
| | Total | 2087.20 | 127 | | | |
| Environment structuring | Pretest | 9.988 | 1 | 9.988 | 32.438 | .000 |
| | Group | 0.09 | 1 | 0.09 | 0.31 | .579 |
| | Error | 38.182 | 124 | 0.31 | | |
| | Total | 2344.50 | 127 | | | |
| Task strategies | Pretest | 10.542 | 1 | 10.542 | 24.26 | .000 |
| | Group | 0.06 | 1 | 0.06 | 0.01 | .911 |
| | Error | 53.881 | 124 | 0.44 | | |
| | Total | 1787.188 | 127 | | | |
| Time management | Pretest | 8.495 | 1 | 8.495 | 17.667 | .000 |
| | Group | 0.27 | 1 | 0.27 | 0.56 | .454 |
| | Error | 59.623 | 124 | 0.48 | | |
| | Total | 1812.954 | 127 | | | |
| Help-seeking | Pretest | 8.554 | 1 | 8.554 | 23.014 | .000 |
| | Group | 0.365 | 1 | 0.37 | 0.98 | .324 |
| | Error | 46.09 | 124 | 0.37 | | |
| | Total | 2059.375 | 127 | | | |
| Self-evaluation | Pretest | 4.784 | 1 | 4.784 | 12.934 | .000 |
| | Group | 0.08 | 1 | 0.08 | 0.23 | .634 |
| | Error | 45.861 | 124 | 0.37 | | |
| | Total | 2046.875 | 127 | | | |
Note. ANCOVA = analysis of covariance.
Findings Related to Student Opinions
The opinions of the students in the experimental group were collected following the implementation of FLLA and were divided by content analysis into two themes: positive and negative. Table 8 lists the categories, sample statements, and their frequencies for students’ positive perceptions.
Table 8
Frequency of Positive Views on the Impact of Weekly Feedback Reports
| Category | Sample statement | f |
| Positive effects on learning | “It aided in identifying my knowledge gaps.” | 10 |
| | “By identifying my knowledge gaps and creating a study plan, it assisted me in resolving this shortcoming.” | 8 |
| | “It helped me learn better.” | 6 |
| Contentedness | Satisfied (general expressions: “good,” “positive”) | 20 |
| Providing motivation | Motivated (general expressions: “motivated,” “encouraged”) | 9 |
| | “It encouraged me to study for the upcoming week.” | 4 |
| | “I was motivated by realising my success and how well I performed compared to the rest of the class.” | 2 |
| | “It inspired me to work harder and conduct more research.” | 1 |
| | “It helped me to love the lesson. I began to like the lesson.” | 1 |
| Raising awareness about improvement | “It gave me the opportunity to monitor my progress.” | 3 |
| | “It made me aware of my improvement.” | 2 |
The FLLA-based feedback was generally deemed satisfactory by the students. Nevertheless, there were also negative remarks, though they were few in comparison to the positive ones. While two participants said, “I am upset to receive negative comments in the feedback,” another said, “It doesn't have any disadvantages, but it is frustrating to see my performance decline week by week. However, the instructor knows that this should motivate us rather than annoy us.” Only one student admitted, “I initially had trouble following the reports, but eventually I got used to it.”
Discussion
The purpose of this study was to investigate how students’ academic achievement and SRL are affected by a feedback system based on FLLA. Although the mean scores of the students in the experimental group were higher than those of the students in the control group, there was no statistically significant difference between them, as has been the case in other studies (Kim et al., 2016; Park & Jo, 2015). However, contrary to these findings, some studies (Aguilar et al., 2021; Roberts et al., 2017; Ustun et al., 2021) have shown that students who have access to an LAD perform better academically than students who do not.
The mean scores of goal setting, environment structuring, task strategies, and self-evaluation on the SRL scale were approximately equal in the experimental and control groups, while the experimental group’s mean time management and help-seeking scores were higher than the control group’s. However, when the corrected posttest scores on the SRL subdimensions were examined, no significant difference was identified between the experimental and control groups. This result suggests that the LA-based feedback system had no influence on students’ SRL. In a related study, Kim et al. (2016) stated that an LAD is interesting and impressive at first glance but is insufficient to motivate students to revisit it. However, Lu et al. (2017), Silva et al. (2018), and Lim et al. (2023) contended that using LA in FL can foster SRL by assisting students in identifying techniques that boost their academic achievement.
The findings revealed that the students who did not receive FLLA-based feedback engaged in learning activity patterns similar to those of students who did, which may help to explain why students in both groups achieved similar learning outcomes. There are several possible reasons why this might be the case.
For one, LA is directly related to student engagement in online courses, and engagement is the most significant predictor of achievement in FL (Polat et al., 2022). The fact that the frequency of students’ access to course elements was similar between the groups may explain the lack of a significant difference in their achievement. However, Doo and Park (2024) showed that, among students’ resource management strategies, time and space was one of the factors affecting success in FL, and that resource access was not related to success.
Another reason could be that the 4-week experimental period may not have been sufficient. Perhaps more significant results could be achieved if the training were provided for a longer period, for example, between 10 and 16 weeks (Fidan, 2023; Shen & Chang, 2023; Ustun et al., 2021). Indeed, Pardo et al. (2019) ran their experimental period over 3 years. However, in FL design there is no clear guidance on how long a course should last for the FL structure to be effective; in fact, there are studies in which FL environments as short as 4 or 6 weeks have had a positive effect on academic achievement (Karaca et al., 2024; Polat & Taslibeyaz, 2023).
Another factor that may have affected the results is student characteristics (Gašević et al., 2016). According to Kim et al. (2016), instead of providing the same feedback to all students, grouping students into clusters based on their learning characteristics before the training and intervening accordingly would be more effective. As in other studies (Karaoglan Yilmaz, 2022; Ustun et al., 2021), while FLLA appears to increase learner motivation because it allows students to control their own learning processes, it may also raise students’ stress levels by making them aware of their failures (Ustun et al., 2021). Therefore, the frequency and timing of LA-based reports, along with guiding advice and suggestions, should be designed according to student preferences, as this positively affects student motivation and participation in the course (Sedrakyan et al., 2020; Wang & Han, 2021).
Conclusion
This study was conducted to determine the effect of FLLA-based feedback on academic achievement and SRL. Within the scope of the study, the in-class and out-of-class participation of the students in the experimental group was monitored, and weekly individual feedback reports with the LA statistics obtained from the LMS were sent to them. Although the average scores of the students in the experimental group were higher than those of the control group, no significant difference was found in academic achievement or SRL. This suggests that the FLLA-based feedback system had no effect on students’ achievement and SRL. In contrast, according to the qualitative research findings, students claimed the feedback helped them because it allowed them to monitor their own learning progress.
This study presents several limitations. First of all, the data only included information from one particular course at one university. Future research should be done to confirm that student subpopulations and learning patterns are valid in other contexts.
The interaction behaviours examined in the individualised feedback reports in this study represent many facets of LA. However, additional interaction behaviours could also be investigated, such as submitting homework, taking part in a live lesson, participating in a discussion, and answering questions.
Four weeks may not have been sufficient for the experimental process; if the training were provided for a longer period, perhaps more conclusive results could be attained. In future studies, the effect on achievement of variables related to both student-system interaction and where and when students access pre-class activities could be taken into consideration.
Finally, there were probably other variables to consider. The homogeneous assignment of students from various programmes to the experimental and control groups, the use of the same course materials and instructor, and the students’ perspectives on the information technology course as controlling factors could all result in different interpretations. Importantly, in this study, the lecturers developed the personalised feedback reports manually from the LMS and in-class activities. Software integrated into the LMS or another learning platform that reports LA indicators and automatically sends them to students on a regular basis could allow for a more systematic process and eliminate the potential flaws of manually prepared reports.
Aguilar, S. J., Karabenick, S. A., Teasley, S. D., & Baek, C. (2021). Associations between learning analytics dashboard exposure and motivation and self-regulated learning. Computers & Education, 162, Article 104085. https://doi.org/10.1016/j.compedu.2020.104085
Akçayır, G., & Akçayır, M. (2018). The flipped classroom: A review of its advantages and challenges. Computers & Education, 126, 334-345. https://doi.org/10.1016/j.compedu.2018.07.021
Algayres, M. G., & Triantafyllou, E. (2020). Learning analytics in flipped classrooms: A scoping review. Electronic Journal of e-Learning, 18(5), 397-409. https://doi.org/10.34190/JEL.18.5.003
Alvi, M. (2016). A manual for selecting sampling techniques in research (MPRA Paper No. 70218). Munich Personal RePEc Archive. https://mpra.ub.uni-muenchen.de/70218/1/MPRA_paper_70218.pdf
Barnard, L., Paton, V., & Lan, W. (2008). Online self-regulatory learning behaviors as a mediator in the relationship between online course perceptions with achievement. The International Review of Research in Open and Distributed Learning, 9(2). https://doi.org/10.19173/irrodl.v9i2.516
Bayazit, A., Apaydin, N., & Gonullu, I. (2022). Predicting at-risk students in an online flipped anatomy course using learning analytics. Education Sciences, 12(9), Article 581. https://doi.org/10.3390/educsci12090581
Bergmann, J., & Sams, A. (2012). Flip your classroom: Reach every student in every class every day. International Society for Technology in Education.
Birgili, B., & Demir, Ö. (2022). An explanatory sequential mixed-method research on the full-scale implementation of flipped learning in the first years of the world’s first fully flipped university: Departmental differences. Computers & Education, 176, Article 104352. https://doi.org/10.1016/j.compedu.2021.104352
Büyüköztürk, Ş. (2021). Handbook of data analysis for social sciences (29th ed.). Pegem.
Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches. Sage.
Çebi, A., & Güyer, T. (2020). Students’ interaction patterns in different online learning activities and their relationship with motivation, self-regulated learning strategy and learning performance. Education and Information Technologies, 25, 3975-3993. https://doi.org/10.1007/s10639-020-10151-1
Davies, R., Allen, G., Albrecht, C., Bakir, N., & Ball, N. (2021). Using educational data mining to identify and analyze student learning strategies in an online flipped classroom. Education Sciences, 11(11), Article 668. https://doi.org/10.3390/educsci11110668
Doo, M. Y., & Park, Y. (2024). Pre-class learning analytics in flipped classroom: Focusing on resource management strategy, procrastination and repetitive learning. Journal of Computer Assisted Learning, 40(3), 1231-1245. https://doi.org/10.1111/jcal.12946
Dooley, L., & Makasis, N. (2020). Understanding student behavior in a flipped classroom: Interpreting learning analytics data in the veterinary pre-clinical sciences. Education Sciences, 10(10), Article 260. https://doi.org/10.3390/educsci10100260
Fidan, M. (2023). The effects of the microlearning-supported flipped classroom on pre-service teachers’ learning performance, motivation and engagement. Education and Information Technologies, 28, 12687-12714. https://doi.org/10.1007/s10639-023-11639-2
Flipped Learning Network. (2014). The four pillars of F-L-I-P™. https://flippedlearning.org/definition-of-flipped-learning/
Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2016). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting academic success. The Internet and Higher Education, 28, 68-84. https://doi.org/10.1016/J.IHEDUC.2015.10.002
George, D., & Mallery, P. (2019). IBM SPSS statistics 26 step by step: A simple guide and reference. Routledge.
Guo, L. (2022). Using metacognitive prompts to enhance self‐regulated learning and learning outcomes: A meta‐analysis of experimental studies in computer‐based learning environments. Journal of Computer Assisted Learning, 38(3), 811-832. https://doi.org/10.1111/jcal.12650
Hwang, G.-J., Xie, H., Wah, B. W., & Gašević, D. (2020). Vision, challenges, roles and research issues of artificial intelligence in education. Computers and Education: Artificial Intelligence, 1, Article 100001. https://doi.org/10.1016/j.caeai.2020.100001
Jovanović, J., Gašević, D., Dawson, S., Pardo, A., & Mirriahi, N. (2017). Learning analytics to unveil learning strategies in a flipped classroom. The Internet and Higher Education, 33(4), 74-85. http://dx.doi.org/10.1016/j.iheduc.2017.02.001
Karaca O., Çınarcık, B. Ş., Aşık, A., Sağlam, C., Yiğit, Y., Hakverdi, G., Yetkiner, A. A., & Ersin, N. (2024). Impact of fully online flipped classroom on academic achievement in undergraduate dental education: An experimental study. European Journal of Dental Education, 28(1), 212-226. https://doi.org/10.1111/eje.12938
Karaoglan Yilmaz, F. G. (2022). Utilizing learning analytics to support students’ academic self-efficacy and problem-solving skills. The Asia-Pacific Education Researcher, 31, 175-191. https://doi.org/10.1007/s40299-020-00548-4
Karaoglan Yilmaz, F. G., & Yilmaz, R. (2020). Student opinions about personalized recommendation and feedback based on learning analytics. Technology, Knowledge and Learning, 25, 753-768. https://doi.org/10.1007/s10758-020-09460-8
Kew, S. N., & Tasir, Z. (2021). Learning analytics in online learning environment: A systematic review on the focuses and the types of student-related analytics data. Technology, Knowledge and Learning, 27, 405-427. https://doi.org/10.1007/s10758-021-09541-2
Kilis, S., & Yıldırım, Z. (2018). Investigation of community of inquiry framework in regard to self-regulation, metacognition and motivation. Computers & Education, 126, 53-64. https://doi.org/10.1016/j.compedu.2018.06.032
Kim, D., Yoon, M., Jo, I.-H., & Branch, R. M. (2018). Learning analytics to support self-regulated learning in asynchronous online courses: A case study at a women’s university in South Korea. Computers & Education, 127, 233-251. https://doi.org/10.1016/j.compedu.2018.08.023
Kim, J., Jo, I.-H., & Park, Y. (2016). Effects of learning analytics dashboard: analyzing the relations among dashboard utilization, satisfaction, and learning achievement. Asia Pacific Education Review, 17, 13-24. https://doi.org/10.1007/s12564-015-9403-8
Klašnja-Milićević, A., Ivanović, M., & Stantić, B. (2020). Designing personalized learning environments—The role of learning analytics. Vietnam Journal of Computer Science, 7(3), 231-250. https://dx.doi.org/10.1142/S219688882050013X
Kline, P. (2013). Handbook of psychological testing. Routledge.
Koenka, A. C., & Anderman, E. M. (2019). Personalized feedback as a strategy for improving motivation and performance among middle school students. Middle School Journal, 50(5), 15-22. https://doi.org/10.1080/00940771.2019.1674768
Kovanovic, V., Mazziotti, C., & Lodge, J. (2021). Learning analytics for primary and secondary schools. Journal of Learning Analytics, 8(2), 1-5. https://doi.org/10.18608/jla.2021.7543
Lan, W. Y., Bremer, R., Stevens, T., & Mullen, G. (2004, April 7-8). Self-regulated learning in the online environment [Paper presentation]. 2004 Annual Meeting of the American Educational Research Association, San Diego, CA, United States.
Li, L.-Y., & Tsai, C.-C. (2017). Accessing online learning material: Quantitative behavior patterns and their effects on motivation and learning performance. Computers & Education, 114, 286-297. http://dx.doi.org/10.1016/j.compedu.2017.07.007
Lim, L., Bannert, M., van der Graaf, J., Singh, S., Fan, Y., Surendrannair, S., Rakovic, M., Molenaar, I., Moore, J., & Gašević, D. (2023). Effects of real-time analytics-based personalized scaffolds on students’ self-regulated learning. Computers in Human Behavior, 139, Article 107547. https://doi.org/10.1016/j.chb.2022.107547
Lin, C.-J., & Hwang, G.-J. (2018). A learning analytics approach to investigating factors affecting EFL students’ oral performance in a flipped classroom. Journal of Educational Technology & Society, 21(2), 205-219. https://www.jstor.org/stable/26388398
Lu, O. H. T., Huang, J. C. H., Huang, A. Y. Q., & Yang, S. J. H. (2017). Applying learning analytics for improving students engagement and learning outcomes in an MOOCs enabled collaborative programming course. Interactive Learning Environments, 25(2), 220-234. https://doi.org/10.1080/10494820.2016.1278391
Montgomery, A. P., Mousavi, A., Carbonaro, M., Hayward, D. V., & Dunn, W. (2019). Using learning analytics to explore self‐regulated learning in flipped blended learning music teacher education. British Journal of Educational Technology, 50(1), 114-127. https://doi.org/10.1111/bjet.12590
O’Flaherty, J., & Phillips, C. (2015). The use of flipped classrooms in higher education: A scoping review. The Internet and Higher Education, 25, 85-95. https://doi.org/10.1016/j.iheduc.2015.02.002
Omarchevska, Y., van Leeuwen, A., & Mainhard, T. (2024, February 22). The flipped classroom: First-time student preparatory activity patterns and their relation to course performance and self-regulation. Journal of Computing in Higher Education. https://doi.org/10.1007/s12528-024-09399-0
Ouyang, F., & Jiao, P. (2021). Artificial intelligence in education: The three paradigms. Computers and Education: Artificial Intelligence, 2, Article 100020. https://doi.org/10.1016/j.caeai.2021.100020
Pardo, A., Jovanovic, J., Dawson, S., Gašević, D., & Mirriahi, N. (2019). Using learning analytics to scale the provision of personalised feedback. British Journal of Educational Technology, 50(1), 128-138. https://doi.org/10.1111/bjet.12592
Park, Y., & Jo, I.-H. (2015). Development of the learning analytics dashboard to support students’ learning performance. Journal of Universal Computer Science, 21(1), 110-133. https://doi.org/10.3217/jucs-021-01-0110
Pintrich, P. R. (2000). The role of goal orientation in self-regulated learning. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 451-502). Academic Press.
Polat, E., Hopcan, S., & Arslantaş, T. K. (2022). The association between flipped learning readiness, engagement, social anxiety, and achievement in online flipped classrooms: A structural equational modeling. Education and Information Technologies, 27, 11781-11806. https://doi.org/10.1007/s10639-022-11083-8
Polat, H., & Taslibeyaz, E. (2023). Examining interactive videos in an online flipped course context. Education and Information Technologies, 29, 5833-5856. https://doi.org/10.1007/s10639-023-12048-1
Ramaswami, G., Susnjak, T., & Mathrani, A. (2023). Effectiveness of a learning analytics dashboard for increasing student engagement levels. Journal of Learning Analytics, 10(3), 115-134. https://doi.org/10.18608/jla.2023.7935
Roberts, L. D., Howell, J. A., & Seaman, K. (2017). Give me a customizable dashboard: Personalized learning analytics dashboards in higher education. Technology, Knowledge and Learning, 22, 317-333. https://doi.org/10.1007/s10758-017-9316-1
Rubio‐Fernández, A., Muñoz‐Merino, P. J., & Delgado Kloos, C. (2019). A learning analytics tool for the support of the flipped classroom. Computer Applications in Engineering Education, 27(5), 1168-1185. https://doi.org/10.1002/cae.22144
Schermelleh-Engel, K., Moosbrugger, H., & Müller, H. (2003). Evaluating the fit of structural equation models: Tests of significance and descriptive goodness-of-fit measures. Methods of Psychological Research Online, 8(2), 23-74.
Schumacker, R. E., & Lomax, R. G. (2004). A beginner’s guide to structural equation modeling. Lawrence Erlbaum Associates.
Schunk, D. H., & Greene, J. A. (2017). Historical, contemporary, and future perspectives on self-regulated learning and performance. In D. H. Schunk & J. A. Greene (Eds.), Handbook of self-regulation of learning and performance (pp. 1-15). Routledge.
Sedrakyan, G., Malmberg, J., Verbert, K., Järvelä, S., & Kirschner, P. A. (2020). Linking learning behavior analytics and learning science concepts: Designing a learning analytics dashboard for feedback to support learning regulation. Computers in Human Behavior, 107, Article 105512. https://doi.org/10.1016/j.chb.2018.05.004
Šereš, L., Pavlićević, V., Petrović, G., Horvat, D., & Ivanišević, R. (2022). Learning analytics: Prospects and challenges. Strategic Management, 27(3), 48-55. https://doi.org/10.5937/StraMan2200020S
Shen, D., & Chang, C.-S. (2023). Implementation of the flipped classroom approach for promoting college students’ deeper learning. Educational Technology Research and Development, 71, 1323-1347. https://doi.org/10.1007/s11423-023-10186-4
Siemens, G., & Gašević, D. (2012). Special issue on learning and knowledge analytics. Educational Technology & Society, 15(3), 1-163.
Silva, J. C. S., Zambom, E., Rodrigues, R. L., Ramos, J. L. C., & de Souza, F. D. F. (2018). Effects of learning analytics on students’ self-regulated learning in flipped classroom. International Journal of Information and Communication Technology Education (IJICTE), 14(3), 91-107. https://doi.org/10.4018/IJICTE.2018070108
Sireci, S. G., & Faulkner-Bond, M. (2014). Validity evidence based on test content. Psicothema, 26(1), 100-107. https://doi.org/10.7334/psicothema2013.256
Staker, H., & Horn, M. B. (2012). Classifying K–12 blended learning. Innosight Institute. https://files.eric.ed.gov/fulltext/ED535180.pdf
Tempelaar, D., Bátori, A., & Giesbers, B. (2024). Understanding self-regulation strategies in problem-based learning through dispositional learning analytics. Frontiers in Education, 9, Article 1382771. https://doi.org/10.3389/feduc.2024.1382771
Ustun, A. B., Karaoglan Yilmaz, F. G., & Yilmaz, R. (2021). Investigating the role of accepting learning management system on students’ engagement and sense of community in blended learning. Education and Information Technologies, 26, 4751-4769. https://doi.org/10.1007/s10639-021-10500-8
Ustun, A. B., Zhang, K., Karaoglan-Yilmaz, F. G., & Yilmaz, R. (2022). Learning analytics-based feedback and recommendations in flipped classrooms: An experimental study in higher education. Journal of Research on Technology in Education, 55(5), 841-857. https://doi.org/10.1080/15391523.2022.2040401
Uzir, N. A., Gašević, D., Matcha, W., Jovanović, J., & Pardo, A. (2020). Analytics of time management strategies in a flipped classroom. Journal of Computer Assisted Learning, 36(1), 70-88. https://doi.org/10.1111/jcal.12392
Wang, D., & Han, H. (2021). Applying learning analytics dashboards based on process‐oriented feedback to improve students’ learning effectiveness. Journal of Computer Assisted Learning, 37(2), 487-499. https://doi.org/10.1111/jcal.12502
Wong, B. T.-M., Li, K. C., & Cheung, S. K. S. (2022). An analysis of learning analytics in personalised learning. Journal of Computing in Higher Education, 35, 371-390. https://doi.org/10.1007/s12528-022-09324-3
Yang, C. C. Y., Chen, I. Y. L., Akçapınar, G., Flanagan, B., & Ogata, H. (2021). Using a summarized lecture material recommendation system to enhance students’ preclass preparation in a flipped classroom. Educational Technology & Society, 24(2), 1-13. https://www.jstor.org/stable/27004927
You, J. W. (2015). Examining the effect of academic procrastination on achievement using LMS data in e-learning. Journal of Educational Technology & Society, 18(3), 64-74. https://www.jstor.org/stable/jeductechsoci.18.3.64
Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory Into Practice, 41(2), 64-70. https://doi.org/10.1207/s15430421tip4102_2
Emine Cabı, Department of Computer and Instructional Technologies Education, Faculty of Education, Başkent University, Ankara, Türkiye
Dr. Emine Cabı is an Associate Professor of Computer Education and Instructional Technology in the Faculty of Education, Başkent University. Dr. Cabı gained her Ph.D. in Educational Technology in July 2009. Her academic interest areas are distance learning, e-learning, instructional design, message design, and the use of technology in education. She has published journal articles in international indexes, book chapters, other national and international articles, and papers submitted to international meetings.
Hacer Türkoğlu, Department of Mathematics and Science Education, Faculty of Education, Başkent University, Ankara, Türkiye
Dr. Hacer Türkoğlu is an Assistant Professor of Mathematics and Science Education at Başkent University. Her main research interests are instructional design, instructional material development, distance education, educational technologies, learning management systems, social media, and constructivist approaches in online environments.