Abstract
The growing integration of generative artificial intelligence (GenAI) tools in higher education has the potential to transform learning experiences. However, empirical research comparing GenAI-supported learning with traditional instruction lags behind these developments. This study addresses this gap through a controlled experiment involving 96 undergraduate computer science students in a Database Management course. Participants experienced either GenAI-supported or traditional instruction while learning the same concept. Data were collected through questionnaires, quizzes, and interviews. Analyses were grounded in self-determination theory (SDT), which posits that effective learning environments support autonomy, competence, and relatedness. Quantitative findings revealed significantly more positive learning experiences with GenAI tools, particularly enhancing autonomy through personalized pacing and increased accessibility. Competence was also supported, reflected in shorter study times with no significant achievement differences between approaches. Students performed better on moderately difficult questions using GenAI, indicating that GenAI may bolster conceptual understanding. However, interviews with 11 participants revealed limitations in supporting relatedness. While students appreciated GenAI’s efficiency and availability, they preferred instructor-led sessions for emotional engagement and support with complex problems. This study contributes to the theoretical extension of SDT in technology-mediated learning contexts and offers practical guidance for optimal GenAI integration.
1. Introduction
The leaps made by artificial intelligence (AI), particularly in generative AI (GenAI), in recent years have begun to influence many sectors of society, including education. In higher education, the integration of GenAI tools represents nothing less than a transformation in how students engage with learning, particularly in self-regulated and hybrid instructional environments. GenAI tools such as ChatGPT and Claude offer unprecedented opportunities for personalized, on-demand learning support that challenge traditional instructional paradigms, forcing institutions to grapple with these evolving technological capabilities and their associated pedagogical demands (Adiguzel et al., 2023; ElSayary, 2024; Kurtz et al., 2024). However, academic institutions, faculty, and students are only beginning to understand and internalize the implications of these tools.
Recent studies have begun to explore the multifaceted impact of GenAI on student learning, revealing both promising opportunities and significant concerns. Studies highlight GenAI’s potential to support comprehension, provide instant feedback, and enhance metacognitive skill development (Chan & Hu, 2023; Usher & Amzalag, 2025). At the same time, they raise concerns about overreliance, reduced authenticity of learning, and the risk of passive consumption (Denny et al., 2024; Eke, 2023; Zviel-Girshin, 2024). A recent study by Kohen-Vacs et al. (2025), conducted in the context of programming education, further emphasized the tensions between students’ enthusiasm for GenAI tools and their difficulties in applying them to complex tasks such as debugging and evaluating AI-generated outputs. In light of this, the authors called on higher education instructors to adopt pedagogical models that combine learner autonomy with structured instructor support, an approach aligned with the blended design proposed in the present study. However, their study did not include a controlled comparison with traditional face-to-face instruction. Addressing this gap, the present study employs a controlled experimental design to compare GenAI-supported learning with instructor-led sessions in a real higher education lesson.
Against this background, we conducted a controlled experiment comparing GenAI-supported learning with traditional instructor-led sessions in a mandatory undergraduate Database Management course. Based on a sample of 96 students, we investigated the impact of exposure to GenAI-supported learning on students’ learning experiences, academic performance, and engagement patterns compared to traditional classroom instruction. In designing our research, we were guided by pedagogical frameworks that emphasize motivation, self-regulated learning, and self-efficacy as underpinnings of effective learning. In particular, the frameworks we drew on are inspired by self-determination theory (SDT; Ryan & Deci, 2020)—the idea that human behavior is motivated by three basic psychological needs: autonomy, competence, and relatedness. In SDT, autonomy refers to the learner’s perceived sense of volition and choice in the learning process; competence reflects a feeling of effectiveness and mastery when dealing with learning tasks; and relatedness captures a sense of social connection and belonging within the learning environment (Ryan & Deci, 2020; Niemiec & Ryan, 2009). Together, the degree to which these needs are met determines the extent to which learning experiences foster intrinsic motivation and sustained engagement.
Within this framework, the three constructs mentioned above—motivation (especially intrinsic motivation), self-regulated learning, and self-efficacy—serve as baseline variables through which the impact of GenAI-supported instruction can be explored. Self-regulated learning (SRL; Pintrich, 1991; Tekkol & Demirel, 2018) operationalizes how students plan, monitor, and evaluate their learning behaviors, while self-efficacy (Bandura, 1997) reflects learners’ belief in their ability to succeed in specific academic tasks. Intrinsic motivation, for its part, is the inner drive to complete a task because it is inherently satisfying, rather than to earn external rewards or avoid penalties. All three constructs are theoretically linked to SDT. Intrinsic motivation is inherently linked to autonomy and competence; self-regulated learning contributes to autonomy and competence by teaching learners how their personal decisions and choices (e.g., in time management or learning strategies) affect outcomes; and self-efficacy reinforces competence by fostering persistence and confidence.
In light of the frameworks and constructs mentioned above, we aimed to address two primary questions: (a) How do students’ learning experiences differ when using GenAI tools compared to traditional face-to-face instruction? (b) What differences in knowledge construction and academic performance might exist between these two approaches? Guided by these questions, we used both quantitative and qualitative data to glean insights into how the two learning modes might differentially help satisfy the needs for autonomy, competence, and relatedness posited by SDT.
Our findings reveal a nuanced picture of the educational impact of GenAI. Students in the GenAI group reported significantly more positive learning experiences, particularly regarding personalization, pace, and time efficiency. In addition, the GenAI group demonstrated superior performance on moderately difficult quiz questions, suggesting particular effectiveness for mid-level conceptual understanding. However, the qualitative interviews revealed that while students appreciated GenAI’s efficiency and accessibility, they valued instructor-led sessions for more profound and emotionally resonant learning experiences, as well as complex problem-solving support.
This research makes both theoretical and practical contributions to the emerging field of AI-enhanced education. Theoretically, it extends self-determination theory and self-regulated learning frameworks by demonstrating how GenAI tools can enhance autonomy and competence, while also revealing limitations in supporting relatedness and higher-order cognitive scaffolding. Practically, it offers evidence-based guidance for educators seeking to integrate GenAI tools strategically rather than as wholesale replacements for traditional instruction. Our results suggest that optimal implementation involves using GenAI for targeted knowledge acquisition and technical content delivery, while preserving instructor-led sessions for collaborative, conceptual, and integrative learning tasks.
While we acknowledge that the pedagogical community is only at the beginning of this investigation, one of the most significant insights to emerge from our study, and our primary recommendation, is that GenAI should be viewed not as a replacement for human instruction, but as part of a complementary model that leverages the strengths of both approaches. This finding aligns with the insight that AI tools in academic settings represent a natural evolution of active learning methodologies, which have consistently shown positive impacts on student outcomes (Beimel et al., 2024).
2. Literature Review
Blended or hybrid learning, combining face-to-face instruction with digital tools, has been widely adopted by educational institutions in recent years, partly in response to the differing needs and preferences of a diverse student body (Garrison & Vaughan, 2008; Graham, 2006). In these hybrid models, the flexibility afforded by digital tools enables differentiated instruction, while the instructor’s role remains crucial for scaffolding deeper understanding and complex problem-solving (Deslauriers et al., 2019; Freeman et al., 2014).1
In such hybrid or self-regulated learning contexts—and, indeed, in higher education generally—the integration of GenAI tools has the potential to transform the student learning experience (Adiguzel et al., 2023; ElSayary, 2024). These tools, which include chatbots such as ChatGPT, offer instant feedback, personalized explanations, and an always-available learning companion, prompting both enthusiasm and skepticism among educators (Chan & Hu, 2023; Eke, 2023). On the positive side, recent research has explored how GenAI can support comprehension, metacognitive skill development, and technological fluency. On the negative side, studies highlight challenges in areas such as student agency, the authenticity of learning, and the risk of passive consumption (Bair & Bair, 2011; Fernandez et al., 2022). Capturing these challenges and opportunities, Denny et al. (2024) raise concerns about overreliance on AI-generated solutions in the domain of computing education. However, they also suggest that this shift could encourage pedagogical innovation focused on higher-order thinking and conceptual understanding.
Within this body of work, recent empirical research has begun to examine how generative AI tools affect students’ self-regulated learning and motivational experiences in higher education. Here, too, findings offer scope for both optimism and concern. For example, Usher and Amzalag (2025) found that students strategically use chatbots for academic writing support, combining curiosity with self-directed goal setting and iterative refinement. Their findings point to the potential for harnessing GenAI to scaffold SRL processes, such as planning and monitoring. Similarly, Kohen-Vacs et al. (2025) reported that integrating GenAI into programming education encouraged autonomous problem solving. At the same time, that study also revealed challenges in students’ ability to critically evaluate AI outputs, highlighting risks for self-regulated learning alongside motivational benefits. These findings resonate with the self-determination theory framework (Ryan & Deci, 2020), which emphasizes autonomy and competence as key drivers of intrinsic motivation, and they provide a conceptual basis for examining how GenAI-supported instruction may shape students’ motivational and self-regulatory experiences compared to traditional teaching.
Several scholars have outlined potential directions for integrating GenAI successfully into contemporary pedagogy. For example, Chan and Tsi (2023) offer a roadmap of practical and conceptual issues that may arise as educators come to grips with these new technologies, emphasizing the importance of guided integration rather than automation if AI is to complement rather than replace human educators. Rudolph et al. (2023) similarly explored ChatGPT’s role in reshaping assessment, teaching, and learning in higher education, highlighting both opportunities and risks, and offering practical guidance for institutions, educators, and students in leveraging AI effectively. Some studies also suggest a generational divide in receptiveness to these tools (Chan & Lee, 2023), indicating that effective implementation requires an understanding of learner identity and preferences.
As described above, the foundational frameworks of self-regulated learning and intrinsic motivation, grounded in self-determination theory (Ryan & Deci, 2020) and related empirical instruments (Pintrich, 1991; Tekkol & Demirel, 2018), emphasize autonomy, competence, and relatedness as core psychological needs that support effective learning and development. When these needs are satisfied, learners are more likely to engage in deep cognitive processing, construct knowledge actively, and achieve higher academic outcomes, as demonstrated in prior SDT research (Deci & Ryan, 2017; Niemiec & Ryan, 2009; Ryan & Deci, 2020). In GenAI-supported contexts, these needs may be engaged differently compared to traditional instruction, influencing motivation and perceived efficacy.
Stroet et al. (2013) and Leenknecht et al. (2021), among others, show that environments that support learner autonomy, including learner-directed pacing and tool choice, enhance motivation and engagement. However, the effectiveness of such environments depends on students’ self-regulatory skills and prior experience (Boelens et al., 2018; Jääskelä et al., 2021). This is particularly relevant in GenAI-based learning, where the adaptability of these new tools can foster engagement but also introduce cognitive overload or confusion (Yang & Zhang, 2024).
The current study builds on this literature by examining how the use of GenAI vs. instructor-led sessions affects students’ motivation, engagement, and academic performance in a real-world course. It aims to extend prior work by offering a controlled design that isolates the effect of instructional mode while accounting for learner variability.
In summary, prior studies highlight both the opportunities and challenges associated with integrating GenAI tools in higher education. However, there remains limited empirical evidence from controlled comparisons between GenAI-supported and instructor-led learning in real classroom settings. Building on the theoretical frameworks of SDT and SRL, the present study seeks to address this gap.
3. Research Objectives
This study examines the comparative effectiveness of GenAI-supported learning versus traditional face-to-face instructor-led learning, investigating how each mode affects students’ learning experiences, knowledge construction, and academic outcomes. The study addresses the following research questions, both guided by self-determination theory (i.e., the psychological needs for autonomy, competence, and relatedness):
Research Question 1 (RQ1): Taking self-determination theory as a guide, how do students’ learning experiences differ when using generative AI tools compared to traditional face-to-face instruction?
Research Question 2 (RQ2): Likewise guided by self-determination theory, what, if any, differences in knowledge construction and academic performance exist between students using generative AI tools and those engaged in traditional instruction?
4. Methodology and Research Characteristics
4.1. Participants and Procedure
4.1.1. Overview of Study Procedure
The study had two main stages, taking place at the beginning (week 1) and towards the end (weeks 10–11) of a mandatory computer science course in which students were split into two instructional groups. In Stage 1 (week 1 of the course), all students completed a baseline questionnaire measuring motivation, self-regulated learning, and self-efficacy. During that week, and then in weeks 2–9, regular course sessions were conducted as usual, without any experimental manipulation.
In Stage 2 (weeks 10–11 of the course), the experimental manipulation was carried out. During those weeks, the two instructional groups learned the same material, but using different methods. In week 10, Group 1 learned that week’s topic through AI-supported self-regulated learning, while Group 2 learned the same topic in a traditional instructor-led session. In week 11, the groups switched learning formats. The topics for both weeks were of comparable difficulty. This structure ensured that both groups experienced both instructional modes, as required for fairness and ethical balance.
Stage 2 data were collected at the end of each week via a quiz and questionnaire. For simplicity, our analysis focused primarily on the data from the first exposure (week 10).
4.1.2. Detailed Description of the Procedure
The study was conducted within the framework of a mandatory undergraduate course in database management, which is part of the second-year curriculum in a computer science program. The course consists of 13 weekly sessions, each lasting three academic hours. A total of 96 students participated in the study, comprising 70 men and 26 women, aged between 21 and 33 years (M = 24.5, SD = 2.2, Median = 24). Of these, nine students were excluded from the quantitative analysis due to incomplete questionnaires, resulting in an analytic sample of 87 students. The students were divided into two instructional groups, here referred to as Group 1 and Group 2. Group 1 (n = 55, 57.3%) included 18 women and 37 men, while Group 2 (n = 32, 33.3%) included seven women and 25 men. Assignment to the two groups followed the department’s administrative registration process and was thus effectively random with respect to the study.
The controlled experiment was designed with the following protocol: In the first week of the study, we collected baseline data, including relevant psychometric measures. During week 10, the two instructional groups were randomly assigned to one of two experimental conditions: an experimental group that received GenAI-supported instruction or a control group that participated in traditional instructor-led classroom sessions. Post-intervention data were drawn from two primary sources: a questionnaire assessing participants’ learning experience and satisfaction, followed by a knowledge and understanding assessment (i.e., a quiz) comprising six multiple-choice questions stratified by difficulty (two easy, two moderate, and two challenging items).2 Additionally, qualitative interviews were conducted with some participants at the end of the course.
The experimental protocol consisted of the following stages. In the first week of the semester, all students were asked to complete a baseline questionnaire. The purpose of this instrument was to collect demographic data and baseline measures of the core variables underpinning SDT (motivation, self-regulated learning, and self-efficacy).

In week 10 of the course, the instructional topic was “Group By” clauses in Structured Query Language (SQL) queries. The experiment, conducted in the second half of each session (approximately 75 min), followed this outline.

Students in Group 1 independently studied the topic, supported by a GenAI tool of their choice. They received a brief overview of the required content, a set of defined learning objectives, and a suggested initial prompt to begin interacting with the tool, and then engaged in self-regulated learning using their selected GenAI tool. Upon completing the self-study, they filled out a questionnaire (the AI Questionnaire) that included both closed- and open-ended questions (see Research Tools below for more details) and then took a short quiz designed to assess their understanding of the topic (see below for more details).

Students in Group 2 studied the same topic (i.e., “Group By” clauses) within the same timeframe through traditional in-class instruction delivered by the course lecturer. At the end of the in-class session, they filled out a questionnaire (the Lecturer Questionnaire), which also included both closed- and open-ended questions (see below). The Lecturer Questionnaire is essentially identical to the AI Questionnaire, adapted to refer to the lecturer rather than to AI. After completing the questionnaire, they took the same quiz as Group 1.
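To give a concrete sense of the instructional content, the minimal sketch below shows a “Group By” query of the kind covered in this topic. The schema and threshold used here (Employees, department, salary, 10,000) are hypothetical illustrations and are not taken from the course materials or the quiz.

```sql
-- Illustrative "Group By" query (hypothetical schema, not an actual course or quiz item):
-- count the employees in each department and compute the average salary per department,
-- keeping only departments whose average salary exceeds 10,000.
SELECT department,
       COUNT(*)    AS num_employees,
       AVG(salary) AS avg_salary
FROM Employees
GROUP BY department
HAVING AVG(salary) > 10000;
```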
All sessions took place in a standard classroom environment; students were instructed to bring personal laptops for use during self-regulated learning activities. Table 1 summarizes the experiment outline.
To meet ethical requirements, in week 11, the two groups swapped instructional formats (i.e., Group 1 participated in lecturer-led learning, while Group 2 engaged in AI-supported learning), then completed the relevant questionnaires and quizzes. To ensure fair comparison, the two lessons introduced new content of comparable difficulty. Analysis of the week 1 (baseline) questionnaire revealed no significant differences between the two groups in motivation, self-regulated learning, or self-efficacy, indicating a balanced allocation and an absence of selection bias (for details, see the beginning of Section 5). Hence, for simplicity, the present study reports data only for week 10.
In the present study, 24 of the 96 participants were excluded from the analysis: 9 (9.4%) who did not complete the week 1 questionnaire and 15 (15.6%) who did not complete the week 10 questionnaire. The final sample for analysis thus comprises the 72 participants (75%) who supplied complete data for both weeks 1 and 10.
4.2. Research Tools
4.2.1. Questionnaires
We used three questionnaires: (a) the Baseline Questionnaire (week 1), (b) the AI Questionnaire (week 10), and (c) the Lecturer Questionnaire (week 10). All three questionnaires collected demographic information (e.g., age and gender) and included 28 items measuring the three key SDT-related variables: motivation, self-regulated learning, and self-efficacy. The motivation scale comprised seven items (e.g., “I prefer to learn content that challenges me”), based on Pintrich (1991); the self-regulated learning scale had fourteen items (e.g., “I study on my own content that interests me”), based on Tekkol and Demirel (2018); and self-efficacy was measured through seven items (e.g., “I am confident that I can understand the most complicated material in this course”), also based on Pintrich (1991). Table 2 presents the respective subscales included in each of these three main scales. For all items, respondents rated their level of agreement on a five-point Likert scale, where 1 = completely disagree and 5 = strongly agree, such that higher scores correspond to higher levels of the three variables.
In addition to the 28 core items, the AI and Lecturer questionnaires included 15 items related to the learning experience and satisfaction. These items were developed specifically for this study, drawing on items from the Distance Education Learning Environments Survey (Walker, 2005) and on the instructional technology acceptance framework discussed by Liaw and Huang (2013).
Of these, six questions addressed the learning experience (e.g., “Using AI-based chat improved my learning” or “The face-to-face lesson improved my learning,” respectively) and nine questions measured satisfaction (e.g., “I am satisfied with the information provided by the AI-based chat” or “I am satisfied with the quality of the information I received in the frontal lecture,” respectively).
Finally, the AI and Lecturer questionnaires each included four open-ended items (e.g., “Please describe the advantages of learning content using AI-based chat for you”; “Please describe the advantages of learning content through frontal lessons for you”). Thus, the AI and Lecturer questionnaires each totaled 47 items, with the only difference between them lying in the phrasing of the learning experience and satisfaction items.
Reliability analyses were conducted at week 1 and again at week 10. For the 28 core items, Table 2 presents the internal consistency (Cronbach’s alpha) values for each subscale and total scale at both measurement points. For the learning experience scale, Cronbach’s alpha was 0.719 and 0.852 for the Lecturer and AI Questionnaires, respectively; for the satisfaction scale, these values were 0.809 and 0.838. When considered as a combined scale of 15 items (learning experience and satisfaction together), Cronbach’s alpha was 0.853 for the Lecturer condition and 0.903 for the AI condition.
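For reference, Cronbach’s alpha for a scale of k items is defined as

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),
\]

where the numerator of the fraction sums the variances of the individual items and the denominator is the variance of the total scale score; values closer to 1 indicate greater internal consistency.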
The complete set of questionnaires and quizzes is available on request.
4.2.2. Quizzes
At the end of week 10, both groups completed the same short quiz focusing on the “Group By” topic covered that week. The quiz consisted of six multiple-choice questions, including two at an easy difficulty level, two at a moderate difficulty level, and two at a challenging difficulty level. Difficulty levels were determined by the class instructors (including two authors of this paper) based on students’ performance in previous classes. An illustrative example of one quiz item is provided in Figure 1. The complete week 10 quiz is available on request.
It is important to note that the quiz was not intended as a pre–post measure of learning progress. Rather, it served as a between-group assessment tool, designed to compare students’ understanding of the same instructional content (the “Group By” topic) immediately following two different learning conditions—AI-supported versus instructor-led. This design allows a controlled comparison of learning outcomes attributable to the instructional mode rather than to prior knowledge or cumulative learning over time.
4.2.3. Interviews
Towards the end of the course, we conducted personal interviews with 11 students who had participated in the study (from both Groups 1 and 2). The purpose of these interviews was to elicit qualitative feedback to complement the quantitative data collected through the questionnaires. They enabled us to gain deeper insights into students’ perceptions, challenges, and preferences regarding the two learning formats.
To recruit the interviewees, an open invitation was sent to all students participating in the course, and the first volunteers from each group were selected. All interviewees provided consent for the information they shared to be used for research purposes, with full assurance of anonymity. The interviewer was not acquainted with the students and was not part of the teaching staff for the course.
The interviews were conducted via Zoom and lasted approximately 15 min each. During the interviews, students were asked to reflect on their experiences related to the classes held in weeks 10 and 11, comparing face-to-face learning with a lecturer to self-directed learning supported by AI tools.
4.3. Time-on-Task Measure
In addition to the questionnaires, quizzes, and interviews described above, we collected a behavioral measure of engagement: time-on-task. Specifically, students in the AI-supported group were asked to report the duration of their study session prior to the quiz. In contrast, all students in the instructor-led group attended a scheduled 50-min face-to-face session. This enabled a cross-group comparison of the time invested in learning the same instructional content under different learning conditions.
4.4. Data Analysis
To answer RQ1, we explored students’ learning experience in terms of perceived self-efficacy, general understanding, satisfaction, and interaction with the learning material. The relevant research tools included the AI and Lecturer questionnaires and personal interviews.
To answer RQ2, we examined the impact of the learning method on knowledge acquisition, problem-solving ability, and academic achievement. The questionnaires were used to assess participants’ perceived knowledge and self-efficacy, while the quiz results measured their academic performance.
4.5. Ethical Considerations
The study was granted ethical approval by the Institutional Review Board of the academic institution where it was conducted. Participation in the study was entirely voluntary, and students were awarded academic credit as an incentive for their involvement. Participants were informed that the collected data would be used exclusively for research purposes, and strict measures were taken to ensure anonymity. To maintain confidentiality while allowing for the linkage of responses, each participant was assigned a unique four-digit code. Additionally, the names of the participants in the personal interviews were anonymized. In what follows, when quoting from interviews, we refer to participants by the first two consonants of their given names and their gender (e.g., TB [male], YR [female]).
5. Findings
5.1. Quantitative Findings
As noted above, before addressing the main research questions, we conducted a preliminary analysis to assess the equivalence of the two study groups on key baseline measures at the beginning of the academic year. The comparison was based on responses from week 1 regarding motivation, self-regulated learning, and self-efficacy. Independent samples t-tests revealed no statistically significant differences between the groups on most of the scales and subscales. Specifically, no significant differences were found in intrinsic motivation (t(70) = −0.709, p = 0.481), extrinsic motivation (t(70) = −0.044, p = 0.965), or overall motivation (t(70) = −0.330, p = 0.742). Likewise, no significant differences were found in the self-efficacy measures: expectation of success (t(34.7) = −1.453, p = 0.155), perceived ability (t(70) = −1.324, p = 0.190), and overall self-efficacy (t(70) = −1.630, p = 0.108). With respect to self-regulated learning (SRL), no differences were found in planning and organization (t(70) = −0.317, p = 0.752), learning strategies (t(70) = 0.575, p = 0.567), or overall SRL (t(70) = 0.595, p = 0.554). A marginal difference emerged in SRL–motivation, with higher scores for the Lecturer group (M = 4.31, SD = 0.54) compared to the AI group (M = 4.05, SD = 0.58), t(70) = 1.855, p = 0.068. However, the effect size was small to moderate (d = 0.45), and the result did not reach statistical significance. These findings support the baseline equivalence of the two groups and strengthen the internal validity of subsequent comparisons.
We next addressed the research questions. RQ1 inquired about the differences in students’ learning experiences when using generative AI tools versus traditional face-to-face instruction. For this purpose, we compared students’ responses on the week 10 learning experience and satisfaction measures between the Lecturer group and the AI group.
As summarized in Table 3, students in the AI group reported a significantly more positive learning experience than those in the Lecturer group, with a large effect size. No significant difference was found between the groups in terms of satisfaction.
RQ2 concerned the differences in knowledge construction and academic performance between students using generative AI tools and those engaged in traditional instruction. To address this, we examined students’ performance in the short quiz administered at the end of the week in which the material being tested was taught (week 10). The quiz included six items of varying difficulty (easy, moderate, and difficult), and performance was compared between the two instructional groups. Since the distribution of total quiz scores was not normal in either group (Shapiro–Wilk test, p < 0.01), a nonparametric Mann–Whitney U test was conducted. No statistically significant difference was found in total quiz scores between the groups (U = 820.50, p > 0.05).
To gain deeper insight, performance was further examined in terms of question difficulty. Table 4 presents the success rates (%) for each group across the three levels of question difficulty.
As shown in Table 4, both groups performed similarly on easy and difficult questions, with no significant differences observed. However, on the moderate-difficulty questions, the AI group outperformed the Lecturer group (68.5% vs. 45.2%), a statistically significant difference (p = 0.034) corresponding to a gap of 23.3 percentage points.
5.2. Qualitative Findings
The qualitative analyses reveal complexities that are not directly reflected in the quantitative findings.4
Looking first at RQ1, although AI-supported learning received significantly higher scores on quantitative measures such as comfort, quality, and contribution to learning, most participants described the learning experience with the instructor as more emotionally and cognitively fulfilling. In general, interviewees linked learning with an instructor to a sense of depth, security, and personal connection. When discussing learning with AI, they highlighted its efficiency and accessibility, but also the possibility of information overload or confusion.
Participants who favored instructor-led learning often referred to the interactive and structured nature of the classroom experience. For example, SL (female) remarked that she “enjoys it much more when the teacher is teaching… she shares her thinking process, and then we build the answer together, step by step.” Similarly, NM (female) explained that “with a teacher… I pay more attention… even if I zone out for a moment, I can come back and still stay with her.”
Some students appreciated the AI’s ability to deliver quick, streamlined explanations, allowing faster access to information and more focused engagement with the material. NM (female) noted that with AI-supported learning, “in just 40 min I was done, compared to the lecture which could take more than two hours.” MA (female) remarked, “It was very focused… without all the sidetracks that happen with teachers.” DA (male) described the difference in terms of modern efficiency: “It surprisingly shortens time. It’s like texting instead of sending a letter—you get what you need without the hassle.” TB (male) added, “It summarizes a two-hour lecture in three pages. That’s pretty wild. It saves time and makes things more efficient.” Correspondingly, participants noted that classroom learning, while potentially rich, was prone to distractions or inefficient use of time. For instance, MA (female) pointed out that with teachers, “sometimes the train of thought goes off track,” and SL (female) mentioned that lessons often “drift into small talk or anecdotes… which can help but do take up time.” LL (male) added that in a classroom setting, “I avoid asking questions so I don’t interrupt… with AI it’s just me and the tool, no one else to consider.”
Our findings on study time bear out these qualitative observations. Recall that we further examined differences in engagement between the two instructional methods by comparing the amount of time students spent in their last study session before the quiz. All students in the lecturer group attended a scheduled 50-min face-to-face session, while the AI group engaged in self-directed study. The AI group reported an average study duration of 36.26 min (SD = 10.2), with a 95% confidence interval of [33.36, 39.16]. A one-sample t-test comparing this mean to the fixed 50-min session revealed a statistically significant difference, t(49) = −9.53, p < 0.001, with a large effect size (Cohen’s d = −1.35). These findings suggest that students using AI-based learning completed their preparation in significantly less time.
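For transparency, the reported effect size is consistent with the one-sample formulation of Cohen’s d, which scales the difference between the observed mean and the fixed 50-min session length by the sample standard deviation:

\[
d = \frac{M - \mu_{0}}{SD} = \frac{36.26 - 50}{10.2} \approx -1.35.
\]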
However, the time savings afforded by AI were not an unmixed blessing. Some students expressed frustration with the lack of direction and the overwhelming nature of some AI-generated content. YR (female) commented that “it gives you things we didn’t even learn, combines material… it kind of thinks ahead… and that confused me.” TB (male) noted that “it gives an immediate answer… but sometimes you just don’t know what’s important and what’s not.”
This pattern was echoed when we examined the qualitative accounts through the lens of RQ2. Many students described AI as an efficient tool for acquiring straightforward or technical knowledge. For example, DA (male) explained that “if it’s something small and technical, I’d definitely go with AI—it’s fast and gets you what you need.” TB (male) similarly noted that “AI is great for learning definitions or short explanations… but when it comes to edge cases or harder questions, I’m not sure it can help.” As such, participants tended to rely more on instructor-led learning when they faced unfamiliar or abstract material. AA (male) stated that “when I learn with the teacher, I feel she prepares me better for the tricky parts of the exam… things I wouldn’t think to ask about.” YR (female) added that “with the teacher, I understood what would be on the test. With the AI, I wasn’t sure if I had covered the right things.”
Overall, the qualitative findings offer important nuances that are sometimes lacking in the quantitative findings. In terms of the learning experience, AI was often viewed as a functional and effective tool, especially when students had some prior understanding of the content. However, the instructor was perceived as fostering a deeper, more emotionally resonant, and experientially rich learning process, even if standard quantitative metrics do not always capture these. In terms of academic outcomes, while AI-based learning may provide clear advantages for foundational material, it can fall short in supporting higher-order thinking or exam preparation in more complex domains. These findings support the idea that combining instructional approaches may provide the most effective path, using AI to solidify basic understanding and human instruction to guide deeper learning.
6. Discussion
The controlled experiment reported in this paper provides compelling evidence that generative AI tools and traditional face-to-face instruction each offer distinct advantages in higher education contexts. Specifically, the findings reveal that while AI-supported learning enhances perceived learning experience, efficiency, and performance on moderately difficult tasks, instructor-led sessions remain crucial for deeper engagement and complex problem-solving. These findings challenge simplistic narratives of AI as either an educational panacea or a threat, instead pointing toward the need for strategic integration that leverages the strengths of both modalities.
First and foremost, it is important to underscore that the two instructional groups examined in our study were statistically equivalent at baseline. No significant differences were found in motivation, self-regulated learning, or self-efficacy measures at the beginning of the semester. This reinforces the internal validity of our findings, supporting the interpretation that any observed differences in learning experience and performance were likely a result of the instructional method itself rather than pre-existing differences between the students.
Second, most quantitative measures revealed no statistically significant differences between GenAI-supported and instructor-led instruction. This finding by itself is theoretically and practically meaningful. The absence of differences in performance, coupled with the significantly shorter time-on-task in the GenAI group, suggests that GenAI can support learning outcomes comparable to traditional instruction while offering greater flexibility and efficiency. This aligns with emerging evidence that GenAI tools can complement, rather than replace, instructor-led teaching by providing autonomous learning opportunities without compromising academic achievement. Moreover, the lack of overall differences should not be interpreted as evidence of no effect, but rather as an indication that the two modes may engage students differently, with qualitative data highlighting distinctions in motivation, relatedness, and perceived support. Together, these patterns underscore the relevance of examining how GenAI changes the learning experience, rather than expecting uniformly higher scores.
Overall, the results suggest that AI-supported learning environments offer a distinct advantage in terms of perceived learning experience, particularly in areas related to personalization, pace, and accessibility. Students in the AI group reported higher comfort levels, higher perceived quality of information, and more efficient use of time. Yet, importantly, looking separately at the easy, moderate, and difficult quiz questions reveals an intriguing pattern: no significant differences in performance were found for easy or difficult items, but the AI group outperformed the Lecturer group on moderately difficult quiz questions. These findings imply that AI may be particularly effective in consolidating mid-level conceptual understanding, but less impactful at the extremes of difficulty, due perhaps to the simplicity of the content at one end, and the need for higher-order cognitive scaffolding at the other. They thus provide compelling evidence for a differential impact of AI-supported versus instructor-led learning.
The performance gap on questions of moderate difficulty observed in our study also aligns with previous research on effective teaching and learning. According to Crocker and Algina (2006) and Embretson and Reise (2000), items with moderate difficulty tend to offer the highest discrimination power, effectively differentiating between students of higher and lower performance levels. These mid-range questions are also exceptionally informative when evaluating instruction, as they are most sensitive to the learning gains that stem from effective teaching (Popham, 2007). This aligns with Anderson and Krathwohl’s (2001) revision of Bloom’s Taxonomy, which emphasizes that instructional interventions are most impactful at the application and analysis levels of cognitive processing—levels typically targeted by moderately difficult assessment items. While easy questions often reflect surface-level recall and hard questions may exceed what was taught, it is the moderate ones that best capture the zone of proximal development, which may be influenced by instructional strategies.
Turning to the qualitative findings, the disconnect between positive AI ratings and students’ overall preference for instructor-led learning reveals a fundamental paradox in the adoption of educational technology—namely, the fact that immediate satisfaction metrics may not fully capture deeper aspects of meaningful learning. Students may be drawn to AI’s convenience and efficiency while simultaneously recognizing the irreplaceable value of human connection and contextual understanding that instructors can provide. Our qualitative findings from the student interviews support this narrative: many participants appreciated the AI’s efficiency but highlighted its limitations in guiding deeper, more contextualized learning. The instructor, by contrast, was often perceived as emotionally engaging and pedagogically grounding, especially in moments of confusion or complexity. This dichotomy aligns with broader theories of blended learning and self-determination theory (Ryan & Deci, 2020), suggesting that while AI tools can enhance autonomy and competence, the relatedness and mentoring functions of human instructors remain irreplaceable.
Another important insight concerns time-on-task. Students using AI reported significantly shorter study durations, yet did not demonstrate diminished comprehension or lower quiz performance. This raises compelling pedagogical questions. In particular, are shorter, focused AI sessions truly more efficient, or do they risk sacrificing long-term retention and critical thinking for immediate convenience?
In this respect, while students frequently described AI as a faster and more concise learning resource, this perception should be interpreted cautiously. From an ethical and pedagogical standpoint, the notion of “efficiency” in learning does not necessarily equate to educational quality or depth of understanding. Moreover, relying on AI to condense information raises concerns regarding critical engagement, authorship, and intellectual integrity (Eke, 2023). Thus, although participants viewed AI’s speed as an advantage, it must be balanced against these broader ethical considerations. These trade-offs merit further exploration, particularly in courses that build upon cumulative knowledge or require ethical reasoning and reflective practice.
Taken together, our findings suggest that AI tools are best viewed not as replacements for traditional teaching but as complementary assets. Their optimal use may lie in reinforcing basic understanding and offering flexible, student-paced exploration, while human-led instruction remains vital for complex problem-solving, adaptive feedback, and emotional resonance. The findings also have important implications for how we measure and evaluate educational innovations, offering the crucial insight that conventional quantitative satisfaction metrics cannot, by themselves, fully assess either satisfaction or learning.
6.1. Theoretical Contributions
This study contributes to the emerging body of literature on the pedagogical integration of GenAI in higher education by providing empirical evidence from a controlled field experiment. By comparing AI-supported and lecturer-led instruction within the same student population and instructional content, the study isolates the impact of learning modality on student experience, motivation, and academic performance. Our findings reinforce and extend self-determination theory (Ryan & Deci, 2020) and self-regulated learning frameworks (Pintrich, 1991; Tekkol & Demirel, 2018) by demonstrating how GenAI environments can enhance perceptions of autonomy and competence, while also exposing challenges related to cognitive overload and content coherence. Notably, the findings highlight the trade-offs between efficiency and depth, underscoring that instructional mode interacts with the cognitive demands of the learning task. These results advance our theoretical understanding of how AI tools function not only as content providers but also as mediators of learner agency and self-regulation.
6.2. Practical Contributions
For educators and instructional designers, this study offers actionable insights into how GenAI tools can be meaningfully incorporated into hybrid or blended learning environments. The results show that AI-supported instruction is especially well-suited for delivering technical, mid-level content in a time-efficient manner, making it a valuable option for self-paced review or flipped classroom models. However, the qualitative data underscores the continued value of human instructors for building trust, addressing ambiguity, and facilitating deep learning. Practitioners should therefore consider a strategic integration: using GenAI for targeted knowledge acquisition, and reserving instructor-led sessions for collaborative, conceptual, or integrative tasks.
At the institutional level, the findings suggest several important considerations for policy and practice. We note here implications in four main domains:

Faculty development. Universities should invest in training programs that help instructors understand when and how to effectively integrate AI tools, rather than viewing them as competitive threats.

Curriculum design. Course structures should be reimagined to optimize the strengths of both modalities, with AI supporting foundational knowledge building and human instruction focusing on higher-order thinking and application.

Assessment reform. Traditional assessment methods may need revision to account for AI’s differential effectiveness across cognitive levels and to ensure authentic evaluation of student learning.

Technology infrastructure. Institutions must consider the implications of allowing students to choose their preferred AI tools versus standardizing platforms for consistency and evaluation purposes.
6.3. Limitations
Several limitations must be acknowledged. First, although we confirmed baseline equivalence, the sample came from a single computer science course, limiting the generalizability of the findings. The overall sample size was modest and the two groups were uneven, which may reduce the statistical power of the findings. Some constructs examined, such as learning experience and motivation, are also challenging to measure with high reliability, which could further limit the strength of the conclusions drawn. Consequently, the findings should be interpreted with caution, and broad recommendations for university curricula or technical infrastructure would be premature. Moreover, because this study is discipline-specific, the results should not be assumed to apply broadly across other academic fields. The technical nature of database management may have particularly favored AI-supported learning; outcomes could differ substantially in disciplines such as the humanities or social sciences, where critical thinking and cultural interpretation play a more significant role.
Second, the duration of each learning activity was limited to a single session per instructional mode. Longer-term exposure to each condition might yield different outcomes.
Third, the AI tools used were not restricted to a single platform (students could choose their preferred chatbot). While this has advantages, allowing students to use the tools with which they were most comfortable, it also introduces significant variability in the AI experience that was not systematically controlled. Different AI platforms may have varying capabilities, interfaces, and response qualities that could impact learning outcomes. Future research could reduce this variability by standardizing the AI tool used or by intentionally comparing selected tools under controlled conditions to examine how platform differences influence learning.
Fourth, our study did not control for students’ prior experience with AI tools, which may have influenced both their comfort level and effectiveness in using these technologies. Students with greater AI fluency may have achieved better outcomes simply due to their familiarity rather than the inherent superiority of the learning method.
Finally, quiz performance was assessed immediately after instruction. This design does not capture long-term effects, such as knowledge retention or sustained impact on learning outcomes. Delayed post-tests could provide more robust insights into long-term retention and understanding.
6.4. Future Research Directions
In light of the limitations mentioned above, future studies should expand the present line of inquiry by replicating the experimental design across diverse academic fields and learner populations. Research could also examine longitudinal effects of sustained AI-supported instruction on academic outcomes, self-regulation skills, and student attitudes toward learning. In terms of the tool itself, a comparative analysis of various GenAI tools, using standardized prompts and scaffolding levels, could help establish best practices for instructional design in AI-enhanced environments.
In addition, several specific research questions emerge from our findings. The first concerns transfer effects: How well does knowledge acquired through AI support transfer to novel problems and contexts compared to instructor-mediated learning? A second question relates to individual differences: What student characteristics (prior knowledge, learning preferences, digital literacy) moderate the effectiveness of AI versus traditional instruction? A third concern is optimal integration models: What specific combinations of AI-supported and human-led instruction maximize learning outcomes in different domains and at various cognitive levels? Finally, further exploration is warranted into the differential effectiveness of AI with respect to different learning objectives, such as factual recall, conceptual transfer, and critical reasoning, and how these outcomes vary with learner characteristics, including prior knowledge, digital literacy, and motivation profile.
Ultimately, as educational institutions continue to navigate the complexities of the AI revolution, they need empirical evidence to inform strategic decisions about pedagogical innovation in the digital age. The present study takes a step toward meeting this need. The implications extend beyond individual classroom practices to institutional policy development and faculty training initiatives.
Author Contributions: Conceptualization, D.B., M.A., R.Z.-G., and N.V.; methodology, D.B., M.A., R.Z.-G., and N.V.; validation, M.A., R.Z.-G., and N.V.; formal analysis, D.B., M.A., R.Z.-G., and N.V.; investigation, D.B., M.A., R.Z.-G., and N.V.; resources, D.B., M.A., R.Z.-G., and N.V.; writing—original draft preparation, D.B., M.A., R.Z.-G., and N.V.; writing—review and editing, D.B., M.A., R.Z.-G., and N.V. All authors have read and agreed to the published version of the manuscript.
Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of Ruppin Academic Center (protocol code 245, dated 10 November 2024).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: The data presented in this study are available upon request from the corresponding author, as they are subject to privacy preservation.
Conflicts of Interest: The authors declare that they have no conflicts of interest.
Footnotes
1. A further consideration is whether learning with digital tools (including remote learning with human instructors) employs synchronous or asynchronous modalities. Studies comparing synchronous and asynchronous modalities (
2. In week 11, the two groups swapped instructional formats to ensure that each participant had exposure to both learning methods (i.e., the group that experienced AI-supported learning in week 10 received traditional lecturer-led instruction in week 11, and vice versa). The questionnaires and quiz administered at the end of week 10 were repeated at the end of week 11, with the latter updated to account for the new topic taught. As the two groups were statistically equivalent at baseline, for simplicity, the present study reports on the data for week 10. The data for week 11 are available from the authors upon request.
3. One item from the intrinsic motivation subscale (“I prefer tasks that allow me to learn, even if I don’t get a high grade”) was removed due to low reliability of that subscale at week 1 (α = 0.41) when the item was included. The table and text report the number of items remaining after that exclusion. Although internal consistency for the intrinsic motivation subscale remained modest after that exclusion (α = 0.64 at week 1 and α = 0.60 at week 10), these values are considered acceptable for short, three-item scales, particularly in exploratory or experimental studies (Cortina, 1993).
4. As a reminder, while the quantitative data reported in this paper relate only to week 10 of the course (and so reflect a between-subjects design), the interviews took place at the end of the course, after the two groups swapped places in week 11. Hence, all the students interviewed had experienced both face-to-face and AI-supported instruction.
Figure 1 An example of a question on the topic of “Group By” clauses in SQL queries.
Table 1. Experiment Outline.
| Week 10 | Group 1 | Group 2 |
|---|---|---|
| Type of learning | AI-supported learning | Lecturer-led learning |
| Type of questionnaire | AI Questionnaire | Lecturer Questionnaire |
| Quiz topic | “Group By” clauses | “Group By” clauses |
Table 2. Internal Consistency (Cronbach’s Alpha) for the Core Research Scales at the Week 1 and Week 10 Time Points.
| Scale | Source | # of Items | Week 1 α | Week 10 α |
|---|---|---|---|---|
| Intrinsic Motivation | Pintrich (1991) | 3 | 0.637 | 0.597 |
| Extrinsic Motivation | Pintrich (1991) | 4 | 0.775 | 0.743 |
| General Motivation | Pintrich (1991) | 7 | 0.735 | 0.709 |
| SRL—Motivation | Tekkol and Demirel (2018) | 3 | 0.706 | 0.669 |
| SRL—Planning and Organization | Tekkol and Demirel (2018) | 5 | 0.710 | 0.784 |
| SRL—Learning Strategies | Tekkol and Demirel (2018) | 6 | 0.794 | 0.729 |
| General SRL | Tekkol and Demirel (2018) | 14 | 0.831 | 0.849 |
| Self-Efficacy—Expectation of Success | Pintrich (1991) | 3 | 0.897 | 0.804 |
| Self-Efficacy—Perceived Ability | Pintrich (1991) | 4 | 0.793 | 0.838 |
| General Self-Efficacy | Pintrich (1991) | 7 | 0.893 | 0.907 |
Notes. SRL = self-regulated learning, # = number, α = Cronbach’s Alpha.
Comparison of Learning Experience Measures between Groups (Week 10).
| Measure | Lecturer M | Lecturer SD | AI M | AI SD | t | df | p | Cohen’s d |
|---|---|---|---|---|---|---|---|---|
| Learning experience | 3.47 | 0.52 | 4.05 | 0.70 | −4.05 | 85 | <0.001 | −0.9 |
| Satisfaction | 3.45 | 0.53 | 3.67 | 0.65 | −1.609 | 85 | 0.111 | −0.36 |
Note. Negative values of Cohen’s d indicate that the AI group had higher scores.
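For readers who want to reproduce this kind of comparison, the sketch below runs an independent-samples t-test and computes a pooled-SD Cohen’s d in Python; the simulated ratings merely mimic the reported group means and standard deviations, and the assumed group sizes (44 and 43) are chosen only to be consistent with df = 85, not taken from the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated 1-5 ratings that only mimic the reported means/SDs; group sizes
# (44 and 43) are assumptions consistent with df = 85, not the study data
lecturer = rng.normal(loc=3.47, scale=0.52, size=44)
ai_group = rng.normal(loc=4.05, scale=0.70, size=43)

t, p = stats.ttest_ind(lecturer, ai_group)  # Student's t, equal variances assumed

# Cohen's d with a pooled standard deviation
n1, n2 = len(lecturer), len(ai_group)
pooled_sd = np.sqrt(((n1 - 1) * lecturer.var(ddof=1) +
                     (n2 - 1) * ai_group.var(ddof=1)) / (n1 + n2 - 2))
d = (lecturer.mean() - ai_group.mean()) / pooled_sd

print(f"t({n1 + n2 - 2}) = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```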
Success Rates by Group and Question Difficulty Level.
| Question Difficulty | AI Group (%) | Lecturer Group (%) | U | p |
|---|---|---|---|---|
| Easy | 38.9 | 41.9 | 751.50 | >0.05 |
| Moderate | 68.5 | 45.2 | 629.50 | 0.035 |
| Difficult | 35.5 | 33.3 | 822.00 | >0.05 |
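A minimal sketch of the nonparametric comparison reported above is given below; the binary score vectors are invented for illustration, and scipy.stats.mannwhitneyu stands in for whatever software the authors actually used.

```python
from scipy import stats

# Invented per-student outcomes (1 = correct, 0 = incorrect) on a single item;
# the real response data are available from the authors on request
ai_scores       = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1]
lecturer_scores = [0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0]

u, p = stats.mannwhitneyu(ai_scores, lecturer_scores, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.3f}")
```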
Adiguzel, T.; Kaya, M. H.; Cansu, F. K. Revolutionizing education with AI: Exploring the transformative potential of ChatGPT. Contemporary Educational Technology; 2023; 15,
Anderson, L. W.; Krathwohl, D. R. A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives; Allyn & Bacon: 2001.
Bair, D. E.; Bair, M. A. Paradoxes of online teaching. International Journal for the Scholarship of Teaching and Learning; 2011; 5,
Bandura, A. Self-efficacy: The exercise of control; Freeman: 1997; Vol. 11.
Beimel, D.; Tsoury, A.; Barnett-Itzhaki, Z. The impact of extent and variety in active learning methods across online and face-to-face education on students’ course evaluations. Frontiers in Education; 2024; 9, 1432054. [DOI: https://dx.doi.org/10.3389/feduc.2024.1432054]
Boelens, R.; Voet, M.; De Wever, B. The design of blended learning in response to student diversity in higher education: Instructors’ views and use of differentiated instruction in blended learning. Computers & Education; 2018; 120, pp. 197-212. [DOI: https://dx.doi.org/10.1016/j.compedu.2018.02.009]
Chan, C. K. Y.; Hu, W. Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education; 2023; 20,
Chan, C. K. Y.; Lee, K. K. The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and millennial generation teachers? Smart Learning Environments; 2023; 10,
Chan, C. K. Y.; Tsi, L. H. The AI revolution in education: Will AI replace or assist teachers in higher education? arXiv; 2023; [DOI: https://dx.doi.org/10.48550/arXiv.2305.01185] arXiv: 2305.01185
Cortina, J. M. What is coefficient alpha? An examination of theory and applications. Journal of Applied Psychology; 1993; 78,
Crocker, L.; Algina, J. Introduction to classical and modern test theory; Cengage Learning: 2006.
Deci, E. L.; Ryan, R. M. Self-determination theory: Basic psychological needs in motivation, development, and wellness; Guilford Press: 2017.
Denny, P.; Prather, J.; Becker, B. A.; Finnie-Ansley, J.; Hellas, A.; Leinonen, J.; Luxton-Reilly, A.; Reeves, B. N.; Santos, E. A.; Sarsa, S. Computing education in the era of generative AI. Communications of the ACM; 2024; 67,
Deslauriers, L.; McCarty, L. S.; Miller, K.; Callaghan, K.; Kestin, G. Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom. Proceedings of the National Academy of Sciences; 2019; 116,
Eke, D. O. ChatGPT and the rise of generative AI: Threat to academic integrity? Journal of Responsible Technology; 2023; 13, 100060. [DOI: https://dx.doi.org/10.1016/j.jrt.2023.100060]
ElSayary, A. Integrating generative AI in active learning environments: Enhancing metacognition and technological skills. Journal of Systemics, Cybernetics and Informatics; 2024; 22,
Embretson, S. E.; Reise, S. P. Item response theory for psychologists; Lawrence Erlbaum Associates: 2000.
Fabriz, S.; Mendzheritskaya, J.; Stehle, S. Impact of synchronous and asynchronous settings of online teaching and learning in higher education on students’ learning experience during COVID-19. Frontiers in Psychology; 2021; 12, 733554. [DOI: https://dx.doi.org/10.3389/fpsyg.2021.733554]
Fernandez, C. J.; Ramesh, R.; Manivannan, A. S. R. Synchronous learning and asynchronous learning during COVID-19 pandemic: A case study in India. Asian Association of Open Universities Journal; 2022; 17, pp. 1-14. [DOI: https://dx.doi.org/10.1108/AAOUJ-02-2021-0027]
Freeman, S.; Eddy, S. L.; McDonough, M.; Smith, M. K.; Okoroafor, N.; Jordt, H.; Wenderoth, M. P. Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences; 2014; 111,
Garrison, D. R.; Vaughan, N. D. Blended learning in higher education: Framework, principles, and guidelines; Jossey-Bass Publishers: 2008.
Graham, C. R. Blended learning systems: Definition, current trends, and future directions. In Bonk, C. J.; Graham, C. R. (Eds.), The handbook of blended learning: Global perspectives, local designs; Pfeiffer Publishing: 2006; pp. 3-21.
Jääskelä, P.; Heilala, V.; Kärkkäinen, T.; Häkkinen, P. Student agency analytics: Learning analytics as a tool for analysing student agency in higher education. Behaviour & Information Technology; 2021; 40,
Kohen-Vacs, D.; Usher, M.; Jansen, M. Integrating generative AI into programming education: Student perceptions and the challenge of correcting AI errors. International Journal of Artificial Intelligence in Education; 2025; pp. 1-19. [DOI: https://dx.doi.org/10.1007/s40593-025-00496-4]
Kurtz, G.; Amzalag, M.; Shaked, N.; Zaguri, Y.; Kohen-Vacs, D.; Gal, E.; Zailer, G.; Barak-Medina, E. Strategies for integrating generative AI into higher education: Navigating challenges and leveraging opportunities. Education Sciences; 2024; 14,
Leenknecht, M.; Wijnia, L.; Köhlen, M.; Fryer, L.; Rikers, R.; Loyens, S. Formative assessment as practice: The role of students’ motivation. Assessment & Evaluation in Higher Education; 2021; 46,
Liaw, S. S.; Huang, H. M. Perceived satisfaction, perceived usefulness and interactive learning environments as predictors to self-regulation in e-learning environments. Computers & Education; 2013; 60,
Lin, X.; Gao, L. Students’ sense of community and perspectives of taking synchronous and asynchronous online courses. Asian Journal of Distance Education; 2020; 15,
Niemiec, C. P.; Ryan, R. M. Autonomy, competence, and relatedness in the classroom: Applying self-determination theory to educational practice. Theory and Research in Education; 2009; 7,
Pintrich, P. R. A manual for the use of the Motivated Strategies for Learning Questionnaire (MSLQ); The University of Michigan, National Center for Research to Improve Postsecondary Teaching and Learning: 1991.
Popham, W. J. Instructional sensitivity: What it is and why it matters. Educational Measurement: Issues and Practice; 2007; 26,
Rudolph, J.; Tan, S.; Tan, S. ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching; 2023; 6,
Ryan, R. M.; Deci, E. L. Intrinsic and extrinsic motivation from a self-determination theory perspective: Definitions, theory, practices, and future directions. Contemporary Educational Psychology; 2020; 61, 101860. [DOI: https://dx.doi.org/10.1016/j.cedpsych.2020.101860]
Stroet, K.; Opdenakker, M. C.; Minnaert, A. Effects of need supportive teaching on early adolescents’ motivation and engagement: A literature review. Educational Research Review; 2013; 9, pp. 65-87. [DOI: https://dx.doi.org/10.1016/j.edurev.2012.11.003]
Taber, K. S. The use of Cronbach’s alpha when developing and reporting research instruments in science education. Research in Science Education; 2018; 48,
Tekkol, İ. A.; Demirel, M. An investigation of self-directed learning skills of undergraduate students. Frontiers in Psychology; 2018; 9, 410879. [DOI: https://dx.doi.org/10.3389/fpsyg.2018.02324]
Usher, M.; Amzalag, M. From prompt to polished: Exploring student–chatbot interactions for academic writing assistance. Education Sciences; 2025; 15,
Walker, S. L. Development of the distance education learning environments survey (DELES) for higher education. The Texas Journal of Distance Learning; 2005; 2,
Yang, X.; Zhang, M. GenAI distortion: The effect of GenAI fluency and positive affect. arXiv; 2024; arXiv: 2404.17822
Zviel-Girshin, R. The good and bad of AI tools in novice programming education. Education Sciences; 2024; 14,
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).