Background
Ethical decision-making is at the core of higher education, yet case-based ethics training often lacks depth and practical judgment. This study investigates whether integrating Artificial Intelligence (AI) and Virtual Reality (VR) enhances ethical reasoning compared with conventional training. Sixty undergraduates in business and engineering were randomly assigned to a control group (traditional case-based role-play) or an experimental group (immersive training with Meta Quest 3 head-mounted displays using the VirtualSpeech platform). The research methodology was grounded in Descriptive Decision Theory and the Learning-Oriented Assessment (LOA) framework, emphasizing formative, feedback-rich learning aligned with Cognitive Load Theory, Experiential and Constructivist Learning, Dual-Process Theory, and AI-driven adaptive guidance.
Results
Ethical competence was assessed pre- and post-intervention across seven dimensions: dilemma recognition, evaluation of alternatives, justification, consequence analysis, contextualization, application of principles, and stakeholder/social impact. Both groups improved significantly, but the AI/VR group showed consistently larger gains. Paired and independent t-tests, with effect-size estimates (Cohen's d and Hedges' g), revealed large effects favoring immersive learning. The highest post-test advantages for the AI/VR group were observed in consequence analysis (t = −96.90, Δ = 23.30, p < 0.001), evaluation of alternatives (t = −90.03, Δ = 20.20, p < 0.001), and application of ethical principles (t = −80.57, Δ = 20.83, p < 0.001). Low within-group dispersion and sample homogeneity supported the internal consistency and robustness of the outcomes under immersive conditions.
Conclusions
Immersive, feedback-rich AI/VR training significantly outperformed traditional methods in strengthening ethical reasoning. The findings support integrating AI- and VR-based simulations into ethics curricula to enhance consequence analysis, principled reasoning, and stakeholder awareness. Future research should explore long-term effects, hybrid delivery, and broader applicability across disciplines and professional settings.
Introduction
Ethical decision-making is increasingly recognized as a core competency for undergraduate students across professional disciplines, particularly in contexts where value-laden judgments intersect with real-world complexity (Shin et al., 2023; Schuering & Schmid, 2024). Despite its importance, traditional ethics instruction often falls short of promoting deep engagement or fostering practical judgment skills, thus limiting students’ ability to transfer ethical reasoning into action (Moya et al., 2024).
To overcome these limitations, emerging technologies such as Artificial Intelligence (AI) and Virtual Reality (VR) are gaining traction as pedagogical tools capable of bridging the gap between theoretical instruction and applied moral reasoning (AlGerafi et al., 2023; AlAli & Wardat, 2024; Angel-Urdinola et al., 2022; Suguna et al., 2021; Wong et al., 2024; Baucells & Katsikopoulos, 2011). These tools offer immersive environments where learners can simulate dilemmas, receive real-time feedback, and rehearse decision-making in risk-free settings that approximate professional challenges.
However, while promising, the application of AI and VR in ethics education remains underexplored. Existing studies largely center on technical training in STEM fields and often adopt conceptual or descriptive perspectives, leaving an empirical gap regarding their actual impact on ethical competence (Almusaed et al., 2023; Shin et al., 2023; Schuering & Schmid, 2024). Few studies have rigorously assessed whether technology-enhanced environments genuinely improve ethical sensitivity, reasoning quality, or learner engagement (Aharoni et al., 2024; Angel-Urdinola et al., 2022). Moreover, ethics education studies often prioritize conceptual frameworks over experimental validation, leaving gaps in understanding their effectiveness for ethical reasoning.
To address this empirical and pedagogical gap, this study evaluates the effectiveness of AI- and VR-supported instruction for enhancing ethical decision-making among undergraduate students. Grounded in Descriptive Decision Theory and the Learning-Oriented Assessment (LOA) framework (Baucells & Katsikopoulos, 2011; Çakmak et al., 2023; Chandler, 2017; Davidson & Coombe, 2022; González et al., 2024), the intervention employed Meta Quest 3 headsets and the VirtualSpeech platform to immerse participants in simulated ethical scenarios, comparing outcomes with those of a traditional-instruction group through a controlled pre/post experimental design (see Methodology section).
The design is anchored in existing theoretical frameworks. Cognitive Load Theory (Sweller et al., 2019) informs the AI-driven scaffolding, which aims to reduce extraneous processing and free germane cognitive resources during ethical reasoning. Kolb's Experiential Learning Theory (1984) supports the use of immersive VR environments to promote abstract conceptualization through active engagement with realistic moral scenarios (Rest, 1986). Constructivist Learning Theory (Piaget, 1971; Vygotsky, 1978) underpins the co-construction of ethical meaning through interaction with complex dilemmas. AI-driven adaptive learning models justify the personalization of ethical guidance based on students' evolving decision patterns, reinforcing reflection through real-time feedback. Finally, the Dual-Process Theory of Moral Judgment (Kahneman, 2011) frames the intervention's intent to balance intuitive processes (System 1: rapid, heuristic-based thinking) and deliberative processes (System 2: structured analytic reasoning) (Moya et al., 2024) by combining rapid scenario exposure with scaffolded ethical justification.
Methodologically, Descriptive Decision Theory (Baucells & Katsikopoulos, 2011) and the principles of Learning-Oriented Assessment (Carless, 2007; Andrade & Cizek, 2010) form the backbone of this investigation. The former offers a decision-analytic lens for examining how students weigh options under uncertainty and ambiguity, conditions that reflect real-world ethical dilemmas. The latter provides a formative assessment framework centered on feedback, learner agency, and iterative improvement, which guided both the instructional structure and the scoring process. Together, these frameworks reinforce the decision-making realism and evaluative rigor of the experimental design.
Recent contributions in STEM education highlight that explicit-reflective AI ethics modules strengthen students' ethical knowledge and problem-solving in science and engineering programs (Falebita & Kok, 2025; Nam & Bai, 2023; Usher & Barak, 2024), while discourse analyses of generative AI address issues of integrity and responsible AI use in higher education. In parallel, undergraduates' technological readiness, self-efficacy, and attitudes are key drivers of AI-tool uptake and learning value (Falebita & Kok, 2025). These developments frame the present study's focus on immersive AI–VR training aimed at enhancing learners' ethical decision-making and competency (Falebita & Kok, 2025; Nam & Bai, 2023; Usher & Barak, 2024).
Guided by these theoretical pillars, the study investigates the following research questions:
To what extent do AI and VR enhance ethical decision-making skills in undergraduate students?
How do improvements observed in AI- and VR-based learning compare to those achieved through traditional training methods?
Accordingly, we hypothesize that AI- and VR-based instruction will lead to greater improvements in students' ethical reasoning, engagement, and satisfaction than conventional approaches. Empirical validation of this hypothesis aims to provide evidence-based guidance for the design of technology-enhanced ethics curricula and to contribute to the growing literature on immersive learning for moral development and ethical decision-making skills among students.
Background information
Applications of AI and VR technologies
The integration of Artificial Intelligence (AI) and Virtual Reality (VR) is transforming education by enhancing cognitive and ethical decision-making skills (AlGerafi et al., 2023; AlAli & Wardat, 2024; Angel-Urdinola et al., 2022; Suguna et al., 2021; Wong et al., 2024). AI refers to computational systems that simulate human intelligence through data processing, pattern recognition, and informed decision-making, often using machine learning and natural language processing (Wilbanks et al., 2024). VR immerses users in digitally simulated environments that replicate or create new experiences, enabling interactive and experiential learning (Hollaender et al., 2023; Schicktanz et al., 2023). Together, these technologies foster critical thinking and problem-solving across fields such as healthcare, business, and engineering (Baucells & Katsikopoulos, 2011; Nieto et al., 2019). By combining AI-driven adaptive learning and VR-based simulations, educational institutions bridge the gap between theory and practice, allowing students to engage dynamically with ethical dilemmas in realistic contexts.
AI also offers real-time feedback mechanisms tailored to individual learning patterns, while VR facilitates experiential learning that reinforces ethical reasoning (Cheung et al., 2024; Schicktanz et al., 2023). For instance, VR enhances surgical training and patient communication in healthcare (Hollaender et al., 2023), whereas AI-driven simulations improve strategic decisions in business (Rashid et al., 2021). These applications illustrate AI and VR’s potential to revolutionize ethical education, addressing the limitations of traditional methods in offering realistic, hands-on experiences.
AI and VR for ethical decision-making in higher education
Higher education curricula often prioritize technical skills over ethical reasoning, despite professionals regularly facing complex ethical dilemmas (Schuering & Schmid, 2024). Although frameworks such as deontology, utilitarianism, and virtue ethics provide structured approaches to moral reasoning (Rest, 1986), their application in real-world scenarios remains abstract (Moya et al., 2024).
AI and VR can bridge this gap by immersing students in ethically complex situations where they must apply ethical theories in dynamic, high-stakes contexts (Shin et al., 2023; Schuering & Schmid, 2024). AI systems personalize training by analyzing decision patterns and delivering real-time adaptive feedback (World Economic Forum, 2021; Wilbanks et al., 2024), while VR enables students to experience the consequences of their decisions, fostering critical thinking and ethical foresight (Wibisono et al., 2024).
Nevertheless, implementing AI and VR in ethics education raises concerns regarding algorithmic bias, transparency, and data privacy (Almusaed et al., 2023). AI feedback must be objective and free from bias, while VR simulations should maintain realism to prevent desensitization to ethical issues (Oliveira et al., 2024). Thus, ensuring that AI- and VR-based ethics training remains ethical, inclusive, and pedagogically sound is essential for their effective adoption in higher education.
Current state of research: AI and VR for ethical decision-making
Advances in AI-driven ethical reasoning frameworks, immersive VR simulations, and hybrid AI–VR models are shaping new methodologies for ethical competence development. However, research gaps persist regarding their long-term effectiveness and real-world applicability.
Technological approaches and methods
a) AI-driven ethical reasoning frameworks
Machine learning models and decision trees have been used to analyze ethical dilemmas and offer recommendations based on ethical theories (Wilbanks et al., 2024; Zhang & Zhang, 2023). Recent studies combine deep learning with symbolic reasoning to enhance interpretability and transparency in AI-assisted ethical decision-making (Uddin, 2024). These models have been shown to enable personalized learning experiences, adapting to students' ethical reasoning development.
b) VR-based ethical simulations
VR-based systems and applications have been shown to enhance ethical decision-making by simulating real-world dilemmas, allowing students to assess consequences, contextualize cases, and apply ethical principles (Schicktanz et al., 2023; Shin et al., 2023). Immersive VR environments can increase emotional engagement and, in turn, make ethical dilemmas more tangible and impactful (Lim et al., 2022).
c) Hybrid AI–VR models
A promising approach integrates AI-driven feedback within VR simulations, allowing real-time guidance that is tailored to students' ethical reasoning processes (Cheung et al., 2024). Such models can track decision patterns, providing adaptive learning pathways that reinforce ethical principles. Studies suggest that hybrid AI–VR models enhance engagement, critical thinking, and moral reflection, making them an effective tool for ethical education (Asante et al., 2024).
Literature review: main gaps and guiding solutions
Our review of the available literature shows that the use of immersive technology for ethics training still faces four recurrent weaknesses: limited ecological validity, algorithmic bias, short-term evaluation, and disciplinary silos (Oliveira et al., 2024; Cotič et al., 2024; Setyawarno et al., 2024; Varas et al., 2023). To address these gaps, our intervention merges AI-driven scaffolding with VR simulations, drawing on relevant existing frameworks: Cognitive Load Theory (Sweller et al., 2019), which shapes adaptive feedback that filters extraneous information; Kolb's Experiential Learning Cycle (1984), which justifies placing students in realistic, consequence-laden scenarios; and Constructivist Learning Theory (Piaget, 1971; Vygotsky, 1978), which underpins guided debriefings where participants co-construct ethical meaning. Moreover, Descriptive Decision Theory (Baucells & Katsikopoulos, 2011; Kahneman & Tversky, 1979) and Learning-Oriented Assessment (LOA) (Carless, 2007; Andrade & Cizek, 2010; Davidson & Coombe, 2022; González et al., 2024) inform the simulation design. LOA emphasizes formative feedback, learner agency, and the cultivation of self-regulated competencies, involving students in the assessment process and focusing on learning through active engagement rather than merely measuring achievement. Descriptive Decision Theory frames ethical decisions as probabilistic judgments and helps analyze how individuals evaluate and justify ethical choices through statistical measures, further justifying the real-time, feedback-rich environments used in this study to recalibrate students' moral sensitivity.
Design features explicitly tackle some of the weaknesses identified in the literature: authentic stakeholder tension enhances ecological validity; diverse datasets and model audits mitigate bias; a longitudinal tracking plan supports durability testing; and a cross-disciplinary design team bridges methodological silos (Schuett, 2023; Almusaed et al., 2023; Aharoni et al., 2024; Han, 2025). The following Methodology section details how the data sampling, randomization, instruments and analyses operationalize these core principles.
Methodology
Research design and theoretical framework
The method of this study is grounded in two main theoretical frameworks: Descriptive Decision Theory (Baucells & Katsikopoulos, 2011; Kahneman & Tversky, 1979) and Learning-Oriented Assessment (LOA) (Carless, 2007; Andrade & Cizek, 2010; Davidson & Coombe, 2022; González et al., 2024). The Descriptive Decision Theory (Baucells & Katsikopoulos, 2011) models how individuals make decisions under uncertainty, complexity, and bounded rationality through statistical measures. This framework aligns with the study’s aim of examining how students address real-world ethical dilemmas rather than adhere to prescriptive moral standards. From an instructional perspective, the method adopts the Learning-Oriented Assessment framework (Carless, 2007; Andrade & Cizek, 2010; Davidson & Coombe, 2022; González et al., 2024), which emphasizes formative feedback, learner agency, and the cultivation of self-regulated competencies. This pedagogical approach guided both the design of the feedback mechanisms and the evaluation model in this study.
As outlined in the introduction, the study also draws on existing theories that underpin the integration of AI and VR for learning: Constructivist Learning Theory, Experiential Learning, Cognitive Load Theory, AI-Driven Adaptive Learning, and the Dual-Process Theory of Moral Judgment. Together, these frameworks support the use of immersive and adaptive tools to enhance ethical reasoning by balancing intuitive and analytical processes while promoting engagement with complex moral scenarios.
Together, these theories shape the experimental framework, pedagogical logic, and evaluative procedures of the present study.
Data sampling
This study was conducted in a higher education setting with 60 undergraduate students. After obtaining informed consent, the authors used Excel’s random-number generator to assign participants to either a control (n = 30) or an experimental group (n = 30). Randomization was block-stratified by academic program (business vs. engineering) and gender, producing comparable groups and strengthening internal validity. The control group (n = 30) engaged in a traditional classroom-based learning model by participating in structured role-playing case studies where students assumed predefined stakeholder roles in a boardroom-style discussion to analyze an AI ethics dilemma. This method aimed to replicate real-world ethical challenges through interactive deliberation, requiring students to justify their decisions based on ethical principles and anticipate potential stakeholder concerns.
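The block-stratified assignment described above can be sketched in Python. This is a hypothetical illustration only: the study itself used Excel's random-number generator, and the roster construction, stratum sizes, and function names below are assumptions made for the sketch.

```python
import random
from collections import defaultdict

def block_stratified_assign(participants, seed=42):
    """Assign participants to control/experimental groups at random,
    balancing the split within each (program, gender) stratum."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for p in participants:
        strata[(p["program"], p["gender"])].append(p)
    labels = ("control", "experimental")
    assignment, start = {}, 0
    for members in strata.values():
        rng.shuffle(members)
        # Alternate labels within the stratum for an even (or near-even) split
        for i, p in enumerate(members):
            assignment[p["id"]] = labels[(i + start) % 2]
        # Carry any odd-stratum imbalance forward so overall totals stay 30/30
        start = (start + len(members)) % 2
    return assignment

# Hypothetical roster: 60 students, 15 per program-by-gender cell
roster = [
    {"id": i, "program": prog, "gender": g}
    for i, (prog, g) in enumerate(
        (prog, g)
        for prog in ("business", "engineering")
        for g in ("F", "M")
        for _ in range(15)
    )
]
groups = block_stratified_assign(roster)
print(sum(v == "control" for v in groups.values()))  # 30
```

Carrying the leftover label across strata keeps the overall totals equal even when individual strata have odd sizes.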
In contrast, the experimental group (n = 30) underwent training using Meta Quest 3 headsets and the VirtualSpeech platform, which provided an immersive, AI-powered simulation for ethical decision-making. Meta Quest 3 is an advanced wireless VR headset with enhanced graphical fidelity, real-time spatial tracking, and AI-driven interactive environments, allowing users to experience dynamic, real-world ethical dilemmas in a controlled environment. The VirtualSpeech platform integrates voice recognition and automated feedback systems to simulate real-time ethical dilemmas, allowing students to interact with AI-powered avatars that assess responses and provide structured feedback. Participants in this group engaged in decision-based branching scenarios, where they had to assess ethical risks, justify their choices, and respond dynamically to AI-generated stakeholder feedback. This design allowed for adaptive, scenario-based learning, enabling students to experience the immediate consequences of their ethical decisions and refine their reasoning through iterative decision-making cycles.
To assess ethical decision-making skills, both groups completed pre- and post-assessment surveys, evaluating key dimensions of ethical reasoning. As summarized in Table 2, the assessment criteria were selected based on established ethical frameworks, ensuring a comprehensive evaluation of students' ability to identify ethical dilemmas, analyze alternatives, justify decisions, and consider social and professional implications.
To ensure scoring reliability, all responses were independently evaluated using a four-level ordinal scale—Insufficient, Basic, Satisfactory, and Outstanding—and cross-tabulated in a 4 × 4 agreement matrix. Two faculty experts specializing in ethics and instructional design conducted the assessment independently. Inter-rater consistency reached 76.7% agreement (23 of 30 cases), with a Cohen’s κ coefficient of 0.68, indicating substantial reliability according to Landis and Koch (1977). The analysis was performed using the cohen_kappa_score function in Python’s sklearn.metrics module. Importantly, the raters were blinded to participants’ group assignment, ensuring objectivity and minimizing potential bias.
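As an illustration, the agreement rate and κ can be reproduced with scikit-learn's cohen_kappa_score, the function the study reports using. The ratings below are hypothetical stand-ins constructed so the two raters disagree on 7 of 30 cases; they are not the actual rater data, so the resulting κ differs slightly from the reported 0.68.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings for 30 cases by two blinded raters on the
# four-level ordinal scale; constructed to agree on 23 of 30 cases.
rater_a = (["Insufficient"] * 5 + ["Basic"] * 8
           + ["Satisfactory"] * 10 + ["Outstanding"] * 7)
rater_b = (["Insufficient"] * 5 + ["Basic"] * 8
           + ["Satisfactory"] * 10 + ["Satisfactory"] * 7)

agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"agreement = {agreement:.1%}, kappa = {kappa:.2f}")
# → agreement = 76.7%, kappa = 0.67 for these hypothetical ratings
```

Unlike raw percent agreement, κ discounts agreement expected by chance, which is why the two numbers diverge.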
Experimental process
The experimental process, summarized in Fig. 1, consisted of participant preparation, tailored ethical decision-making interventions, and post-intervention assessments used to evaluate the students’ learning outcomes. The control group engaged in a traditional role-playing case study, assuming predefined stakeholder roles within a boardroom-style discussion to deliberate on an AI ethics dilemma. The experimental group, in contrast, participated in an interactive VR simulation powered by VirtualSpeech’s AI-driven avatars. Each student received a scenario-specific prompt and interacted with the avatar in real time, addressing dynamic stakeholder perspectives and ethical complexities. The full structure of the scenario, including role-based prompts, AI interaction flow, and feedback mechanisms, is detailed in Supplementary file 1 (SC, Fig. SC1).
Fig. 1
Flowchart of the experimental design process
Both groups were evaluated using the Ethical Decision-Making Evaluation Rubric (Supplementary file 1, SB) (Association of American Colleges and Universities, 2009), ensuring consistent assessment criteria across traditional and immersive learning environments. The inferential data analysis (see Methodology) examines improvements in ethical reasoning, decision justification, and stakeholder impact assessment, providing insights into the comparative effectiveness of each instructional method.
Assessment of ethical decision-making skills
To assess students' ethical decision-making consistently and objectively, a structured rubric (Table 1), based on established ethical competence frameworks (Association of American Colleges and Universities, 2009), was used across both groups. Cronbach’s alpha (α) ranged from .90 (pre-test) to .94 (post-test), indicating strong internal consistency. The rubric evaluates seven key dimensions, each with clear performance descriptors to standardize assessments.
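For readers wishing to verify internal consistency on their own data, Cronbach's α can be computed directly from the participant-by-dimension score matrix. The helper below is a minimal sketch (the function name and demo matrix are hypothetical, not from the study):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (participants x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical: 4 participants x 7 rubric dimensions, perfectly consistent items
demo = [[60] * 7, [70] * 7, [80] * 7, [65] * 7]
print(round(cronbach_alpha(demo), 3))  # ≈ 1.0
```

With real rubric scores, values of α between .90 and .94, as reported above, indicate that the seven dimensions cohere as a single scale.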
Table 1. Rubric for ethical decision-making skills assessment
Criterion | Description | Score range | Performance level descriptors |
|---|---|---|---|
Recognition of Ethical Dilemmas | Identifies ethical dilemmas clearly | 0–100% | High: comprehensive and clear; Medium: adequate but partial; Low: minimal or unclear identification |
Evaluation of Alternatives | Analyzes alternatives, considering pros and cons | 0–100% | High: thorough and balanced; Medium: basic, lacks depth; Low: minimal or absent analysis |
Justification of Decisions | Provides logical justifications for decisions | 0–100% | High: strong and logical; Medium: basic support; Low: weak or absent justification |
Consideration of Consequences | Considers short- and long-term impacts | 0–100% | High: comprehensive; Medium: basic, lacks depth; Low: minimal or no consideration |
Contextualization of Cases | Contextualizes the case broadly | 0–100% | High: thorough; Medium: basic context; Low: minimal or absent context |
Application of Ethical Principles | Applies relevant ethical principles | 0–100% | High: well-applied; Medium: basic application; Low: minimal or no application |
Assessment of Social Impact | Evaluates impact on stakeholders | 0–100% | High: thorough; Medium: basic evaluation; Low: minimal or absent evaluation |
Initial assessment
Both groups completed an individual ethical decision-making task as a baseline skills assessment, evaluated independently by two professors, with results averaged to ensure objectivity. The same method was used for post-intervention assessments.
Assessment criteria: Scores (0–100%) were based on the rubric (Table 1), covering recognition of ethical dilemmas, evaluation of alternatives, justification of decisions, consideration of consequences, contextualization of cases, application of ethical principles, and assessment of social impact.
Pre-intervention survey: Both groups (experimental and control) completed a survey measuring initial interest in the training, perceived ethical decision-making skills, and satisfaction with feedback, comprising three Likert-scale questions (Q1–Q3) rated on a 5-point scale (1 = strongly disagree, 5 = strongly agree) and one open-ended question (Q4) (see Supplementary file 1, SA):
How interested are you in the training materials and methods provided?
How do you rate your current ethical decision-making skills?
How satisfied are you with the feedback and evaluation you have received in past training?
Please describe your personal experience with ethical decision-making skills training so far.
Intervention
Control group: Received traditional ethical decision-making skills training, including lectures, readings, and practice sessions without AI or VR.
Experimental group: Utilized Meta Quest 3 headsets and the VirtualSpeech platform. They practiced ethical decision-making tasks in a fully immersive VR environment and received real-time AI-driven feedback. The simulation, guided by scripted prompts, placed students in dynamic stakeholder interactions where their responses influenced the avatar’s behavior and the progression of the dilemma. A detailed description of the VR-based branching scenario, including context, role, interaction flow, and instructional structure, is provided in Supplementary file 1 (SC).
Final assessment
Both groups completed a second (post-test) ethical decision-making task similar to the initial assessment. The same assessment criteria (Supplementary file 1, SA) were used to evaluate post-intervention ethical decision-making skills.
Post-intervention survey:
Control group: Completed a post-intervention follow-up survey on interest, perceived improvement, and satisfaction, using the same three Likert-scale questions and one open-ended question as in the pre-intervention survey (see Supplementary file 1, SA).
Experimental group: Completed the same follow-up survey as the control group, with an additional Likert-scale question assessing their perception of skill improvement attributable to the VR/AI tools used.
Key dimensions of ethical decision-making
Ethical decision-making involves cognitive, analytical, and contextual considerations, requiring individuals to navigate complex dilemmas with structured reasoning (Shin et al., 2023; Schuering & Schmid, 2024; Wibisono et al., 2024). Based on the existing literature and empirical findings from this study, seven competencies were identified as critical for assessing ethical decision-making. These competencies align with prior research emphasizing theoretical application, critical evaluation, and real-world ethical reasoning.
Table 2 summarizes each criterion, detailing its definition, significance, and supporting references, illustrating the theoretical underpinnings that inform ethical decision-making and assessments.
Table 2. Ethical decision-making criteria and their importance
Criterion | Definition | Importance | Key references |
|---|---|---|---|
Recognition of Ethical Dilemmas | The ability to identify ethical issues at an early stage | Essential for timely problem-solving and ethical decision-making in complex professional settings | (Moya et al., 2024; Oliveira et al., 2024) |
Evaluation of Alternatives | The process of analyzing different courses of action before making a decision | Helps individuals understand potential outcomes and make informed decisions, particularly in multidisciplinary fields | (Cotič et al., 2024; Setyawarno et al., 2024) |
Justification of Decisions | The ability to defend ethical choices with logical and evidence-based reasoning | Strengthens credibility, supports ethical integrity, and enhances professional accountability | (Wilbanks et al., 2024) |
Consideration of Consequences | The ability to foresee and mitigate potential ethical and social impacts of decisions | Encourages critical thinking and aligns decisions with ethical standards and stakeholder interests | (Setyawarno et al., 2024; Han, 2025) |
Contextualization of Cases | Understanding ethical decisions within broader technical, social, and environmental contexts | Enhances relevance and accuracy in ethical decision-making, particularly in real-world applications | (Schicktanz et al., 2023; Zhang & Zhang, 2023) |
Application of Ethical Principles | The ability to apply established ethical norms and professional standards in practice | Ensures theoretical knowledge translates into ethical competence in professional environments | (Almusaed et al., 2023; Schuett, 2023) |
Assessment of Social Impact | Evaluating how ethical decisions affect different stakeholders and society at large | Promotes responsible decision-making by considering long-term societal consequences | (Setyawarno et al., 2024; Han, 2025) |
Implications for ethical training and future research
The structured competencies in Table 2 underscore the potential of AI- and VR-based interventions to enhance ethical decision-making. These immersive environments can help students:
Assess ethical consequences more effectively.
Apply ethical principles in decision-making contexts.
Contextualize dilemmas within broader social and professional frameworks.
These competencies support integrating AI and VR to foster critical thinking and moral reasoning. Nevertheless, further research is needed to evaluate the long-term retention and real-world application of these skills.
Data description and statistical analysis
1) Descriptive statistics
Table 3 summarizes the mean scores (%) for each ethical decision-making criterion in both the control and experimental groups, measured before and after the intervention. These descriptive statistics provide an overview of the changes in ethical reasoning and performance, highlighting potential improvements attributable to AI and VR-based training.
Table 3. Mean scores for each ethical decision-making criterion
Criterion | Control group (pre) (%) | Control group (post) (%) | Experimental group (pre) (%) | Experimental group (post) (%) |
|---|---|---|---|---|
Recognition of Ethical Dilemmas | 65 | 72 | 66 | 78 |
Evaluation of Alternatives | 64 | 70 | 59 | 79 |
Justification of Decisions | 63 | 65 | 61 | 76 |
Consideration of Consequences | 55 | 63 | 56 | 79 |
Contextualization of Cases | 65 | 68 | 61 | 84 |
Application of Ethical Principles | 60 | 65 | 60 | 81 |
Assessment of Social Impact | 58 | 61 | 59 | 78 |
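The group means in Table 3 can be tabulated to compute each criterion's pre-to-post gain per group; a brief pandas sketch (column names are illustrative, the values are taken from Table 3):

```python
import pandas as pd

# Mean scores (%) per criterion, from Table 3
table3 = pd.DataFrame({
    "criterion": [
        "Recognition of Ethical Dilemmas", "Evaluation of Alternatives",
        "Justification of Decisions", "Consideration of Consequences",
        "Contextualization of Cases", "Application of Ethical Principles",
        "Assessment of Social Impact",
    ],
    "ctrl_pre":  [65, 64, 63, 55, 65, 60, 58],
    "ctrl_post": [72, 70, 65, 63, 68, 65, 61],
    "exp_pre":   [66, 59, 61, 56, 61, 60, 59],
    "exp_post":  [78, 79, 76, 79, 84, 81, 78],
})

# Pre-to-post gain (percentage points) per group
table3["ctrl_gain"] = table3["ctrl_post"] - table3["ctrl_pre"]
table3["exp_gain"] = table3["exp_post"] - table3["exp_pre"]
print(table3[["criterion", "ctrl_gain", "exp_gain"]])
```

The resulting gain columns make the pattern in Table 3 explicit: the experimental group's gains range from 12 to 23 points, against 2 to 8 points for the control group.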
2) Statistical analysis (quantitative analysis)
To assess the impact of the teaching models (traditional vs. AI- and VR-based intervention), a series of statistical tests was conducted using both the mean rubric scores for the ethical decision-making assessment and the pre- and post-intervention survey responses (see Data analysis):
Assumption checks: Normality (Shapiro–Wilk) and homogeneity of variance (Levene’s test) were satisfied for all variables. Details of the data diagnostics are provided in Supplementary file 1, SD (Table SD1).
Paired t-tests were performed within each group (control vs. experimental) to determine whether post-intervention scores showed statistically significant improvements.
Independent t-tests compared the post-intervention mean differences between control and experimental groups, evaluating the overall effect of the intervention.
Effect-size analysis: Hedges’ g with pooled SD and 95% confidence intervals (CI) quantified the magnitude of improvements across the seven ethical-decision dimensions, facilitating cross-study comparison (Hedges & Olkin, 1985).
The significance level was set at p ≤ 0.05. All computations were carried out in Python (SciPy, pandas; Python Software Foundation, 2023).
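A minimal sketch of this analysis pipeline, using synthetic scores in place of the study's data (the distributions, sample values, and helper name below are assumptions; only the test choices mirror the procedure described above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic per-student scores for one criterion (n = 30 per group)
pre = rng.normal(60, 5, 30)
post_ctrl = pre + rng.normal(5, 2, 30)    # modest hypothetical gain
post_exp = pre + rng.normal(20, 2, 30)    # larger hypothetical gain

# Within-group improvement: paired t-test (pre vs. post)
t_paired, p_paired = stats.ttest_rel(pre, post_exp)

# Between-group comparison of post-test scores: independent t-test
t_ind, p_ind = stats.ttest_ind(post_ctrl, post_exp)

def hedges_g(a, b):
    """Hedges' g: standardized mean difference with pooled SD
    and the small-sample correction factor J."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    g = (b.mean() - a.mean()) / pooled_sd
    return g * (1 - 3 / (4 * (na + nb) - 9))

print(f"paired p = {p_paired:.3g}, independent p = {p_ind:.3g}, "
      f"g = {hedges_g(post_ctrl, post_exp):.2f}")
```

The correction factor J shrinks the raw standardized difference slightly, which matters at n = 30 per group and supports the cross-study comparisons cited above (Hedges & Olkin, 1985).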
3) Qualitative analysis
To complement the quantitative findings, the open-ended survey responses (Supplementary file 1, SA) were analyzed using thematic analysis, a widely recognized qualitative research method (Braun & Clarke, 2006). This approach enabled the identification of key themes, providing deeper insights into:
The students' ethical reasoning processes,
Their engagement with the AI and VR technologies, and
The perceived impact of the intervention on decision-making skills.
This qualitative component (see Data analysis) enhances the study’s comprehensiveness by capturing subjective experiences and cognitive reflections that may not be fully represented through the quantitative measures alone.
4) Ethical considerations
The study adhered to established ethical guidelines for research of this nature. All participants provided explicit informed consent prior to participation, ensuring voluntariness and transparency regarding the study’s objectives. Participants were clearly informed that their responses would remain confidential and anonymous and would be used solely for academic and research purposes, in alignment with institutional and ethical research standards.
5) Supplementary material
Extended materials are provided in Supplementary file 1, including the complete instruments and items, the full ethical decision-making rubric descriptors, the VR branching scenario and AI-feedback prompts, and statistical diagnostics.
Data analysis and results
Analysis of mean scores for rubrics developed for ethical decision-making assessment
Paired sample statistics (control group)
Table 4 presents the paired t-test results for the control group, comparing pre- and post-intervention scores across all ethical decision-making criteria (see Tables 1 and 3). While statistically significant improvements (p < 0.001) were observed, the effect sizes indicate that the impact of traditional methods was limited compared to the AI- and VR-based method (see Experimental group analysis). The most notable improvements occurred in Consideration of Consequences (t = −24.75, mean diff. = 8.02, p < 0.001), Evaluation of Alternatives (t = −19.72, mean diff. = 5.87, p < 0.001), and Recognition of Ethical Dilemmas (t = −31.68, mean diff. = 5.57, p < 0.001), while Justification of Decisions (t = −9.16, mean diff. = 1.82, p < 0.001) showed the smallest gain. These results suggest that, although traditional instruction enhances ethical reasoning, its impact is modest compared to AI- and VR-based training (see Table 5).
Table 4. Paired t test—control group (n = 30)
Criterion | Mean Pre | SD Pre | Mean Post | SD Post | Mean Diff. (Δ) | SDΔ | t | Sig. (p) |
|---|---|---|---|---|---|---|---|---|
Recognition of Ethical Dilemmas | 64.90 | 0.77 | 70.47 | 0.89 | 5.57 | 0.96 | − 31.68 | < 0.001 |
Evaluation of Alternatives | 64.18 | 1.04 | 70.05 | 1.18 | 5.87 | 1.63 | − 19.72 | < 0.001 |
Justification of Decisions | 63.10 | 1.32 | 64.92 | 1.28 | 1.82 | 1.09 | − 9.16 | < 0.001 |
Consideration of Consequences | 54.75 | 1.60 | 62.77 | 0.43 | 8.02 | 1.77 | − 24.75 | < 0.001 |
Contextualization of Cases | 64.98 | 1.52 | 67.85 | 0.88 | 2.87 | 1.30 | − 12.08 | < 0.001 |
Application of Ethical Principles | 60.20 | 1.59 | 65.07 | 1.10 | 4.87 | 1.51 | − 17.61 | < 0.001 |
Assessment of Social Impact | 57.90 | 0.82 | 60.90 | 1.26 | 3.00 | 1.33 | − 12.39 | < 0.001 |
Significance level (p ≤ 0.05), Δ = mean difference
High t-values reflect low within-group SDs (n = 30); negative signs indicate post > pre (Δ = pre − post)
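As the note indicates, each t in Table 4 can be recovered from its own row via t = Δ / (SDΔ / √n). A quick illustrative check in Python (the small discrepancy reflects rounding of the tabled Δ and SDΔ):

```python
import math

def t_from_summary(delta, sd_delta, n=30):
    """Paired t reconstructed from the tabled mean difference (delta)
    and the SD of the difference scores (sd_delta)."""
    return delta / (sd_delta / math.sqrt(n))

# Recognition of Ethical Dilemmas row: delta = 5.57, SD of differences = 0.96
t = t_from_summary(5.57, 0.96)  # ~31.8, vs. the reported |t| = 31.68
```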
Table 5. Paired t test—experimental group (n = 30)
Criterion | Mean Pre | SD Pre | Mean Post | SD Post | Mean Diff. (Δ) | SDΔ | t | Sig (p) |
|---|---|---|---|---|---|---|---|---|
Recognition of Ethical Dilemmas | 65.98 | 0.90 | 78.02 | 0.79 | 12.03 | 1.35 | − 48.77 | < 0.001 |
Evaluation of Alternatives | 59.10 | 1.21 | 79.30 | 0.77 | 20.20 | 1.23 | − 90.03 | < 0.001 |
Justification of Decisions | 60.93 | 1.49 | 76.10 | 1.04 | 15.17 | 1.35 | − 61.35 | < 0.001 |
Consideration of Consequences | 55.83 | 1.30 | 79.13 | 0.80 | 23.30 | 1.32 | − 96.90 | < 0.001 |
Contextualization of Cases | 60.88 | 1.43 | 84.32 | 2.42 | 23.43 | 2.59 | − 49.58 | < 0.001 |
Application of Ethical Principles | 60.15 | 1.18 | 80.98 | 1.44 | 20.83 | 1.42 | − 80.57 | < 0.001 |
Assessment of Social Impact | 58.78 | 1.24 | 78.40 | 1.12 | 19.62 | 1.48 | − 72.42 | < 0.001 |
Significance level (p ≤ 0.05), Δ = mean difference. High t-values reflect low within-group SDs (n = 30); negative signs indicate post > pre (Δ = pre − post)
Paired sample statistics (experimental group)
Table 5 shows the paired t-test results for the experimental group, demonstrating highly significant improvements (p < 0.001) across all criteria. The immersive and interactive nature of the AI and VR intervention produced substantially greater improvements than those of the control group (see Table 4). The largest gains were observed in Evaluation of Alternatives (t = −90.03, mean diff. = 20.20, p < 0.001), Consideration of Consequences (t = −96.90, mean diff. = 23.30, p < 0.001), and Application of Ethical Principles (t = −80.57, mean diff. = 20.83, p < 0.001), highlighting the effectiveness of real-time AI feedback and VR simulations in strengthening learners’ ethical decision-making.
Independent t-tests: post-intervention (control vs experimental)
To evaluate the overall impact of AI and VR on ethical decision-making skills relative to the traditional training method, independent samples t-tests were conducted (Table 6), comparing post-intervention scores between the control and experimental groups. The results indicate that the experimental group outperformed the control group on all criteria (p < 0.001), confirming the advantage of immersive AI- and VR-based training over the traditional method.
Table 6. Post-test comparison between groups (independent t-test, Cohen’s d, and Hedges’ g, n = 30 per group)
Criterion | Control post (M ± SD) | Experimental post (M ± SD) | t (58) | Cohen’s d | Hedges’ g |
|---|---|---|---|---|---|
Recognition of Ethical Dilemmas | 70.47 ± 0.89 | 78.02 ± 0.79 | 34.75 | 8.97 | 8.86 |
Evaluation of Alternatives | 70.05 ± 1.18 | 79.30 ± 0.77 | 35.96 | 9.28 | 9.16 |
Justification of Decisions | 64.92 ± 1.28 | 76.10 ± 1.04 | 37.13 | 9.59 | 9.46 |
Consideration of Consequences | 62.77 ± 0.43 | 79.13 ± 0.80 | 98.66 | 25.47 | 25.14 |
Contextualization of Cases | 67.85 ± 0.88 | 84.32 ± 2.42 | 35.03 | 9.05 | 8.93 |
Application of Ethical Principles | 65.07 ± 1.10 | 80.98 ± 1.44 | 48.09 | 12.42 | 12.26 |
Assessment of Social Impact | 60.90 ± 1.26 | 78.40 ± 1.12 | 56.86 | 14.68 | 14.49 |
Equal variances assumed (Levene’s p > 0.30); p < 0.001 for all t tests. Within-group SDs are ≈ 1 point on a 0–100 scale, inflating effect sizes (see Preliminary Statistical Assumptions Check)
a. g corrects d for small-sample bias: g = d × J, where J = 1 − 3/(4N − 9), N = 60
b. t recalculated as Δ / (SDΔ / √n); value ≈ 27.2 if SDΔ = 1.77 and n = 30
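The table’s values can be checked directly from the reported summary statistics. A minimal sketch (the helper name is illustrative, not the study’s code) reconstructing the independent t statistic and Cohen’s d for one row of Table 6:

```python
import math

def independent_t(m1, s1, n1, m2, s2, n2):
    """Pooled-variance independent-samples t with df = n1 + n2 - 2."""
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m2 - m1) / (sp * math.sqrt(1 / n1 + 1 / n2))

# Consideration of Consequences row: control 62.77 +/- 0.43,
# experimental 79.13 +/- 0.80, n = 30 per group
t = independent_t(62.77, 0.43, 30, 79.13, 0.80, 30)  # ~98.66, as reported
d = t * math.sqrt(1 / 30 + 1 / 30)                   # ~25.47, the tabled Cohen's d
```

The near-exact reconstruction also illustrates the footnote’s caveat: with within-group SDs under one point, even a modest mean difference yields an enormous standardized effect.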
The most pronounced differences were observed in Assessment of Social Impact (mean diff. = 17.50, t = 56.86, p < .001), Contextualization of Cases (mean diff. = 16.47, t = 35.03, p < .001), and Consideration of Consequences (mean diff. = 16.36, t = 98.66, p < .001). These findings suggest that AI and VR technologies enhance students' ability to contextualize ethical dilemmas, anticipate consequences, and assess broader societal impacts, particularly in areas where traditional methods had a more limited effect (Table 6).
To complement the significance testing, effect sizes were calculated using Cohen’s d and Hedges’ g (Table 6; Hedges & Olkin, 1985). These metrics offer a standardized interpretation of the magnitude of the observed differences, independent of sample size. The extremely high values obtained (e.g., g = 25.14 for Consideration of Consequences) primarily reflect the homogeneity of the sample and the low within-group standard deviations on a 0–100 scale, rather than solely the strength of the intervention; caution is therefore warranted to avoid overestimating practical significance. Nonetheless, the consistency of large g values across all criteria reinforces the robustness of the intervention’s impact. Hedges’ g was preferred over d for between-group comparisons given its correction for small-sample bias, offering a more conservative estimate suitable for pilot studies with moderate sample sizes.
In summary, while both groups (control and experimental) showed statistically significant improvements, the AI- and VR-supported interventions led to substantially greater gains. These outcomes, reinforced by the consistently large effect sizes (Table 6), underscore the pedagogical advantages of immersive, scenario-based environments that engage learners in real-time ethical reasoning. Despite the unusually high d and g values, partly attributable to the sample’s homogeneity and narrow score dispersion, the evidence supports the superiority of immersive technologies over traditional instruction in fostering deeper and more transferable ethical competencies (AlGerafi et al., 2023; Angel-Urdinola et al., 2022; Wong et al., 2024).
Effect size analysis
To further assess the magnitude of improvements in the AI- and VR-based (experimental) group relative to the traditional (control) group, Cohen’s d effect sizes were calculated (Table 7). The results confirm substantial effect-size differences between the two groups, demonstrating that the AI and VR intervention had a very high impact on ethical decision-making skills.
Table 7. Cohen’s d effect sizes for the ethical decision-making criteria
Criterion | Cohen’s d (Control) | Interpretation | Cohen’s d (Experimental) | Interpretation |
|---|---|---|---|---|
Recognition of Ethical Dilemmas | 5.80 | Very large | 8.91 | Very large |
Evaluation of Alternatives | 3.60 | Very large | 16.42 | Very large |
Justification of Decisions | 1.67 | Very large | 11.24 | Very large |
Consideration of Consequences | 4.53 | Very large | 17.65 | Very large |
Contextualization of Cases | 2.21 | Very large | 9.05 | Very large |
Application of Ethical Principles | 3.23 | Very large | 14.67 | Very large |
Assessment of Social Impact | 2.26 | Very large | 13.26 | Very large |
Computation: d = Δ / SDΔ = t / √n (for paired tests, n = 30)
Interpretation thresholds: small < 0.5; medium 0.5–0.8; large 0.8–1.3; very large > 1.3 (Sawilowsky, 2009). Effect sizes are reported as Cohen’s d. Conversion to Hedges’ g yields virtually identical values (J ≈ 0.99), as recommended by Morris (2008)
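The identity in the computation note lets the paired effect sizes in Table 7 be recomputed and classified from each row’s Δ and SDΔ. A hedged sketch (function names are illustrative):

```python
def paired_d(delta, sd_delta):
    """Paired-samples Cohen's d: d = delta / SD_delta (equivalently t / sqrt(n))."""
    return delta / sd_delta

def interpret(d):
    """Thresholds as stated in the table note (Sawilowsky, 2009)."""
    d = abs(d)
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    if d <= 1.3:
        return "large"
    return "very large"

# Experimental group, Consideration of Consequences: delta = 23.30, SD_delta = 1.32
d = paired_d(23.30, 1.32)  # ~17.65, matching Table 7
label = interpret(d)       # "very large"
```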
As shown in Table 7, the highest effect sizes were observed in:
Consideration of consequences (d = 17.65).
Evaluation of alternatives (d = 16.42).
Application of ethical principles (d = 14.67).
These findings reinforce the statistical analysis (see paired and independent t-tests), highlighting the role of immersive simulations and adaptive AI feedback in enhancing ethical reasoning beyond traditional case-based approaches.
The high effect sizes in the experimental group (Table 7) align with previous research on the effectiveness of AI and VR in educational settings (AlGerafi et al., 2023; Conrad et al., 2024; Wong et al., 2024). Prior studies show that these technologies enhance engagement, reinforce ethical principles through interactive feedback, and provide realistic ethical dilemmas, leading to stronger decision-making competencies.
Notably, the significant improvements in Consideration of Consequences (d = 17.65), Evaluation of Alternatives (d = 16.42), and Application of Ethical Principles (d = 14.67) suggest that VR’s immersive nature helps students better anticipate the real-world impact of their ethical decisions, while AI-driven feedback fosters more structured ethical reasoning processes.
Consequently, these findings (Table 7) provide compelling evidence that AI and VR serve as powerful tools for ethical decision-making training (World Economic Forum, 2021), addressing limitations of traditional instructional methods and supporting the broader integration of immersive technologies in ethics education (Sari et al., 2021).
Survey analysis (pre- and post-intervention)
This section presents a structured analysis of the pre- and post-intervention survey responses used to evaluate the impact of AI- and VR-enhanced training on the students’ ethical decision-making skills. The analysis follows a comparative approach, assessing both the control (n = 30) and experimental (n = 30) groups through descriptive statistics, paired t-tests, and independent t-tests, in order to measure the impact of the teaching methods on the students’ learning outcomes, engagement, and satisfaction as reflected in their feedback. The data analysis and findings aim to provide insights into the effectiveness of immersive learning in fostering ethical competencies (Kuhail et al., 2022; Uriarte-Portillo et al., 2022).
Common survey questions: key constructs and findings
To ensure comparability across the groups (control vs experimental), the analysis focused on three core survey questions measured through Likert scale (Supplementary file 1, SA), before and after the intervention:
Interest in training materials and methods—measures student engagement and perceived relevance of instructional content.
Perceived ethical decision-making skills—assesses students' confidence in their ethical reasoning and problem-solving abilities.
Satisfaction with feedback and evaluation—evaluates the perceived usefulness and clarity of feedback mechanisms.
The descriptive statistics (Table 8) indicate that the experimental group exhibited substantial improvements across all three dimensions, while the control group showed negligible or no meaningful changes.
Table 8. Descriptive statistics for common survey questions
Question | Control group (pre) | Std. Dev (pre) | Control group (post) | Std. Dev. (post) | Mean Diff | Exp. group (pre) | Std. Dev. (pre) | Exp. group (post) | Std. Dev. (post) | Mean Diff |
|---|---|---|---|---|---|---|---|---|---|---|
Interest in training materials and methods | 2.40 | 0.87 | 2.33 | 0.91 | − 0.07 | 2.27 | 0.83 | 3.93 | 0.82 | + 1.66 |
Perceived ethical decision-making skills | 2.37 | 0.94 | 2.40 | 0.92 | + 0.03 | 2.30 | 0.89 | 4.10 | 0.78 | + 1.80 |
Satisfaction with feedback and evaluation | 2.07 | 0.91 | 2.07 | 0.89 | 0.00 | 2.13 | 0.88 | 4.20 | 0.75 | + 2.07 |
Additionally, the experimental group responded to an extra post-intervention question assessing self-perceived improvement in ethical decision-making. Responses showed a high level of perceived improvement (M = 4.73, SD = 0.44), indicating strong consensus among students regarding the effectiveness of AI- and VR-based training in strengthening ethical decision-making abilities. These subjective perceptions align with the statistical findings from the three core survey questions (Table 8), reinforcing the conclusion that immersive learning enhances both measured competencies and students’ confidence in their ethical reasoning skills (Kuhail et al., 2022).
Statistical analysis of learning outcomes
Paired sample statistics (control group)
A paired t-test was conducted to assess pre- to post-intervention changes in mean scores within the control group (Table 9). Results showed no statistically significant differences (p > 0.05) across all measured constructs, indicating that conventional classroom-based instruction had a limited impact on engagement, perceived learning, or satisfaction with feedback.
Table 9. Paired t-test results for control group
Criterion | t-value | p-value |
|---|---|---|
Interest in training materials and methods | − 0.49 | 0.630 |
Perceived ethical decision-making skills | 0.22 | 0.823 |
Satisfaction with feedback and evaluation | 0.00 | 1.000 |
Note. Significance level (p ≤ 0.05)
Paired sample statistics (experimental group)
In contrast to the control group statistics (Table 9), the experimental group demonstrated highly significant improvements across all three survey constructs (p < 0.001, Table 10), suggesting that AI and VR-enhanced training effectively increased student engagement, strengthened ethical reasoning skills, and improved satisfaction with the feedback received.
Table 10. Paired t-test results for experimental group
Criterion | t-value | p-value |
|---|---|---|
Interest in training materials and methods | − 12.38 | < 0.001 |
Perceived ethical decision-making skills | − 14.21 | < 0.001 |
Satisfaction with feedback and evaluation | − 15.09 | < 0.001 |
Significance level (p ≤ 0.05)
Independent t-test: post-intervention (control vs experimental)
To assess the overall impact of the instructional methods, an independent t-test compared post-intervention scores between control and experimental groups (Table 11). Results showed statistically significant differences favoring the experimental group (p < 0.001), with large Cohen’s d effect sizes, confirming the strong impact of AI and VR-based training on ethical decision-making.
Table 11. Independent t-test results post-intervention
Criterion | Control group (post) | Experimental group (post) | Mean Diff | Std. Dev | t-value | p-value | Cohen’s d |
|---|---|---|---|---|---|---|---|
Interest in training materials and methods | 2.33 | 3.93 | + 1.60 | 0.82 | − 10.42 | < 0.001 | 1.87 (Very High) |
Perceived ethical decision-making skills | 2.40 | 4.10 | + 1.70 | 0.78 | − 13.78 | < 0.001 | 2.15 (Very High) |
Satisfaction with feedback and evaluation | 2.07 | 4.20 | + 2.13 | 0.75 | − 14.92 | < 0.001 | 2.30 (Very High) |
Significance level (p ≤ 0.05)
The experimental group showed notable gains across all criteria, including a + 1.60 increase in interest, + 1.70 in perceived ethical decision-making skills, and + 2.13 in satisfaction with feedback—highlighting the effectiveness of immersive learning in enhancing engagement, competencies, and overall experience (Kuhail et al., 2022).
The results from Table 11 confirm that AI- and VR-based instruction significantly outperformed traditional classroom learning across all criteria. The large effect sizes (Cohen’s d = 1.87–2.30, p < 0.001) highlight the strong practical significance of immersive learning in strengthening ethical reasoning and decision-making confidence.
Specifically, the experimental group reported significantly higher engagement with training materials (t = − 10.42, p < 0.001), greater confidence in ethical reasoning (t = − 13.78, p < 0.001), and improved satisfaction with feedback (t = − 14.92, p < 0.001), as shown in Table 11. These results demonstrate that AI–VR simulations transform ethics education through dynamic, interactive, and personalized learning experiences. Prior research supports these findings, showing that VR enhances ethical reasoning via immersion and situational awareness (Czymoniewicz-Klippel & Cruz, 2023; Wong et al., 2020), while AI-driven feedback strengthens engagement and retention (Chamola et al., 2023; Slimi & Villarejo-Carballido, 2023).
Thus, these findings reinforce the integration of AI and VR into ethical curricula as scalable, effective tools for bridging theory and practice and supporting structured ethical deliberation (Cueva & Ochoa, 2024).
Qualitative feedback and thematic coding
To complement the quantitative findings, the students’ open-ended opinions and reflections were systematically analyzed to uncover deeper insights into their ethical learning experience with AI- and VR-based instruction. Thirty participants from the experimental group provided narrative feedback, which was examined using a five-phase thematic procedure grounded in established qualitative protocols (Braun & Clarke, 2006; Nowell et al., 2017): (1) transcription and familiarization, (2) initial coding, (3) theme generation, (4) independent validation, and (5) iterative refinement.
From this process, 80 meaningful units were extracted and organized into four central themes: experiential learning, empathy and perspective-taking, AI feedback and ethical reflection, and limitations and discomforts (Table 12). These themes capture both cognitive and emotional dimensions of the intervention and align with the ethical reasoning constructs assessed in the Quantitative analysis in Data analysis and results Section (see Tables 4 and 5).
Table 12. Themes from student responses and their quantitative links
Thematic category | Subthemes / descriptors | Representative quotes (student) | Frequency (n) | Quantitative dimension linked |
|---|---|---|---|---|
Experiential Learning | Simulation realism, cognitive immersion, impact of VR on realism | “The virtual simulation made me feel part of a real-life situation where every choice mattered.” | 26 | Consequence Anticipation |
Empathy and Perspective Taking | Emotional engagement, recognition of stakeholder perspectives | “Seeing the issue through another person’s lens made me question my assumptions.” | 21 | Social Impact Awareness |
AI Feedback and Ethical Reflection | Personalized prompts, detection of bias, structured ethical feedback | “The AI helped me recognize that my arguments lacked depth and pushed me to reconsider my stance.” | 19 | Evaluation of Alternatives |
Limitations and Discomforts | Technological discomfort, cognitive overload, time constraints | “I felt overwhelmed by the pace of the simulation; I couldn’t reflect properly on what was happening.” | 14 | Justification of Decisions |
To ensure analytical rigor, two faculty coders independently evaluated all responses. The rubric-derived scores were collapsed into four ordinal performance levels (Insufficient to Outstanding) and cross-tabulated in a 4 × 4 matrix. Cohen’s κ coefficient, computed in Python with scikit-learn’s cohen_kappa_score function, reached 0.68, indicating substantial agreement (Landis & Koch, 1977) and aligning with qualitative reliability standards (O’Connor & Joffe, 2020).
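Cohen’s κ adjusts raw coder agreement for the agreement expected by chance from each coder’s marginal distribution. A minimal pure-Python equivalent of the computation referenced above (illustrative only; the study’s actual 4 × 4 matrix is not reproduced here):

```python
def cohen_kappa(matrix):
    """Cohen's kappa from a square confusion matrix of two coders' ratings:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement (the
    diagonal) and p_e is chance agreement from the row/column marginals."""
    k = len(matrix)
    n = sum(sum(row) for row in matrix)
    p_o = sum(matrix[i][i] for i in range(k)) / n
    row_tot = [sum(row) for row in matrix]
    col_tot = [sum(matrix[i][j] for i in range(k)) for j in range(k)]
    p_e = sum(row_tot[i] * col_tot[i] for i in range(k)) / n**2
    return (p_o - p_e) / (1 - p_e)

# Perfect agreement on two categories yields kappa = 1.0;
# agreement no better than chance yields kappa = 0.0
cohen_kappa([[10, 0], [0, 10]])
```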
Furthermore, these thematic insights complement the quantitative outcomes reported in Tables 7 and 8. For instance, students' emphasis on realism and ethical immersion corresponds with the observed gains in Consequence Anticipation, while the emergence of empathy-related comments is consistent with improvements in Perspective Taking and Contextualization. Observations on personalized feedback reinforce the value of the Evaluation of Alternatives dimension. Conversely, references to cognitive overload and time pressure offer possible explanations for the lower performance in Justification of Decisions.
The full coding and categorization process is illustrated in Fig. 2, which maps the progression from data familiarization to the integration of themes with the ethical competence framework guiding the intervention.
[See PDF for image]
Fig. 2
Thematic analysis process: from coding to conceptual integrations
Notably, this visual synthesis (Fig. 2) reflects the same patterns identified in the Quantitative analysis in the Data analysis and results section, demonstrating how the AI–VR group achieved greater gains in engagement, ethical reasoning, and satisfaction. Student narratives highlighted increased motivation and immersion, echoing prior evidence on the motivational effects of virtual environments (He & Zhang, 2024; Merchant et al., 2014). Moreover, the adaptive, scenario-based feedback provided by AI enhanced students’ confidence and aligns with the strong effect sizes reported in Tables 6 and 7. Nonetheless, more modest improvements in the Justification of Decisions dimension suggest that additional scaffolding for System 2 (structured analytic reasoning) processing remains necessary (Moya et al., 2024). Overall, the qualitative evidence reinforces the internal coherence of the instructional model and supports its relevance for blended, inquiry-based ethics education.
Discussion
Summary of findings
The findings of this study provide strong empirical support for the hypothesis that Artificial Intelligence (AI) and Virtual Reality (VR) enhance ethical decision-making skills in undergraduate students (see Data analysis section). The statistical analyses (Tables 4 and 5) indicate that the experimental group outperformed the control group across all assessed dimensions, confirming the effectiveness of AI and VR in reinforcing structured ethical reasoning.
The most significant improvements were observed in Consideration of Consequences (t = −96.90, mean diff. = 23.30, p < 0.001) and Application of Ethical Principles (t = −80.57, mean diff. = 20.83, p < 0.001; see Table 5), highlighting how VR’s realistic scenarios and AI’s adaptive feedback enable students to anticipate ethical consequences more effectively and apply ethical principles in complex decision-making contexts.
Figure 3 represents a comparative visualization of the post-intervention scores across ethical decision-making criteria, illustrating the substantial improvements observed in the control vs experimental group.
[See PDF for image]
Fig. 3
Post-intervention comparison of ethical decision-making criteria
These results align with prior studies on the cognitive and behavioral benefits of AI and VR in ethics education. Marengo and Pavan (2024) found that AI-driven training fosters higher-order cognitive processing, while Cabrera-Duffaut and Zubizarreta (2024) showed that VR enhances students’ ability to contextualize ethical dilemmas.
However, Justification of Decisions (t = −61.35, mean diff. = 15.17, p < 0.001; see Table 5) exhibited lower gains, suggesting that structured deliberation requires additional instructional support. This aligns with Dual-Process Theories of Moral Judgment (Moya et al., 2024), which differentiate intuitive responses (System 1) from deliberative reasoning (System 2). Future research should explore how AI can strengthen structured argumentation in ethics training.
Moreover, the Cohen’s d effect-size analysis (Table 7) highlights substantial gains in Consideration of Consequences (d = 17.65), Evaluation of Alternatives (d = 16.42), and Contextualization of Cases (d = 9.05), reinforcing the role of immersive AI–VR environments in advancing ethical reasoning.
Figure 4 illustrates these improvements in VR-based training considering the control (traditional) vs experimental group (AI–VR based).
[See PDF for image]
Fig. 4
Post-intervention mean scores by criterion broken down by control vs experimental groups
Theoretical and practical implications of this study
Building upon the preceding insights, this section articulates how the findings of this study contribute to theoretical development and pedagogical practice in ethics education. It also outlines empirically grounded and forward-looking proposals aimed at guiding future research, curriculum innovation, and institutional implementation.
Theoretical implications
By demonstrating how AI-driven feedback and VR immersion jointly enhance consequence analysis, principle application, and stakeholder contextualization, this study advances ethical-decision research on multiple fronts. It empirically validates Experiential Learning Theory by showing that immersive, high-fidelity scenarios deepen moral engagement (Rest, 1986). It reinforces Constructivist Learning Theory by illustrating how students co-construct ethical meaning through context-rich interaction. It also extends AI-Driven Adaptive Learning by operationalizing personalized ethical guidance via real-time feedback loops (see Fig. 5).
[See PDF for image]
Fig. 5
Conceptual framework of theoretical and practical implications of AI and VR in ethics education
Moreover, the modest yet significant improvements in Justification of Decisions underscore the relevance of Cognitive Load Theory: while AI and VR reduce extraneous burden and facilitate engagement, deeper System 2 (structured analytic reasoning) processing may require additional scaffolding to support deliberate ethical reasoning. These findings also align with Dual-Process Theory of Moral Judgment (Moya et al., 2024; Rest, 1986), where intuitive (System 1) and analytical (System 2) thinking must be jointly cultivated. VR immersion triggers intuitive responses through embodied simulation, while AI mechanisms scaffold reflective judgment via adaptive prompts.
From a cognitive-science standpoint, the intervention reflects principles of embodied cognition, whereby sensorimotor immersion enhances schema activation and perspective-taking (Merchant et al., 2014). Simultaneously, adaptive feedback loops from the AI engine promote metacognitive regulation and calibration of task difficulty (Chamola et al., 2023). This synergy between embodiment and adaptivity contributes to a more nuanced understanding of ethical learning, supporting hybrid pedagogies that bridge affective engagement and analytical justification.
Practical implications
AI and VR represent a significant pedagogical advance in ethics education, bridging the gap between theory and applied moral reasoning (Rest, 1986). Virtual simulations foster the transfer of ethical principles to real-world dilemmas, while adaptive AI feedback prompts deeper reflection (He & Zhang, 2024; Mergen & Kocak, 2024; Slimi & Villarejo-Carballido, 2023). Together, these technologies enhance ethical foresight: anticipating consequences, evaluating risks, and recognizing emerging moral tensions (Hidayat-Ur-Rehman, 2024).
Table 13 outlines the principal implications: greater student engagement, tailored instruction, and the institutional feasibility of immersive ethics training. These benefits, however, presuppose access to HMDs, high-bandwidth connectivity, and secure cloud analytics; institutions lacking this immersion-ready infrastructure may pilot hybrid desktop–VR or AI-only modules before full adoption.
Table 13. Practical implications of AI and VR in ethics education
Practical implication | Description | Supporting literature |
|---|---|---|
Bridging the Theory–Practice Gap | VR provides interactive, real-world ethical dilemmas that traditional case studies lack. This enhances students' ability to apply theoretical concepts to practical scenarios | He and Zhang (2024), Mergen & Kocak (2024) |
Personalized and Adaptive Learning | AI-driven feedback tailors learning experiences to individual decision-making patterns, improving engagement and reinforcing long-term ethical reasoning | Slimi and Villarejo-Carballido (2023) |
Ethical Foresight and Risk Awareness | AI-generated ethical dilemmas expose students to dynamic, real-time risk scenarios, improving anticipatory judgment and ethical sensitivity | Hidayat-Ur-Rehman (2024) |
Scalability and Institutional Integration | AI and VR can scale ethics education while increasing accessibility, particularly in resource-limited environments. Although initial costs may be high, long-term benefits justify their adoption | Cueva and Ochoa (2024), Alharbi (2024) |
In summary, considering their scalability and long-term benefits, AI and VR provide scalable responses to the challenges of quality, equity, and personalization in ethics education. Although initial investments may be significant, their long-term benefits, particularly in under-resourced contexts, justify institutional adoption (Cueva & Ochoa, 2024). Immersive environments promote iterative engagement with complex dilemmas, enhancing ethical reasoning through contextualized practice, which is especially vital in professions where moral judgment underpins performance (Rest, 1986).
AI personalizes instruction to each learner (Chamola et al., 2023), while VR delivers uniform experiences that reduce instructor variability (Sombilon et al., 2024). Immersive tools also boost motivation and prolong retention of ethical principles (Jia & Qi, 2023; Huang et al., 2021). In today's evolving educational landscape, immersive technologies are becoming integral to producing ethically competent professionals capable of navigating complex ethical dilemmas.
Limitations and future directions for research
Persistent challenges
Immersive-technology ethics training continues to face four structural obstacles: (1) limited ecological validity, (2) algorithmic bias, (3) a scarcity of longitudinal evidence, and (4) disciplinary silos. Simulations seldom capture the ambiguity and time pressure of real practice (Oliveira et al., 2024); culturally narrow datasets can skew feedback (Cotič et al., 2024); short-term assessments dominate the literature (Setyawarno et al., 2024); and isolated disciplinary efforts hinder integrative design (Varas et al., 2023).
Study limitations
Although the present experiment demonstrates large short-term gains, several factors may circumscribe its generalizability. The sample is modest (n = 60) and drawn from a single undergraduate cohort, which may restrict external validity; multi-institutional replications are warranted. Very low within-group dispersion (SD ≈ 1 on a 0–100 scale) may inflate effect-size estimates, calling for cautious interpretation in practice. The one-week post-test window precludes claims about behavioral durability, and the focus on business and engineering omits domains, such as medicine or law, with distinct normative logics; replicating the method in those domains would help validate and generalize the outcomes. In addition, AI-generated feedback may inherit biases from its training corpus, emphasizing the need for transparent model auditing and culturally responsive prompting. Finally, the novelty of the HMD may have induced a short-lived motivational boost under the immersive conditions.
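The sensitivity of standardized effect sizes to within-group dispersion can be sketched briefly. The numbers below are illustrative, not drawn from the study's data: the same 20-point group difference yields a very different Cohen's d depending on the pooled SD, and the Hedges' g small-sample correction barely attenuates it at this sample size.

```python
def cohens_d(mean_diff, sd_pooled):
    # Standardized mean difference: d = (M1 - M2) / pooled SD
    return mean_diff / sd_pooled

def hedges_g(d, n1, n2):
    # Apply the Hedges & Olkin (1985) small-sample bias correction to d
    return d * (1 - 3 / (4 * (n1 + n2) - 9))

# Hypothetical values: a 20-point post-test advantage on a 0-100 scale
diff = 20.0
d_typical = cohens_d(diff, sd_pooled=10.0)   # d = 2.0 with moderate dispersion
d_narrow = cohens_d(diff, sd_pooled=1.0)     # d = 20.0 when SD is near 1
g_narrow = hedges_g(d_narrow, n1=30, n2=30)  # bias-corrected g, still very large
print(d_typical, d_narrow, round(g_narrow, 2))
```

This is why near-homogeneous groups (SD ≈ 1) can produce effect sizes an order of magnitude beyond conventional "large" benchmarks, and why such values warrant cautious interpretation rather than direct comparison with effects from more heterogeneous samples.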
Future directions for research
To narrow these evidence gaps, future research should:
Develop scalable AI–VR frameworks that can be tailored to discipline-specific dilemmas while preserving theoretical coherence (Schuett, 2023).
Implement bias safeguards by pairing diverse training data with systematic fairness audits and explainable-AI routines (Almusaed et al., 2023).
Conduct longitudinal studies that track retention and transfer to professional settings through multi-wave follow-ups and mixed-methods triangulation (Aharoni et al., 2024).
Foster cross-disciplinary collaboration among ethicists, learning scientists, and AI engineers to co-design hybrid pedagogies that reflect varied normative frameworks (Han, 2025).
Key lines of action and inquiry include evaluating hybrid desktop–VR or AI-only variants in resource-constrained settings, performing cost-effectiveness analyses, and incorporating behavioral or neurophysiological metrics to validate rubric-based gains.
Addressing the four future directions stated above will strengthen external validity, mitigate bias, and build the longitudinal evidence base needed for best-practice design. Ultimately, well-calibrated AI and VR platforms can align ethics curricula (Sari et al., 2021) with authentic decision contexts by blending theory-informed feedback and immersive practice (Wilbanks et al., 2024; Zhang & Zhang, 2023).
Attending to these solutions, limitations, and future directions will reinforce both the pedagogical impact and the socio-technical implications of developing interdisciplinary, empirically grounded, and culturally responsive models for ethics education. As the capabilities of AI and VR continue to evolve, their pedagogical deployment must remain aligned with the ethical, contextual, and professional demands of learners across disciplines.
Conclusion
This study explored the effectiveness of AI and VR technologies in enhancing ethical decision-making skills among undergraduate students, especially in business and engineering. The findings demonstrate that immersive technologies significantly improve multiple dimensions of ethical reasoning, with notable gains in Consideration of Consequences, Contextualization of Cases, and Application of Ethical Principles. These results validate the hypothesis that AI and VR-based learning environments offer a powerful framework for developing critical ethical competencies, effectively bridging theoretical knowledge with real-world applications.
Beyond improving ethical reasoning, AI and VR present broader implications for ethics education. Their capacity to foster engagement, provide adaptive learning, and simulate complex decision-making scenarios highlights their transformative potential in curriculum design. Empirical evidence from this study supports the integration of AI-driven adaptive systems and VR-based simulations into ethics training to promote deeper moral reasoning and structured decision-making.
However, to fully realize this potential, institutions must address challenges related to scalability, faculty training, and curricular integration. Future research should explore multi-modal learning models that combine AI-driven deliberation with VR experiential learning to optimize educational outcomes. Additionally, studies should examine long-term retention and real-world application of these skills, as well as best practices for embedding AI and VR into diverse educational contexts. Expanding such training beyond business and engineering into fields such as medicine, law, and the social sciences will offer further insight into the cross-disciplinary value of immersive technologies in ethics education.
Acknowledgements
The authors would like to thank the AI Summit for supporting the software license acquisition; Tecnologico de Monterrey for logistical support with VR equipment and classroom access; and the Writing Lab (Institute for the Future of Education, Tecnologico de Monterrey) for the technical and financial support in the publication of this work. We also thank the anonymous reviewers and the editorial team for their constructive comments and feedback, which helped to improve the scientific impact and rigor of this study.
Use of large language models (LLMs)
An LLM-assisted editor was used solely for language polishing (grammar, clarity, and consistency). All scientific content, analyses, interpretations, and conclusions were conceived, conducted, and verified by the human authors, who take full responsibility for the work.
Author contributions
RGT: conceived and designed the study; conducted statistical analyses; and contributed to data interpretation. JAGL and MLMT: implemented the intervention and contributed to interpretation. JAR: participated in the intervention and coded the assessment rubrics. GMB: contributed to study design and manuscript revisions. KO: structured and guided the theoretical framework and methodology of the study, critically reviewed and supervised the study and results presentation. All authors have read and approved the final manuscript.
Funding
Publication charges were supported by the Writing Lab, Institute for the Future of Education, Tecnologico de Monterrey. No specific grant number applies.
Data availability
The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request.
Declarations
Ethics approval and consent to participate
The study uses fully anonymous and publicly available data. Apart from a formal request for data use describing the intended analyses, no ethical approval was needed.
Competing interests
The authors declare no competing interests.
Abbreviations
AI: Artificial intelligence
CI: Confidence interval
d: Cohen's d (effect size)
g: Hedges' g (effect size)
HMD: Head-mounted display
LOA: Learning-oriented assessment
SD: Standard deviation
VR: Virtual reality
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
Aharoni, E; Fernandes, S; Brady, D; Alexander, C; Criner, M; Queen, K; Crespo, V. Attributions toward artificial agents in a modified moral Turing test. Scientific Reports; 2024; 14, 58087. [DOI: https://dx.doi.org/10.1038/s41598-024-58087-7]
AlAli, R; Wardat, Y. The role of virtual reality (VR) as a learning tool in the classroom. International Journal of Religion; 2024; 5,
AlGerafi, MAM; Zhou, Y; Oubibi, M; Wijaya, TT. Unlocking the potential: A comprehensive evaluation of augmented reality and virtual reality in education. Electronics; 2023; 12,
Almusaed, A; Almssad, A; Yitmen, İ; Homod, R. Enhancing student engagement: Harnessing AIED’s power in hybrid education—A review analysis. Education Sciences; 2023; 13,
Andrade, HL; Cizek, GJ. Handbook of formative assessment; 2010; Routledge: [DOI: https://dx.doi.org/10.4324/9780203874851]
Angel-Urdinola, D., Castillo, C., & Hoyos, A. (2022). How can virtual reality improve education and training? World Economic Forum. https://www.weforum.org/agenda/2021/05/virtual-reality-simulators-develop-students-skills-education-training
Association of American Colleges and Universities. (2009). VALUE rubric for ethical reasoning. https://www.aacu.org/initiatives/value-initiative/value-rubrics/value-rubrics-ethical-reasoning Accessed 18 Dec 2024.
Baucells, M; Katsikopoulos, KV. Descriptive models of decision making; 2011; John Wiley & Sons: [DOI: https://dx.doi.org/10.1002/9780470400531.eorms0249]
Braun, V; Clarke, V. Using thematic analysis in psychology. Qualitative Research in Psychology; 2006; 3,
Cabrera-Duffaut, M; Zubizarreta, J. Immersive learning platforms: Analyzing virtual reality’s contribution to competence development in higher education—A systematic literature review. Frontiers in Education; 2024; 9, 1391560. [DOI: https://dx.doi.org/10.3389/feduc.2024.1391560]
Çakmak, F; Ismail, SM; Karami, S. Advancing learning-oriented assessment (LOA): Mapping the role of self-assessment, academic resilience, academic motivation in students’ test-taking skills, and test anxiety management in Telegram-assisted language learning. Language Testing in Asia; 2023; 13,
Carless, D. Learning-oriented assessment: Conceptual bases and practical implications. Innovations in Education and Teaching International; 2007; 44,
Chamola, V; Hassija, V; Singh, A; Mittal, U; Pareek, R; Mangal, P; Brown, D. Metaverse for education: Developments, challenges and future direction. Preprints; 2023; [DOI: https://dx.doi.org/10.20944/preprints202308.1872.v1]
Chandler, J. (2017). Descriptive decision theory. In The Stanford encyclopedia of philosophy (Winter 2017 ed.). https://plato.stanford.edu/entries/decision-theory-descriptive/ Accessed 10 Dec 2024.
Cheung, V; Maier, M; Lieder, F. Large language models amplify human biases in moral decision-making. OSF Preprints; 2024; [DOI: https://dx.doi.org/10.31234/osf.io/aj46b]
Conrad, M; Kablitz, D; Schumann, S. Learning effectiveness of immersive virtual reality in education and training: A systematic review of findings. Computers & Education: X Reality; 2024; 4, [DOI: https://dx.doi.org/10.1016/j.cexr.2024.100053] 100053.
Cotič, M; Doz, D; Jenko, M; Žakelj, A. Mathematics education: What was it, what is it, and what will it be?. International Electronic Journal of Mathematics Education; 2024; 19,
Cueva, A., & Ochoa, J. (2024). Artificial intelligence (AI) integration in rural Philippine higher education: Perspectives, challenges, and ethical considerations. OSF Preprints, 1–15. https://doi.org/10.31219/osf.io/ehcb9
Czymoniewicz-Klippel, M; Cruz, L. Engagement of online biobehavioral health students in ethics education through virtual immersive experiences. Pedagogy in Health Promotion; 2023; 9,
Davidson, P; Coombe, C. Practical applications of learning-oriented assessment (LOA). Local research and glocal perspectives in English language teaching; 2022; Springer:
Falebita, OS; Kok, PJ. Artificial intelligence tools usage: A structural equation modeling of undergraduates’ technological readiness, self-efficacy and attitudes. Journal for STEM Education Research; 2025; 8, pp. 257-282. [DOI: https://dx.doi.org/10.1007/s41979-024-00132-1]
González, J; Melgoza, E; Cabeza, L; Okoye, K. Assessment of students’ learning outcomes and competency through a blend of knowledge and practical ability. International Journal of Instruction; 2024; 17,
Han, H. Why do we need to employ exemplars in moral education? Insights from recent advances in research on artificial intelligence. Ethics & Behavior; 2025; 35,
He, Y; Zhang, J. Enhancing medical education for undergraduates: Integrating virtual reality and case-based learning for shoulder joint. BMC Medical Education; 2024; 24, 6103. [DOI: https://dx.doi.org/10.1186/s12909-024-06103-9]
Hedges, LV; Olkin, I. Statistical methods for meta-analysis; 1985; Academic Press:
Hidayat-Ur-Rehman, M. Examining AI competence, chatbot use and perceived autonomy as drivers of students’ engagement in informal digital learning. Journal of Research in Innovative Teaching & Learning; 2024; 17,
Hollaender, G; Peisachovich, E; Kapralos, B; Culver, C; Silva, C; Dubrowski, A. Augmented reality education experience (AReDeX): An augmented reality experience and experiential education medium to teach empathy to healthcare providers and caregivers of persons living with dementia. Cureus; 2023; 15,
Huang, W; Roscoe, R; Johnson-Glenberg, M; Craig, S. Motivation, engagement, and performance across multiple virtual reality sessions and levels of immersion. Journal of Computer Assisted Learning; 2021; 37,
Jia, Y; Qi, R. Influence of an immersive virtual environment on learning effect and learning experience. International Journal of Emerging Technologies in Learning; 2023; 18,
Kahneman, D. Thinking, fast and slow; 2011; Farrar:
Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. Prentice-Hall.
Kuhail, M; ElSayary, A; Farooq, S; Alghamdi, A. Exploring immersive learning experiences: A survey. Informatics; 2022; 9,
Landis, JR; Koch, GG. The measurement of observer agreement for categorical data. Biometrics; 1977; 33,
Lim, WM; O’Connor, P; Nair, S; Soleimani, S; Rasul, T. A foundational theory of ethical decision-making: The case of marketing professionals. Journal of Business Research; 2023; 158, 113579. [DOI: https://dx.doi.org/10.1016/j.jbusres.2022.113579]
Marengo, D; Pavan, A. The educational value of artificial intelligence in higher education. Interactive Technology and Smart Education; 2024; [DOI: https://dx.doi.org/10.1108/itse-11-2023-0218]
Merchant, Z; Goetz, ET; Cifuentes, L; Keeney-Kennicutt, W; Davis, TJ. Effectiveness of virtual reality-based instruction on students’ learning outcomes in K–12 and higher education: A meta-analysis. Computers & Education; 2014; 70, pp. 29-40. [DOI: https://dx.doi.org/10.1016/j.compedu.2013.07.033]
Mergen, M; Koçak, A. Reviewing the current state of virtual reality integration in medical education—A scoping review. BMC Medical Education; 2024; 24, 788. [DOI: https://dx.doi.org/10.1186/s12909-024-05777-5]
Morris, SB. Estimating effect sizes from pretest–posttest–control group designs. Organizational Research Methods; 2008; 11,
Moya, B; Eaton, S; Pethrick, H; Hayden, A; Brennan, R; Wiens, J; McDermott, B. Academic Integrity and Artificial Intelligence in Higher Education (HE) Contexts: A Rapid Scoping Review. Canadian Perspectives on Academic Integrity; 2024; 7,
Nam, BH; Bai, Q. ChatGPT and its ethical implications for STEM research and higher education: A media discourse analysis. International Journal of STEM Education; 2023; 10, 66. [DOI: https://dx.doi.org/10.1186/s40594-023-00452-5]
Nieto, Y; García-Díaz, V; Montenegro, C; Crespo, RG. Supporting academic decision making at higher educational institutions using machine learning-based algorithms. Soft Computing; 2019; 23,
O’Connor, C; Joffe, H. Intercoder reliability in qualitative research: Debates and practical guidelines. International Journal of Qualitative Methods; 2020; 19, pp. 1-13. [DOI: https://dx.doi.org/10.1177/1609406919899220]
Oliveira, M; Brands, J; Mashudi, J; Liefooghe, B; Hortensius, R. Perceptions of artificial intelligence system’s aptitude to judge morality and competence amidst the rise of chatbots. Cognitive Research; 2024; 9,
Piaget, J. (1971). Biology and knowledge: An essay on the relations between organic regulations and cognitive processes (M. Gabain, Trans.). University of Chicago Press. (Original work published 1967)
Python Software Foundation. (2023). Python programming language. https://www.python.org/ Accessed 15 Jan 2025.
Rashid, S; Khattak, A; Ashiq, M; Ur Rehman, S; Rashid Rasool, M. Educational landscape of virtual reality in higher education: Bibliometric evidences of publishing patterns and emerging trends. Publications; 2021; 9,
Rest, J. R. (1986). Moral development: Advances in research and theory. Praeger.
Sari, R; Warsono, S; Ratmono, D; Zuhrohtun, Z; Hermawan, H. The effectiveness of teaching virtual reality-based business ethics: Is it really suitable for all learning styles?. Interactive Technology and Smart Education; 2021; 20,
Sawilowsky, SS. New effect size rules of thumb. Journal of Modern Applied Statistical Methods; 2009; 8,
Schicktanz, S; Welsch, J; Schweda, M; Hein, A; Rieger, J; Kirste, T. AI-assisted ethics? Considerations of AI simulation for the ethical assessment and design of assistive technologies. Frontiers in Genetics; 2023; 14, 1039839. [DOI: https://dx.doi.org/10.3389/fgene.2023.1039839]
Schuering, B; Schmid, T. What Can Computers Do Now? Dreyfus Revisited for the Third Wave of Artificial Intelligence. Proceedings of the AAAI Symposium Series; 2024; 3,
Schuett, J. Three lines of defense against risks from AI. AI and Society; 2023; [DOI: https://dx.doi.org/10.1007/s00146-023-01811-0]
Setyawarno, D; Rosana, D; Widodo, E; Maryati, M; Rahayu, D. The impact of hybrid model science practicum based on IoT and VR on prospective science teacher students’ creative thinking skills. International Journal of Innovative Research and Scientific Studies; 2024; 7,
Shin, J., Kim, H., & Amaral, M. (2023). Trusting the moral judgments of a robot: Perceived moral competence and human-likeness of a GPT-3-enabled AI. Proceedings of the 56th Hawaii International Conference on System Sciences (HICSS-56). https://doi.org/10.24251/hicss.2023.063
Slimi, A; Villarejo-Carballido, B. Systematic review: AI’s impact on higher education—Learning, teaching, and career opportunities. TEM Journal; 2023; 12,
Sombilon, E; Rahmanov, S; Jachecki, K; Rahmanov, Z; Peisachovich, E. Ethical considerations when designing and implementing immersive realities in nursing education. Cureus; 2024; 16,
Suguna, SK; Dhivya, M; Paiva, S. Artificial intelligence (AI). CRC Press; 2021; [DOI: https://dx.doi.org/10.1201/9781003005629]
Sweller, J; van Merriënboer, JJG; Paas, F. Cognitive architecture and instructional design: 20 years later. Educational Psychology Review; 2019; 31,
Uddin, M. A review of utilizing natural language processing and AI for advanced data visualization in real-time analytics. International Journal of Management Information Systems and Data Science; 2024; 1,
Uriarte-Portillo, A; Ibáñez, M; Zataraín-Cabada, R; Estrada, M. Higher immersive profiles improve learning outcomes in augmented reality learning environments. Information; 2022; 13,
Usher, M; Barak, M. Unpacking the role of AI ethics online education for science and engineering students. International Journal of STEM Education; 2024; 11,
Varas, D; Santana, M; Nussbaum, M; Claro, S; Imbarack, P. Teachers’ strategies and challenges in teaching 21st century skills: Little common understanding. Thinking Skills and Creativity; 2023; 48, 101289. [DOI: https://dx.doi.org/10.1016/j.tsc.2023.101289]
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.
Wibisono, FC; Wahyudin, D; Yogiarni, T. The Effect of Gamification Learning Model to Improve Students’ Critical Thinking Skills in Elementary School Science and Social Subjects. Educational Studies and Research Journal; 2024; 1,
Wilbanks, D; Mondal, D; Tandon, N; Gray, K. Large language models as moral experts? GPT-4o outperforms expert ethicist in providing moral guidance. OSF Preprints; 2024; [DOI: https://dx.doi.org/10.31234/osf.io/w7236]
Wong, J; Yip, C; Yong, S; Chan, A; Kok, S; Lau, T. BIM–VR framework for building information modelling in engineering education. International Journal of Interactive Mobile Technologies; 2020; 14,
Wong, S; Yeung, P; Choi, J. Adaptive AI-driven learning environments for ethical training. Educational Psychology Review; 2024; 36,
World Economic Forum. (2021). How can virtual reality improve education and training? https://www.weforum.org/agenda/2021/05/virtual-reality-simulators-develop-students-skills-education-training/ Accessed 10 Jan 2025.
Zhang, J; Zhang, Z. Ethics and governance of trustworthy medical artificial intelligence. BMC Medical Informatics and Decision Making; 2023; 23,
© The Author(s) 2025. This work is published under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (http://creativecommons.org/licenses/by-nc-nd/4.0/).