Abstract
This study develops a human-centered design (HCD) approach to create a GenAI trainer that addresses critical gaps in virtual teamwork training for engineering students. While virtual teamwork competency is increasingly essential, current programs often prioritize task completion over competency development. Leveraging generative AI's capabilities for personalized interaction, scenario simulation, and tailored feedback, we employ a three-phase HCD approach: (1) identifying unmet user needs through stakeholder interviews, revealing key challenges in instructional support, training formats, feedback mechanisms, and teamwork awareness; (2) co-designing solutions with instructors and students to create an AI trainer that combines Socratic questioning and scenario-based learning; and (3) testing the system and obtaining feedback from engineering students. Results demonstrate significant improvements across multiple dimensions: transforming passive learning into active experiences, delivering real-time actionable feedback, enhancing conceptual understanding and awareness of virtual teamwork, and developing practical virtual teamwork skills through authentic scenarios. Participant feedback also identified future improvements for enhanced personalization and immersion. This study contributes both theoretically and practically by illustrating how HCD can effectively integrate AI capabilities with pedagogical needs, while providing a replicable model for developing competency-based training tools that balance technological innovation with educational effectiveness.
Introduction
The rapid evolution of digital collaboration tools, coupled with the increasing prevalence of remote work, has made virtual teamwork competency an essential skill (Linnes, 2020; Wei et al., 2024). Despite its growing importance, higher education institutions currently offer insufficient training programs targeting this competency, with existing initiatives predominantly emphasizing group task completion over the cultivation of collaboration skills (Hu & Chan, 2024; Myers et al., 2014). This pedagogical gap leaves students inadequately prepared to navigate the complexities inherent in virtual teamwork environments. Emerging generative AI (GenAI) technologies present a promising solution to this challenge due to their unique capabilities in personalizing learning experiences, providing adaptive feedback, and simulating realistic collaboration scenarios (Chan & Colloton, 2024; Chan & Hu, 2023). However, their application in education also raises several concerns, including the potential over-reliance on technological solutions at the expense of pedagogical principles, the risk of diminishing human interaction in learning processes, and challenges in ensuring the quality and relevance of AI-generated content (Adel et al., 2024; Darling et al., 2024).
To balance these opportunities and challenges while addressing the pedagogical needs of engineering education, this study adopts a human-centered design approach to develop a GenAI-powered virtual teamwork competency training system in collaboration with engineering educators and students. By demonstrating how human-centered methodologies can bridge technological innovation with authentic educational requirements, this research advances the discourse on responsible AI implementation in education. Additionally, the focus on virtual teamwork competency training for engineering students not only addresses a timely educational need but also provides a replicable model for integrating AI technologies into competency-based curricula while prioritizing user-centric design.
Literature review
Human-centered design in higher education
Human-centered design has emerged as a transformative paradigm in higher education, shifting the focus from “designing for users” to “designing with users” (AlZoubi et al., 2024; Sanders & Stappers, 2008). By involving students, faculty, and other stakeholders in the design process, HCD ensures that educational tools are both functional and aligned with real-world needs, enhancing usability and user experience (Rodríguez-Ortiz et al., 2024). Active stakeholder involvement also fosters collaboration and a sense of ownership, which in turn increases engagement and adoption rates (Romero, 2024).
HCD has been widely applied across various domains in higher education, including curriculum design, student support services, and digital learning environments (Garreta-Domingo et al., 2018; Kong & Yang, 2024). However, its most significant impact has been on the design of educational tools. For example, AlZoubi et al. (2024) employed HCD principles to develop instructor dashboards that integrate technological potentials with pedagogical features, demonstrating how stakeholder involvement and pedagogical alignment can enhance tool effectiveness. Similarly, Schmidt et al. (2024) applied HCD in Project PHOENIX, where they collaboratively designed inclusive extended reality (XR) tools with autistic users through iterative prototyping, demonstrating how HCD can transform neurodiverse learning experiences.
The rise of generative AI (GenAI) tools in education has further highlighted the importance of HCD. While these tools hold immense potential, they often face challenges related to usability, interpretability, and alignment with educational goals (Chan & Colloton, 2024; Giannakos et al., 2024). The HCD methodology effectively addresses these issues by systematically incorporating educators' and students' diverse perspectives throughout the development process. As evidenced by Alfredo et al.'s (2024) systematic review, this inclusive approach leads to better alignment with authentic educational challenges, ultimately yielding more reliable and contextually appropriate solutions. This principle is exemplified in AI-powered learning analytics platforms that employ HCD to co-develop feedback mechanisms with end-users, significantly enhancing the practical value of data-driven insights (Kloos et al., 2022). These findings collectively underscore how HCD serves as a vital framework for developing GenAI tools that are not only technologically sophisticated but also educationally meaningful, inclusive, and responsive to the varied requirements of higher education institutions.
Key steps in human-centered design
Human-centered design (HCD) employs a systematic methodology to develop solutions that effectively address user needs. Two predominant frameworks guide this process: the Stanford Five-Phase Design Thinking Model (e.g., Ingram et al., 2022; Tu et al., 2018) and the IDEO Framework (e.g., Alfaro Arias et al., 2020; Skywark et al., 2022). The Stanford model offers a comprehensive five-phase structure that guides designers through an iterative process of problem-solving (Stanford University d.school, 2018). Beginning with empathy-building exercises to gain profound insights into user experiences, the model then progresses to problem definition, where researchers synthesize their observations to frame the core challenge. Subsequent phases facilitate creative ideation, rapid prototyping, and rigorous user testing. The IDEO framework (IDEO, 2015), specifically developed for HCD, presents a streamlined three-phase approach that distinguishes itself from other models through its strong emphasis on practical implementation. It begins with Inspiration, where deep user insights are gathered through ethnographic techniques, ensuring solutions are grounded in real-world needs. This transitions into Ideation, a highly collaborative phase that leverages diverse perspectives through structured brainstorming sessions and co-design workshops with stakeholders to generate innovative concepts. The final Implementation phase focuses on rapid prototyping and refinement.
In educational technology research, scholars have developed specialized adaptations of these approaches. Dimitriadis et al. (2021) established three core HCD principles for learning analytics solutions: stakeholder empowerment throughout the design process, integration with the learning design cycle, and grounding in educational theories. Similarly, Mohseni et al. (2023) proposed a streamlined workflow for designing Learning Analytics Dashboards that begins with comprehensive requirements analysis, progresses through iterative idea generation and testing, and concludes with user feedback-informed prototyping. However, while existing frameworks are valuable for designing educational tools, they exhibit limited applicability in instructional design contexts. This limitation is particularly relevant to our investigation, as we aim to develop tools that not only demonstrate usability but also actively enhance instructional processes and significantly improve learning outcomes.
To address this gap, we integrate the IDEO framework’s human-centered approach—comprising Inspiration, Ideation, and Implementation—with the systematic ADDIE instructional design model (Branch, 2009), which provides a research-based methodology for developing effective learning interventions. ADDIE’s structured stages—Analysis (identifying performance gaps), Design (formulating an instructional plan with measurable learning objectives), Development (creating and validating learning resources), Implementation (establishing learning environments and delivering instruction), and Evaluation (assessing the efficacy of instructional materials and methods)—provide the missing pedagogical scaffolding. This hybrid approach ensures solutions are both user-validated and instructionally robust.
Our synthesized framework (as shown in Fig. 1) begins with identifying users’ unmet needs, which differ from abstract general needs in that they arise from identifiable deficiencies in current practices and the root causes of performance gaps (Triantafyllakos et al., 2008). These needs represent tangible opportunities for meaningful educational interventions, providing designers with targeted inspiration for solution development. The subsequent phase involves co-designing and developing solutions through active engagement with end-users—including students, faculty, and other stakeholders—to foster a collaborative environment that enhances the relevance and practicality of outcomes (Heiner et al., 2023; Mohseni et al., 2023). Once the design is finalized, the framework proceeds with developing learning resources for implementation. The final stage focuses on testing and obtaining feedback: structured user testing is conducted first, followed by systematic feedback collection to validate the design's efficacy and alignment with educational objectives—an approach that has proven particularly useful for early-stage tool development (Følstad & Knutsen, 2010).
Fig. 1. Three-phase HCD model for EdTech development
Current virtual teamwork competency development programs and challenges
Virtual teamwork refers to the collaboration of team members who are geographically dispersed and often work across different time zones and organizational structures, facilitated by web-based communication technologies (Lipnack & Stamps, 2008). It has become increasingly important for engineering students, particularly following the notable rise in virtual instruction catalyzed by the COVID-19 pandemic (Wei et al., 2024). Parallel to this shift, remote collaboration in professional contexts has also become more common, as modern work environments demand effective collaboration in remote and globalized settings (Linnes, 2020). However, while the need for virtual teamwork competency has increased, research indicates that many students—particularly in engineering—struggle with key aspects of remote collaboration. Common challenges include unequal task distribution, communication breakdowns, disengagement, and weak accountability, all of which hinder team productivity and satisfaction (Ikonen et al., 2015). These issues highlight a critical gap in students’ preparedness for modern work settings, necessitating structured development of virtual teamwork competency in higher education.
Effective virtual collaboration requires not only technical expertise but also behavioral and interpersonal competencies, such as communication, conflict resolution, and task coordination (Schulze & Krumm, 2017). This multidimensional nature of virtual teamwork is further elucidated by Hu and Chan’s (2025) behavior-oriented framework, which identifies 15 observable behavioral indicators across three dimensions: Group Task (e.g., task analysis, labor division), Social (e.g., mutual support, conflict management), and Individual Task (e.g., on-time task completion, quality assurance). Recognizing the importance of these skills, universities have begun piloting targeted training programs. Two dominant pedagogical approaches emerge from the literature. First, some programs employ explicit instruction through structured training sessions, blending synchronous (e.g., video conferencing) and asynchronous (e.g., shared documents, discussion forums) modalities. These interventions are designed to deepen students’ understanding of virtual teamwork and enhance their competency perceptions (Kelly et al., 2022; Wang & Rasmussen, 2020). A second, more experiential approach involves scaffolded team projects embedded within online or hybrid courses. Such projects integrate cognitive, metacognitive, and social support tailored to virtual settings. For instance, Pazos et al. (2016) implemented instructional scaffolds focusing on planning, goal-setting, and progress monitoring for distributed engineering teams. These scaffolds are typically paired with web-based collaboration tools or open-access technologies, often incorporating structured team charters, defined roles, and iterative reflective activities (Dincă et al., 2023; Dumond, 2022). To reinforce these strategies, many programs incorporate process-oriented feedback cycles, such as repeated team evaluations and guided reflections. Empirical studies suggest these cycles improve team cohesion and process quality by allowing iterative adjustments (Croy & Eva, 2018; Myers et al., 2014).
However, significant challenges persist in virtual teamwork competency development. A primary concern is the predominant focus on task completion rather than the explicit development of virtual teamwork skills, which fails to address core skill requirements in virtual environments (Myers et al., 2014). This deficiency is exacerbated by an over-reliance on technological solutions without adequate pedagogical support, which may result in superficial engagement rather than the meaningful acquisition of teamwork skills (Hu & Chan, 2024). Another persistent challenge is virtual conflict: without adequate scaffolding, students are left unequipped to resolve virtual disagreements effectively (Gutiérrez et al., 2022). Assessment also remains a critical concern, as many programs rely on self-reports and peer evaluations, which can be biased or inconsistent (Griesbaum & Gortz, 2010).
AI-driven methods for virtual teamwork competency development
With the rise of GenAI, scholars have begun exploring its application in virtual teamwork competency development. Current research demonstrates how AI's core capabilities—such as real-time data analytics, natural language processing (NLP), and personalized feedback—can significantly enhance online collaborative learning experiences. Firstly, AI facilitates virtual teamwork through collaborative workspaces and real-time tools that optimize project execution and group problem-solving, while also scaffolding structured teamwork processes (Liu et al., 2024; Pazos & Magpili Smith, 2015). Concurrently, it can provide ongoing assistance by responding to participant inquiries and facilitating navigation of collaborative materials (Nagy et al., 2024). Beyond process facilitation, AI leverages behavioral and skill-based data to recommend optimal team compositions, replacing subjective assessments with data-driven insights derived from historical performance and personality matching (Fatani & Banjar, 2024). NLP can further enhance collaboration by analyzing discussion logs, extracting key insights, and generating summaries to reduce misunderstandings (Hao & Cukurova, 2023; Sullivan & Keith, 2019).
Despite these technological advancements, significant research gaps persist in AI-supported teamwork competency development. Current AI applications predominantly emphasize facilitating online collaboration while neglecting the crucial aspect of cultivating fundamental teamwork competencies (e.g., Schmutz et al., 2024). For instance, students have few opportunities to develop critical skills in problem-solving, conflict resolution, and adaptive communication in challenging situations. Moreover, existing systems fail to provide individualized teamwork skill training and lack comprehensive feedback mechanisms, leaving participants without clear guidance on how to improve their virtual teamwork competency after team tasks (Hu & Chan, 2024). Compounding these issues, the development of most AI tools follows a developer-driven rather than user-centered approach, which often results in a significant disconnect between technological capabilities and the actual needs of educators and learners, potentially limiting the tools' effectiveness in real educational settings (Friedrich et al., 2024).
This study seeks to bridge these research gaps through a human-centered design approach to co-develop a generative AI-powered virtual teamwork trainer in collaboration with engineering educators and students. The research process systematically progresses from identifying current training challenges through stakeholder-driven co-design to evaluating tool effectiveness via student feedback. Three core research questions guide this investigation:
What specific challenges hinder engineering students' development of virtual teamwork competencies in current training environments?
How can multi-stakeholder collaboration (including educators, students, and researchers) inform the design of a GenAI-based teamwork training tool?
What are students' perceptions of the HCD-developed GenAI virtual teamwork competency trainer, including their learning experiences and suggestions for improvement?
Methodology
This study adopts a qualitative design research approach to develop a GenAI chatbot for virtual teamwork competency training, following three Human-Centered Design phases (see Fig. 1): identifying and specifying user needs, ideating and co-designing solutions, and testing and obtaining feedback. In the first phase, semi-structured interviews were conducted with teachers and undergraduate students from the Faculty of Engineering to uncover challenges faced in current online teamwork training (addressing RQ1). Phase 2 involved a collaborative ideation process in which researchers and participants jointly brainstormed and refined potential solutions to the identified challenges, followed by prototype development using an online GenAI application platform to transform these co-designed concepts into functional features (addressing RQ2). Phase 3 engaged students in testing the prototype while providing reflective feedback, with their responses analyzed to assess user perceptions and identify potential enhancements for the GenAI virtual teamwork competency trainer (addressing RQ3). This study was conducted in accordance with the ethical standards of the institutional research committee and received formal approval (No. EA240073). All participants provided informed consent after being fully informed about the study purpose and procedures.
Participants
The study involved two groups of participants: engineering teachers and undergraduates, all from the Faculty of Engineering at a university of science and technology in northern China. Teachers were recruited for the first two phases of the research through purposive sampling. Eligibility criteria included at least 5 years of teaching experience and prior experience assigning online collaborative tasks to students. A total of 10 teachers participated, including 8 males and 2 females, with teaching experience ranging from 8 to 40 years. These teachers were from disciplines such as Chemical Engineering and Technology, Mechanical Engineering, and Automation.
Undergraduate students were recruited from the same faculty through public platform announcements. For the first two phases, participants were required to be in their third or fourth year and have prior experience with online collaboration. A total of 40 students participated, including 32 males and 8 females, with 18 being third-year students and 22 being fourth-year students. These students were primarily from disciplines such as Electronic Information Engineering, Electrical Engineering and Automation, Mechanical Manufacturing and Automation, and Optoelectronic Information Engineering. For phase 3 (implementation evaluation), an additional 80 engineering undergraduates were recruited through the same public announcement method, with the requirement of current enrollment in an engineering program. From this cohort, 72 participants completed all tasks, which involved interacting with the co-designed GenAI virtual teamwork trainer and submitting reflection reports. Detailed demographic information about these participants is provided in Table 1.
Table 1. Demographic information of participants in phase 3
| Category | Subcategory | Number | Percentage |
|---|---|---|---|
| Gender | Male | 45 | 62.5 |
| | Female | 27 | 37.5 |
| Academic year | Freshman | 39 | 54.2 |
| | Sophomore | 17 | 23.6 |
| | Junior | 13 | 18.1 |
| | Senior | 3 | 4.2 |
| School | School of Electronic Information Engineering | 43 | 59.7 |
| | School of Mechanical Engineering | 10 | 13.9 |
| | School of Computer Science and Technology | 7 | 9.7 |
| | School of Safety and Emergency Management Engineering | 4 | 5.6 |
| | School of Materials Science and Engineering | 3 | 4.2 |
| | School of Vehicle and Traffic Engineering | 3 | 4.2 |
| | School of Environmental Science and Engineering | 2 | 2.8 |
| Total | | 72 | 100.0 |
Three-step HCD approach
The first two phases of the study involved conducting interviews with engineering teachers and students. The initial phase focused on identifying unmet user needs by examining perceived limitations of existing virtual teamwork training programs, while the second phase explored stakeholder perspectives on designing a GenAI trainer for virtual teamwork competency development. Ten teachers participated in one-on-one interviews, focusing on three critical dimensions: their current instructional strategies for fostering virtual teamwork skills, perceived limitations of existing online teamwork training approaches, and recommendations for potential GenAI chatbot features to enhance virtual teamwork competencies. Parallel to this, forty students participated in eight structured focus group sessions, each comprising five participants, where they shared firsthand experiences of virtual teamwork challenges, identified existing approaches to virtual teamwork competency training along with their limitations, and collaboratively generated ideas for GenAI-powered solutions.
The interview protocol was systematically designed to align with these research objectives. The questioning sequence began with an exploration of existing training programs, followed by a comprehensive evaluation of current challenges based on Bushnell's (1990) recommendation that training evaluation should encompass both instructional processes and learning outcomes. Accordingly, this section of the interview questions addressed not only the design of current training content and instructional formats but also students’ reactions to the training and the knowledge and skills they gained. Building on these diagnostic findings, the subsequent phase fostered collaborative researcher-participant dialogue to co-develop potential solutions targeting both curricular content enhancements and instructional delivery improvements. The complete interview protocol is documented in Appendix 1. Following data collection, all interview recordings were transcribed for further analysis. The details of the analysis process are described in the next section.
Based on the identified key challenges and corresponding co-designed solutions, the researchers used the Coze GenAI development platform to develop the chatbot. The Coze platform was selected for three main reasons. First, it features a user-friendly interface that simplifies the chatbot creation process, facilitating rapid development through drag-and-drop modules and pre-designed templates, which significantly lowers the technical barrier for AI chatbot creation. Second, Coze provides more convenient access to Chinese large language models, which is essential since our participants are Chinese students and require a model with strong Chinese language capabilities. Lastly, the platform supports a multi-agent mode, enabling users to create and centrally manage multiple specialized agents for different tasks. This feature proved particularly beneficial during the scenario-based training phase, as it allowed for the dynamic selection of appropriate training scenarios based on individual student needs while optimizing resource consumption through its efficient prompt-based design approach. Although the Coze platform has limitations in customization compared to open-source alternatives, its standardized bot development framework adequately met our study's primary requirements for conversational interaction.
Regarding data security, the study implemented multiple safeguards. First, participants were neither required nor encouraged to disclose personal information during AI interactions, and explicit instructions were given to avoid sharing such details. Second, all dialogue records remained privately accessible to individual participants unless voluntarily submitted for research purposes. Collected data were securely stored on a password-protected computer, with strict protocols to ensure confidentiality and privacy. These records will be retained for 5 years following the publication of relevant research findings before secure disposal.
In the final step, testing and obtaining feedback, 80 engineering undergraduates were invited to interact with the refined GenAI trainer. These students were required to complete the entire training process with the GenAI trainer and submit a reflective task. The guided questions for reflection were designed to gather comprehensive feedback on students' learning experiences, including their perceptions of both the training process (content design and instructional delivery format) and output (self-perceived knowledge gains and behavioral changes), as well as their constructive suggestions for further refining the GenAI trainer's functionality and educational effectiveness (for the complete set of guided questions, see Appendix 2). This systematic approach ensured a thorough evaluation of the tool's performance while maintaining focus on continuous improvement.
Data analysis
The data analysis followed Braun and Clarke's (2006) six-phase thematic analysis framework to ensure systematic and rigorous examination of the qualitative data. To enhance the validity of the analysis, the first author collaborated with an additional researcher who had over 3 years of qualitative research experience. In the first phase (familiarization), both researchers immersed themselves in the data through repeated readings of the transcripts while noting initial observations. For the second phase (generating initial codes), the two researchers independently conducted line-by-line coding of 50% of the dataset to identify meaningful units of analysis, followed by collaborative discussions to develop a preliminary codebook. During the third phase (searching for themes), the researchers examined codes for potential patterns and grouped them into candidate themes. These themes were then reviewed and refined in the fourth phase (reviewing themes) through an iterative process that assessed their internal coherence, consistency, and relevance to the research objectives. The fifth phase (defining and naming themes) involved precisely delineating each theme's scope and conceptual boundaries, supported by illustrative examples (see Fig. 2 for representative coding examples). The sixth phase (producing the report) corresponded to the writing up of the findings presented below. The final coding scheme can be found in Appendix 3.
Fig. 2. Illustrative example of qualitative thematic coding
To ensure methodological rigor, the coding process incorporated collaborative verification where both researchers independently applied the refined codebook to 20% of previously uncoded transcripts, achieving 81% agreement with a Cohen's kappa of 0.79. These metrics exceeded the established thresholds of 70% agreement (Hallgren, 2012) and Cohen’s kappa of 0.61 (Landis & Koch, 1977), indicating substantial inter-coder reliability. Discrepancies were resolved through consensus discussions, leading to final codebook adjustments. The first author then completed coding of the remaining transcripts using this validated framework.
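To make this reliability check concrete, the sketch below shows how percent agreement and Cohen's kappa can be computed from two coders' label sequences using scikit-learn; the label lists are hypothetical stand-ins, not the study's actual coding data.

```python
# Sketch: inter-coder reliability on a verification sample (hypothetical labels).
# The study reported 81% agreement and Cohen's kappa = 0.79 on 20% of transcripts.
from sklearn.metrics import cohen_kappa_score

coder_a = ["feedback", "format", "awareness", "support", "feedback", "format"]
coder_b = ["feedback", "format", "awareness", "feedback", "feedback", "format"]

# Raw proportion of identical code assignments
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
# Chance-corrected agreement
kappa = cohen_kappa_score(coder_a, coder_b)

print(f"Percent agreement: {agreement:.0%}")
print(f"Cohen's kappa: {kappa:.2f}")
```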
Findings
Following data analysis, the findings are structured around the study's three research questions, addressing: the key challenges in improving virtual teamwork competency among engineering students; the human-centered design process that integrated both student and educator insights to develop the AI trainer; and students’ perceptions of the HCD-based GenAI trainer:
What specific challenges hinder engineering students' development of effective virtual teamwork competencies in current training environments? (RQ1)
Through systematic analysis of interview transcripts, this study uncovers four barriers to virtual teamwork competency development in engineering education: lack of systemic instructional support, limited training formats, deficient evaluation and feedback mechanisms, and students' fundamental lack of teamwork understanding.
Insufficient systemic and structured support
The findings first highlight a critical deficiency in systemic and structured support for developing virtual teamwork competency among participants. Both students and educators reported a notable absence of dedicated training programs designed to cultivate these skills within academic settings. When asked about instructional strategies for fostering virtual teamwork skills, teachers predominantly cited group-based learning as their primary approach. For instance, one educator remarked, “I assign group tasks to enhance students’ teamwork skills.” However, such methods often emphasize task completion over the deliberate development of collaborative competencies, as another teacher explained: “To ensure task completion, I sometimes adjust their work or guide them through knowledge gaps.” Students confirmed this observation, noting that institutional training prioritizes project outcomes over foundational teamwork skills. One participant stated, “I think the school’s training focuses too much on the tasks themselves, like completing a project or preparing a report, but doesn’t teach us how to communicate effectively in a team, especially in remote collaboration.”
Given their unstructured design, these approaches proved inadequate; as one student observed, traditional classroom activities fail to prepare students for real-world teamwork dynamics: “In traditional classes, teamwork opportunities are scarce, and tasks rarely surface conflicts. When problems arise, I’m unprepared to resolve them.” This gap became particularly apparent in high-stakes scenarios, such as an online competition where a participant recounted: “We clashed over task delegation, but without prior training in conflict resolution, we wasted time navigating the issue ourselves.”
Limited training designs
A second critical issue lies in training designs that fail to accommodate diverse learner needs. In the few formal teamwork training sessions attended by students, they reported that existing programs predominantly employed passive, lecture-based formats that neither engaged participants nor addressed individual learning requirements. One participant criticized the excessive reliance on lectures, stating, “Nowadays, there's too much lecturing, and students are already tired of it.” Another described their experience with online teamwork training as merely “formalism,” explaining, “Many people treated it as something to get through quickly.” The challenge is exacerbated by significant variations in students' baseline competencies. As one teacher noted, “Students' levels vary considerably in their understanding and depth of knowledge. Higher-level students tend to have stronger virtual teamwork skills, while others are less proactive and weaker in collaboration.” Such variability renders one-size-fits-all lecture methods ineffective, ultimately undermining the efficacy of the training.
Deficiencies in evaluation and feedback mechanisms
The lack of systematic evaluation and process-oriented feedback on teamwork was identified as a further obstacle. Participants noted instructors focused exclusively on final outputs (e.g., reports, presentations) rather than collaboration dynamics. As one student stated: “When we work on team projects, teachers usually only look at the results, like reports or presentations, but rarely pay attention to how we collaborate during the process. We almost never receive feedback on our teamwork.” This omission deprives students of valuable opportunities to critically reflect upon and enhance their collaborative competencies.
Moreover, some instructors admitted hesitating to provide constructive criticism due to concerns about discouraging students. As one teacher explained, “When I point out problems to some students, you can clearly see them shut down—they get defensive and stop opening up, especially when teachers are present. You can almost feel their anxiety, so sometimes I let small issues slide just to avoid putting them under more stress.” This reluctance further limits students' exposure to constructive feedback during critical collaborative learning moments.
Insufficient understanding and awareness of teamwork among students
The data also revealed a fundamental gap in students’ understanding and recognition of teamwork. First, students demonstrated narrow perceptions of teamwork, as exemplified by one participant's statement: “Many of us think teamwork just means holding meetings and splitting up tasks.” Another student echoed this sentiment, pointing out that one reason for this limited perspective is the lack of collaborative opportunities: “Since we haven't had many chances to work in teams, our basic understanding is pretty weak.” Moreover, many students failed to recognize teamwork’s value. A teacher participant emphasized this concern, stating, “Many students do not realize the importance of collaboration. If they can coordinate and cooperate well, the results will far exceed what they can achieve individually.” These findings collectively suggest that improving students' collaborative skills requires addressing both their conceptual understanding of teamwork and fostering their recognition of its fundamental importance.
How can multi-stakeholder collaboration (including educators, students, and researchers) inform the design of a GenAI-based teamwork training tool? (RQ2)
The study's second phase involved a participatory design process for developing a GenAI chatbot to enhance virtual teamwork competency, synthesizing input from key stakeholders (educators and students). Designed for one-on-one student interaction, the chatbot provides personalized guidance tailored to improve these skills. The development process focused on two key aspects: form (interaction modalities) and content (instructional material) to jointly address the distinct challenges identified in the earlier findings.
Form: designing for engagement and personalization
First, the chatbot was designed around a theory–practice integrated approach. As highlighted in the earlier challenges, students often lacked a clear understanding of the theoretical underpinnings of effective team collaboration. This knowledge gap underscored the necessity of incorporating theoretical content into the training program. However, they also criticized traditional lecture-based methods for isolating theory from practice, as exemplified by one participant’s remark: “We once attended a teamwork training where the instructor focused heavily on theory without any practical exercises. When we returned to our team, we still didn’t know how to apply those theories. If there had been some case studies or simulations, it would have been more useful.” This insight underscored the imperative to bridge theory with experiential learning, ensuring that students could internalize and apply the concepts in real-world contexts.
Regarding theoretical instruction enhancement, students identified engagement as the primary shortcoming of traditional formats. As one participant noted: “During an online lecture, the instructor just read from the slides without any interaction, and everyone ended up doing other things. I think more interactive training would be more engaging.” In response, the researchers incorporated the Socratic method into the GenAI trainer’s theoretical training. By employing scaffolded questioning sequences using relevant scenarios or examples, this approach guided students to systematically connect abstract teamwork competencies to concrete online collaboration dynamics. Students progressed from identifying individual behavioral indicators to recognizing their interconnections, thereby developing a more comprehensive grasp of virtual teamwork competency.
For practical skill development, participants emphasized two design imperatives: realism and engagement. “I think the training should be closer to real-life scenarios,” one student proposed, while another added, “Incorporating gamified elements, like team-based games to learn conflict resolution, would make it more enjoyable.” To address these insights, the final design incorporated scenario-based training with gamification, where students interacted with AI-simulated teammates to complete collaborative tasks. Each student-AI interaction was scored based on whether the student demonstrated behavioral indicators of virtual teamwork competency, with 100 points required for task completion. This approach emphasized authentic interaction and process-oriented learning over mere task completion. Furthermore, recognizing the absence of feedback in traditional training programs, the new design embedded formative assessments within these scenarios. This enabled students to receive real-time, actionable feedback on their strengths and areas for improvement. As one instructor noted, “It’s important for students to first understand their strengths and weaknesses. This self-awareness helps them approach team tasks more objectively and collaborate more effectively.” By combining realistic simulations, gamified engagement, and continuous feedback, the training fostered both skill development and reflective practice.
Content: integrating real-world insights into training design
The instructional design process was systematically guided by the dual objectives of developing both theoretical understanding and practical application of virtual teamwork competencies. As outlined earlier, the theoretical component employed the Socratic method, beginning with targeted questions about what students believed were the key elements of effective online team collaboration. Building upon student responses, GenAI then introduced potential challenges related to these identified factors, while strategically guiding students to consider additional critical elements through problem-solving exercises. This pedagogical approach was structured around a behavior-oriented framework specifically developed for engineering students (Hu & Chan, 2025), which systematically outlined key behavioral indicators of virtual teamwork competency and their interrelationships. The framework comprises 15 behavioral indicators organized into three dimensions: Group task dimension (e.g., task analysis, time management), Social dimension (e.g., mutual understanding, conflict management), and Individual task dimension (e.g., ensuring quality completion, familiarity with online tools). Notably, the framework emphasizes the dynamic interplay among these dimensions—for example, illustrating how task analysis facilitates efficient work distribution or how mutual understanding enhances team cohesion and support. By grounding abstract concepts in observable behaviors, the framework provided students with concrete, actionable guidelines that enhanced real-world applicability.
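To make the framework's role in the instructional design concrete, the sketch below shows one hypothetical way its dimensions and indicators (paraphrased from the description above) could be encoded for reuse in the trainer's prompts; Hu and Chan (2025) present the framework as prose, so the exact naming here is illustrative.

```python
# Hypothetical encoding of the behavior-oriented framework (Hu & Chan, 2025).
# Only example indicators named in the text are listed; the full framework
# comprises 15 behavioral indicators across these three dimensions.
VTC_FRAMEWORK = {
    "Group task": ["task analysis", "labor division", "time management"],
    "Social": ["mutual understanding", "mutual support", "conflict management"],
    "Individual task": ["on-time task completion", "quality assurance",
                        "familiarity with online tools"],
}

# Example cross-indicator relationships the trainer highlights in dialogue,
# e.g., task analysis facilitates efficient work distribution.
INDICATOR_LINKS = [
    ("task analysis", "labor division"),
    ("mutual understanding", "mutual support"),
]
```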
The scenario-based training component was similarly rooted in authentic student experiences. Through in-depth interviews, researchers systematically documented the most significant challenges students faced in online collaborative settings. These empirical findings were then translated into realistic training scenarios, as integrating these authentic challenges fosters a deeper level of engagement and relevance (Herrington & Oliver, 2000). For example, one scenario simulated a team experiencing low morale due to perceived skill deficiencies, requiring participants to practice motivating disengaged teammates—a situation directly informed by a student's account: “During a competition, our team lacked the necessary skills compared to others…the morale was really low.” Another scenario replicated the common challenge of member disengagement as project deadlines approached, based on another student's experience: “One member started avoiding tasks…it really affected the team's morale.” In total, four such evidence-based scenarios were developed, each accompanied by carefully designed collaborative tasks that emphasized process-oriented learning objectives. These real-life-inspired scenarios became the foundation for interactive training, where the AI played the roles of teammates, and students were required to collaborate with the AI to complete tasks, simulating authentic team collaboration challenges. A summary of these scenarios and their corresponding interview data is presented in Appendix 4.
Design outcome: GenAI chatbot for virtual teamwork competency development
The final design of the GenAI chatbot for virtual teamwork competency development was implemented using the Coze platform’s MultiAgent mode, enabling the deployment of multiple AI agents with distinct roles and responsibilities. Interactions between agents were governed by meticulously designed prompts, ensuring a structured and cohesive user experience. Unlike traditional GenAI chatbots that require users to initiate questions, this chatbot actively guides students through a series of interactive training sessions. The chatbot is designed in two phases: Phase One involves reflective discussions on teamwork behavioral indicators. This includes 10 rounds of dialogue using the Socratic questioning method to identify behavioral indicators of effective virtual teamwork, followed by 10 rounds of dialogue to help participants explore the relationships between these indicators. Phase Two consists of scenario-based role-playing exercises, where students interact with a GenAI-driven teammate to address collaboration challenges presented in given scenarios. The GenAI trainer was developed using Coze's built-in Doubao-32K-Pro LLM (developed by ByteDance) in its base configuration without additional fine-tuning, selected for its rapid response time on the Coze platform and superior Chinese language capabilities—particularly suitable for our native Mandarin-speaking study participants.
Phase 1 was designed to facilitate in-depth reflective discussions, enhancing students’ comprehension of virtual teamwork dynamics. The structured interaction began with students identifying key behavioral indicators they perceived as critical for effective online collaboration. Guided by the GenAI agent, the session employed a 10-round Socratic questioning approach to systematically explore these behavioral indicators within the Virtual Teamwork Competency Framework for Engineering Students. For instance, when students mentioned communication as a crucial collaborative behavior, the GenAI would follow up with scenarios such as: “Online communication can sometimes lead to inaccurate information transmission. How would you handle that situation?” This approach encouraged students to contextualize theoretical concepts within practical challenges. At the end of each round, the agent synthesized the discussed elements, highlighted their significance, and introduced overlooked aspects. The session culminated in a tabular summary of the framework’s behavioral indicators, complete with definitions and real-world examples.
The second part of this phase consisted of 10-round interactions designed to address a critical gap in conventional teamwork training: the tendency to teach collaborative behaviors in isolation, neglecting their systemic relationships. This phase's theoretical framework is also rooted in the Virtual Teamwork Competency Framework for Engineering Students, which posits that effective virtual teamwork arises not only from mastering discrete competencies (e.g., task analysis, mutual support) but also from understanding how these competencies dynamically interact (Hu & Chan, 2025). For instance, task analysis is essential before labor division, and mutual understanding serves as the foundation for providing support and suggestions. The GenAI trainer in this phase likewise employed a question-led approach, using a mix of case studies, exploratory questions, and communication examples. For example, the agent presented a scenario: “Imagine your team is organizing a campus event. First, you analyze the tasks—such as determining the theme, budget, and timeline—and then assign roles based on each member’s strengths. This ensures a more efficient and effective division of labor.” This guided students to recognize the connection between task analysis and labor division. After ten rounds of interaction, the agent summarized these connections and presented a detailed table outlining all relevant interconnections, their definitions, and practical examples derived from the framework. This table, combined with the previous one displaying behavioral indicators, served as a guide for students, reinforcing their understanding and preparing them for the next stage of training. Figure 3 illustrates a representative student-GenAI trainer interaction from this phase.
Fig. 3. Interaction examples with the GenAI trainer in phase one and corresponding translations
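In outline, the phase-one dialogue logic described above can be sketched as follows; `llm_reply` and `get_student_input` are hypothetical callables standing in for a chat-completion API and the user interface, since on Coze this behavior is configured through agent prompts rather than custom code.

```python
# Minimal sketch of one 10-round Socratic sub-phase of the trainer.
# `llm_reply(messages)` and `get_student_input()` are hypothetical stand-ins;
# the deployed system implements this flow via Coze agent prompts.
SOCRATIC_PROMPT = (
    "You are a virtual teamwork trainer. For each behavior the student names, "
    "pose a realistic online-collaboration challenge as a follow-up question; "
    "end each round by summarizing the elements discussed and introducing one "
    "overlooked behavioral indicator."
)

def run_socratic_subphase(llm_reply, get_student_input, rounds=10):
    messages = [{"role": "system", "content": SOCRATIC_PROMPT}]
    for round_no in range(1, rounds + 1):
        messages.append({"role": "user", "content": get_student_input()})
        reply = llm_reply(messages)  # Socratic question plus mini-summary
        messages.append({"role": "assistant", "content": reply})
        print(f"[Round {round_no}] {reply}")
    # Close the sub-phase with the tabular recap of indicators and examples.
    messages.append({"role": "user", "content":
                     "Summarize all behavioral indicators discussed as a table "
                     "with definitions and real-world examples."})
    return llm_reply(messages)
```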
The scenario training phase involves immersive role-playing exercises where students address specific teamwork challenges. The process begins with the chatbot prompting the user: “What do you think is the most challenging aspect of team collaboration?” Based on the user’s response, the chatbot selects one of four predefined scenarios, each tailored to address common teamwork difficulties. Once a scenario is chosen, the chatbot introduces the scenario with collaborative tasks and rules for the training. To simulate authentic team dynamics, the chatbot assumes the role of "Li Ming," a virtual AI teammate designed to engage users in collaborative problem-solving tasks (see Appendix 4 for detailed scenarios and collaborative tasks). The researchers defined Li Ming's identity in the prompt as an undergraduate student in the mechanical engineering department, ensuring that the character's personality and tone were faithfully portrayed. Additionally, Li Ming's responses needed to evolve progressively in reaction to user inputs. To enhance interaction realism, the researchers embedded dialogue examples within the chatbot's prompt framework, providing concrete models for Li Ming's responses that mirror natural team dynamics.
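The persona instructions can be illustrated with the hypothetical English reconstruction below; the study's actual prompts were written in Chinese and contained fuller dialogue examples, so this sketch conveys only their general shape.

```python
# Hypothetical reconstruction of the "Li Ming" persona prompt (the original
# prompts were in Chinese and included multiple embedded dialogue examples).
LI_MING_PROMPT = """\
Role: You are "Li Ming", an undergraduate in the mechanical engineering
department, acting as the user's virtual teammate in the chosen scenario.
Behavior rules:
- Stay in character with a natural, student-like tone.
- Evolve your responses in reaction to the user's inputs (e.g., become more
  cooperative when motivated, more withdrawn when ignored).
- Present the scenario's collaborative task and enforce its rules.
Style model (one embedded example):
  User: "Li Ming, could you handle the data analysis? Your skills there are
         stronger than mine."
  Li Ming: "I can try, but I'm worried about the deadline... could we split
            that part between us?"
"""
```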
Users begin with an initial score of 50 points, which is then adjusted based on each message’s contribution to collaboration. The chatbot's scoring mechanism was carefully designed to operationalize the behavioral indicators from the Virtual Teamwork Competency Framework for Engineering Students (Hu & Chan, 2025). To ensure transparent and consistent evaluation, the system's prompts first explicitly outlined the scoring criteria: for example, on the positive dimension, contributions ranged from basic collaborative etiquette (+1 point) to more advanced demonstrations of teamwork, such as constructive feedback with alternative solutions (+3 points) and the highest-level integrative solutions that synthesized multiple perspectives (+5 points). The negative dimension similarly followed a progressive scale, from minor infractions like low-engagement responses (−1 point) to more serious issues including task avoidance (−3 points) and outright hostile communication (−5 points). Concrete examples were provided for each criterion, which helped ensure consistent application of the scoring rubric across interactions. Additionally, before the formal study, the system underwent a pilot test with three participants to validate the scoring system’s fairness and usability from a user perspective, and their feedback was used to refine the prompts further. As students progress, their scores reflect their collaborative performance, with 100 points marking successful completion of the challenge. After each interaction, users receive immediate feedback detailing points gained or lost, along with explanations for the adjustments. Once a user’s score reaches or exceeds 100 points, the chatbot transitions to a comprehensive feedback phase, providing a detailed performance summary that highlights strengths and areas for improvement. A Phase 2 dialogue example is shown in Fig. 4.
Fig. 4. Interaction examples with the GenAI trainer in phase two and corresponding translations
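In pseudocode terms, the scoring loop described above might look like the sketch below; the point values come from the prose, but in the deployed chatbot each user turn was classified by the LLM against prompt-specified criteria, so `classify_turn` is a hypothetical stand-in.

```python
# Sketch of the gamified scoring loop: start at 50 points, complete at 100.
# `classify_turn` and `get_user_turn` are hypothetical stand-ins; the real
# system performed this evaluation inside the LLM via prompt instructions.
RUBRIC = {
    "basic_etiquette": 1,          # basic collaborative etiquette
    "constructive_feedback": 3,    # feedback paired with an alternative solution
    "integrative_solution": 5,     # synthesizes multiple perspectives
    "low_engagement": -1,          # minimal, disengaged reply
    "task_avoidance": -3,          # dodging assigned work
    "hostile_communication": -5,   # outright hostile messages
}

def run_scenario(classify_turn, get_user_turn, start=50, target=100):
    score = start
    while score < target:
        label = classify_turn(get_user_turn())  # one of the RUBRIC keys
        delta = RUBRIC[label]
        score += delta
        # Immediate feedback with an explanation after every interaction.
        sign = "+" if delta > 0 else ""
        print(f"{sign}{delta} points ({label}); current score: {score}")
    print("Challenge complete: generating summary of strengths and weaknesses.")
```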
What are students' perceptions of the HCD-developed GenAI virtual teamwork competency trainer, including their learning experiences and suggestions for improvement? (RQ3)
This study engaged 80 engineering undergraduates in evaluating the co-designed GenAI trainer, 72 of whom completed the full training and reflection tasks. This section summarizes their perceptions of its effectiveness in addressing teamwork challenges, alongside their suggestions for improvement.
Transforming training formats through interactive learning
Participants contrasted the conventional lecture-based methods with the dynamic, interactive approach of the HCD-based trainer. By integrating theoretical instruction with personalized, scenario-based practice, the AI trainer was praised for its ability to accelerate learning and reinforce concepts through immediate application. As one student noted, “This AI trainer helps us learn teamwork faster because it provides immediate practice after explaining the concepts.” Students particularly valued its guided, inquiry-based methodology, exemplified by one participant’s observation: “When I struggled with a concept, the trainer broke it down by posing hypothetical scenarios and walking me through the analysis step-by-step, making the learning process much more manageable.” Others highlighted the AI’s capacity to stimulate deeper reflection through targeted questioning, with one remarking, “Its questions were always sharp and relevant, pushing me to think more deeply about teamwork dynamics.” Notably, participants reported heightened immersion during scenario-based interactions, attributing this to the AI’s human-like conversational style. As one student observed, “The AI’s tone often closely resembled natural human speech, which created a sense of realism and actually made me more willing to engage in dialogue.” Together, these features show how AI can reshape training into a dynamic, learner-centered experience.
Providing structured and immediate feedback
Beyond interactive learning, the trainer’s structured feedback mechanism addresses a critical gap in traditional training programs, where formative evaluation is often lacking. Participants particularly emphasized the value of AI-generated feedback, highlighting its distinct advantages over conventional methods. Specifically, the system delivers timely and personalized feedback, enabling immediate performance adjustments. As one participant remarked, “The instant feedback allowed me to assess how my communication choices influenced team dynamics, with scores and detailed explanations clarifying my strengths and weaknesses in real time.” Additionally, the AI’s capacity for objective and progressive guidance was frequently noted. One participant explained, “The strength of GenAI lies in its ability to provide consistent, unbiased feedback while progressively deepening its analysis with each interaction, helping me systematically address communication and collaboration challenges.”
Moreover, participants praised the system’s comprehensive feedback approach, which integrates real-time responses with holistic session summaries. A participant observed, “Each interaction received immediate scoring, while the final evaluation offered broader insights into my weaknesses and team collaboration patterns.” This structured yet adaptive system not only enhanced immediate performance but also supported long-term skill development. Many participants underscored the actionable accuracy of the feedback, with one stating, “The AI identified issues I had overlooked, making the feedback exceptionally practical and insightful.”
Deepening understanding and awareness of teamwork
The AI trainer proved effective in deepening students' understanding of teamwork and strengthening their appreciation for collaboration. One student reflected, “The scenarios didn't just teach methods—they made me realize why teamwork matters. Seeing how active engagement reduced conflicts changed my approach to group work.” This shift in perspective was further reinforced by the trainer's ability to surface overlooked aspects of collaboration, as noted by another participant: “Interacting with the AI robot encouraged me to think more deeply about online teamwork challenges. It introduced methods I hadn't previously considered, enhancing my understanding of how to collaborate online effectively.” The depth of this cognitive shift also manifested in students' reflection on their virtual teamwork competency, as illustrated by one participant's introspection: “From the beginning, the AI guided me through structured reflection, and its feedback during scenario training helped me realize that I'm good at communication and motivating team members, but I need to improve in task allocation and time management.” These qualitative reports collectively demonstrate the trainer's success in cultivating not just procedural knowledge, but metacognitive awareness of teamwork principles.
Enhancing real-world collaboration skills through authentic scenarios
The study also found that the AI trainer's scenario-based approach effectively developed students' practical collaboration skills for real-world applications. Unlike traditional training methods focused solely on task completion, this system emphasized cultivating practical virtual teamwork competencies. As one student explained, “Through the conversations with the AI chatbot, I learned how to communicate effectively in a team and gained practical skills that I can apply in real-life situations.” Notably, the system established a psychologically secure experimental environment conducive to skill development. As one participant elaborated, “The AI-mediated team collaboration training offers distinct advantages by creating a safe, replicable space for iterative practice and refinement of collaborative techniques, while simultaneously enabling the simulation of diverse situational responses.”
Participants particularly valued the authentic scenarios co-developed with researchers, which they found highly relevant to actual teamwork experiences, helping them prepare for real challenges. One commented, “The scenarios designed by the AI robot are very realistic and align with the team collaboration situations I encounter in real life.” Another added, “The scenarios, such as preparing a PPT presentation or participating in a competition, closely mirrored real-world challenges like task allocation, team dynamics, and problem-solving. This allowed me to gain practical experience in a virtual environment.” These findings demonstrate that the scenario-based approach equips students with practical skills to tackle real-life collaborative challenges.
Suggestions for improvement
Building on the AI trainer's demonstrated effectiveness in developing virtual teamwork skills, students provided valuable suggestions for further enhancing the system's impact. Many participants emphasized the need for more diverse and customizable training scenarios that could better reflect the complexity of actual team environments. One student suggested, “I hope the AI can simulate team members with different personalities and cultural backgrounds to help us practice handling diverse team dynamics,” while another proposed allowing user-generated scenarios: “It would be great if users could provide their own scenarios, and the AI could analyze and respond to them. This would make the training more applicable to real-life situations.”
Beyond content improvements, students highlighted opportunities to enhance the interface design and overall immersive quality of the training experience. Several participants recommended more sophisticated visual elements, with one noting, “The interface could be improved, such as creating AI avatars to replace the current icons,” and another suggesting, “If possible, I'd like to see expressive character designs and voice narration features added to the training.” These interface enhancements were seen as crucial for increasing engagement and making the simulations feel more authentic.
The desire for deeper immersion emerged as another theme across student feedback. As one participant explained, “I hope to feel more immersed in the team collaboration process, enhancing the realism and engagement of the training.” Others called for more complex team interactions, with a student sharing, “I'd like to hear different perspectives from multiple team members rather than just one-on-one conversations. For example, when facing challenges like uncooperative team members, I would rethink my role and strategies within the team.” These thoughtful recommendations reflect students' engagement with the training system while pointing toward meaningful opportunities to strengthen its ability to prepare learners for real-world teamwork challenges.
Discussion
This study demonstrates how Human-Centered Design (HCD) principles can effectively guide the development of Generative AI (GenAI) tools for virtual teamwork training. By systematically implementing a three-phase HCD approach, we developed a GenAI trainer that participants evaluated as effective. These findings underscore the value of HCD in creating pedagogically sound AI solutions, as well as the potential of GenAI for competency development.
The application of HCD in GenAI educational design
The rapid integration of GenAI technologies into educational settings presents both transformative opportunities and significant implementation challenges (Chan & Hu, 2023). While these advanced systems offer unprecedented capabilities in scalability, adaptability, and personalized instruction, their true educational value fundamentally depends on adopting robust design frameworks that consistently prioritize pedagogical objectives over purely technical considerations (Giannakos et al., 2024). Addressing this critical need, our research implements a comprehensive three-phase Human-Centered Design framework that systematically incorporates stakeholder perspectives at every stage of development—from initial conception to final implementation.
The foundational phase of our approach involved conducting in-depth, semi-structured interviews with engineering instructors and students to examine current challenges in virtual teamwork training. Through this process, we identified specific areas where existing training methods fell short of meeting both instructor and student requirements. By focusing on these documented shortcomings rather than assumed needs, we ensured our solution addressed verifiable problems in virtual teamwork education (Antonenko et al., 2017). This diagnostic approach aligns with educational research emphasizing that effective AI integration should bridge the divide between existing instructional challenges and unaddressed learner requirements (Gârdan et al., 2025; Zawacki-Richter et al., 2019).
In the second phase, we engaged in an intensive co-design process with stakeholders to develop the GenAI-powered virtual teamwork trainer. Structured around solving the specific problems identified in Phase 1, this collaborative approach ensured the tool addressed real user barriers. Research indicates that tools developed with a clear understanding of user challenges better support instructional practices and mitigate classroom-specific difficulties (Cha & Ahn, 2020), thereby promoting higher learner motivation and retention (Alhosban et al., 2024). Moreover, the active involvement of students and teachers as co-designers enhanced transparency and explainability, factors critical for building trust in AI systems (Wünn et al., 2024).
The third phase focused on collecting and incorporating user feedback to evaluate and refine the tool. Gathering student perspectives was particularly vital, as research emphasizes that learner insights are essential for assessing educational technology’s usability and effectiveness (Tawfik et al., 2024). This iterative process allowed for data-driven adjustments based on real user experiences. Furthermore, soliciting feedback empowered users to contribute actively to the tool’s evolution, fostering a sense of ownership and engagement (Iivari, 2009). Such involvement not only enhances the tool’s relevance but also increases user satisfaction and long-term adoption.
The positive outcomes of this approach provide empirical support for HCD’s efficacy in educational AI development. This is consistent with existing research, as demonstrated by Bissett-Johnson and Radcliffe (2021), which highlights how user-focused design in educational tools can create meaningful interventions that align closely with real-world learning outcomes. Additionally, studies indicate that when students perceive technology as genuinely enhancing their abilities, they are significantly more likely to adopt and integrate it into their practices (e.g., Lai et al., 2012). Thus, the HCD approach not only mitigates resistance to change but also facilitates sustainable, pedagogically grounded integration of AI in education.
The application of GenAI in enhancing collaborative competencies
In this study, we specifically applied GenAI technology to address challenges in developing virtual teamwork competencies. Through stakeholder interviews, we identified four critical limitations in current online collaboration training programs: (1) insufficient systematic and structured support for developing these competencies, (2) limited and often ineffective training formats, (3) a lack of comprehensive evaluation systems, and (4) students' insufficient understanding and awareness of the importance of teamwork skills. The first three concern instructional design, namely inadequate pedagogical support in the instructional content, reliance on singular and ineffective training formats, and the absence of comprehensive evaluation; the fourth concerns learners' own awareness. These findings align with existing literature, which highlights that current teamwork competency training tends to overemphasize task completion and technological solutions in content design while neglecting process-oriented feedback and evaluation (e.g., Singleton et al., 2022). Consequently, there is little focus on cultivating students’ actual collaborative skills during online interactions (Hu & Chan, 2024; Myers et al., 2014). Moreover, the rigid and monotonous nature of traditional training formats often fails to equip students with practical problem-solving skills, such as conflict resolution, leaving them ill-prepared to apply theoretical knowledge in real-world collaborative settings (Gutiérrez et al., 2022).
The co-designed GenAI trainer developed in this study addresses these challenges by leveraging several unique advantages of GenAI technology. First, the GenAI trainer provides personalized learning experiences, facilitating a crucial shift in the educational paradigm from passive knowledge acquisition to active participation and practical application, which enhances both student engagement and learning outcomes (Chi & Wylie, 2014). For example, in the theoretical training components, the GenAI trainer employs Socratic questioning techniques to guide students toward a deeper understanding of virtual teamwork competencies. Participant feedback confirms this approach's efficacy in stimulating reflective practice and deepening conceptual understanding—findings that corroborate established research on Socratic methods' effectiveness in developing critical thinking (Elder & Paul, 1998). However, implementing such personalized, dialog-based instruction is particularly challenging in traditional classroom settings with large student numbers and limited instructor time—precisely where GenAI's scalability provides a distinct advantage. Recent studies support this application, demonstrating that GenAI's capacity to personalize instructional content fosters deeper understanding and more meaningful student participation (ElSayary, 2024; Salinas-Navarro et al., 2024).
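This study does not reproduce the trainer's prompts, so the following is only a minimal sketch of how Socratic questioning can be operationalized in a GenAI chatbot, assuming an OpenAI-style chat-completions API; the prompt wording, model choice, and function names are illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of a Socratic-questioning tutor loop.
# The prompt text and model name are assumptions, not the study's system.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SOCRATIC_SYSTEM_PROMPT = """You are a trainer for virtual teamwork competency.
Never lecture or give the answer directly. Instead:
1. Ask one probing question at a time about the learner's last statement.
2. Surface hidden assumptions (e.g., "What makes you think silence means agreement?").
3. When the learner states a principle, ask for a concrete example from their own teams.
4. Close each topic by asking the learner to summarize what they now believe and why."""

def socratic_turn(history: list[dict]) -> str:
    """Send the running dialogue and return the tutor's next probing question."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "system", "content": SOCRATIC_SYSTEM_PROMPT}] + history,
    )
    return response.choices[0].message.content

history = [{"role": "user", "content": "Good virtual teams just need a clear task list."}]
print(socratic_turn(history))
```

The key design point is that the system prompt constrains the model to question rather than explain, which is what distinguishes a Socratic tutor from a generic answer-giving chatbot.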
Another key feature of our GenAI trainer is its ability to simulate authentic team conflicts and collaborative challenges through pre-programmed AI role-playing. This innovative approach fundamentally addresses a well-documented limitation in conventional training—the lack of exposure to complex interpersonal dynamics. By immersing learners in carefully designed, psychologically safe yet challenging scenarios that would be difficult to replicate in physical settings, the system enables iterative development of real-world collaboration strategies. The pedagogical efficacy of this approach is supported by previous research. For instance, Mollick et al. (2024) describe AI-generated mentors, role-players, and instructor-facing evaluators that allow students to practice skills in simulated scenarios, thereby fostering higher-order problem-solving skills. Similarly, studies in professional education—particularly in business and law—have shown that AI-driven simulations significantly enhance critical thinking and communication competencies (Wu, 2024). By replicating high-stakes, low-frequency scenarios, GenAI ensures learners gain experiential training that would otherwise be inaccessible in conventional settings.
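The study describes these role-plays as pre-programmed but does not publish a scenario format. As a hedged illustration only, one plausible encoding is a declarative scenario object rendered into role-play instructions for the model; the schema, field names, and scenario content below are hypothetical, not taken from the authors' system.

```python
# Hypothetical scenario definition for AI role-play.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    title: str
    setting: str                 # the collaborative context the learner is placed in
    persona: str                 # who the AI plays, including its conflict behavior
    target_skills: list[str]     # competencies the scenario is meant to exercise
    escalation_cues: list[str] = field(default_factory=list)

    def to_system_prompt(self) -> str:
        """Render the scenario as role-play instructions for the model."""
        return (
            f"Role-play scenario: {self.title}. Setting: {self.setting}. "
            f"You play this team member: {self.persona}. "
            f"Stay in character; escalate tension if the learner ignores "
            f"these cues: {', '.join(self.escalation_cues)}. "
            f"The learner should practice: {', '.join(self.target_skills)}."
        )

uncooperative_member = Scenario(
    title="Slide deck due in 48 hours",
    setting="A four-person student team preparing a competition presentation online",
    persona="A teammate who misses check-ins and resists the agreed task split",
    target_skills=["conflict resolution", "task re-allocation", "assertive communication"],
    escalation_cues=["learner assigns blame", "learner makes vague requests"],
)
print(uncooperative_member.to_system_prompt())
```

Keeping scenarios declarative in this way would let instructors add new conflicts without touching the dialogue logic, which matches the co-design workflow the study describes.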
The GenAI trainer also excels in providing immediate, detailed feedback on students' virtual teamwork competency development. Compared with traditional training methods, where feedback may be delayed or limited by instructor availability, this real-time feedback capability significantly accelerates learning. Importantly, the GenAI trainer's feedback is also consistently constructive, objective, and delivered in a psychologically safe environment—characteristics that make it particularly acceptable and valuable to students. This application aligns with emerging research on GenAI's role in higher education, where it has been shown to enhance formative assessment processes by providing individualized, peer-simulated feedback (Mcguire et al., 2024). Systematic reviews of educational technology implementations also indicate that GenAI-powered feedback systems can significantly improve accessibility, reduce instructor workload, and enhance student engagement (Lee & Moore, 2024).
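To make the two-level feedback design that participants described (immediate per-turn scoring plus a holistic session summary) concrete, here is a minimal sketch; the rubric dimensions, the call_llm stub, and the JSON exchange format are assumptions for illustration, not the study's implementation.

```python
# Hypothetical two-level feedback pipeline: score each learner turn as it
# happens, then summarize the whole session at the end.
import json

RUBRIC = ["communication clarity", "task allocation", "conflict handling"]

def call_llm(prompt: str) -> str:
    """Stub for any chat-completion backend that returns plain text."""
    raise NotImplementedError("wire in your model provider here")

def score_turn(learner_message: str) -> dict:
    """Immediate feedback: a 1-5 score plus one actionable tip per dimension."""
    prompt = (
        "Rate this message from a virtual-team member on the dimensions "
        f"{RUBRIC}, 1-5 each, and give one concrete improvement tip per "
        f"dimension. Reply as JSON.\nMessage: {learner_message}"
    )
    return json.loads(call_llm(prompt))

def summarize_session(turn_scores: list[dict]) -> str:
    """Holistic feedback: recurring strengths, weaknesses, and next steps."""
    prompt = (
        "Here are per-turn rubric scores from one training session: "
        f"{json.dumps(turn_scores)}. Summarize recurring strengths and "
        "weaknesses and suggest two priorities for the learner's next session."
    )
    return call_llm(prompt)
```

Separating the per-turn scorer from the session summarizer mirrors the distinction participants drew between "immediate scoring" and the "final evaluation" of broader collaboration patterns.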
The GenAI trainer extends beyond immediate training benefits by advancing two key areas. First, it promotes self-regulated learning by fostering metacognitive awareness—the system’s personalized feedback and scenario-based design encourage learners to critically reflect on and adapt their collaborative strategies. Second, the trainer can serve as an innovative assessment tool, capturing interaction data during AI-mediated simulations to provide detailed and objective insights into individual competencies that traditional evaluation methods often overlook. Together, these capabilities position the GenAI trainer not just as a training solution, but as a transformative platform for the continuous improvement and certification of virtual teamwork competency in educational and professional settings.
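The paper frames this assessment use conceptually rather than specifying a data model; a minimal sketch of the interaction capture it implies, assuming a simple JSON-lines event log with an entirely hypothetical schema, might look like:

```python
# Hypothetical interaction log so sessions can be re-analyzed for
# competency assessment later; the schema is an assumption.
import json
import time
from pathlib import Path

LOG_PATH = Path("session_log.jsonl")

def log_event(learner_id: str, scenario: str, role: str, text: str,
              scores: dict | None = None) -> None:
    """Append one dialogue event (learner or AI turn, with optional scores)."""
    record = {
        "timestamp": time.time(),
        "learner_id": learner_id,
        "scenario": scenario,
        "role": role,      # "learner" or "ai"
        "text": text,
        "scores": scores,  # per-turn rubric scores, if available
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```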
Conclusion
This study makes significant contributions to the ongoing theoretical discourse on integrating Human-Centered Design principles with AI-driven educational tools. By systematically demonstrating the efficacy of our three-phase HCD approach (identifying and specifying user needs, ideating and co-designing solutions, and testing and obtaining feedback), we provide researchers and practitioners with a replicable model for developing educational technologies that successfully balance pedagogical rigor with user-centric adaptability. Following this carefully designed process, we created a specialized GenAI chatbot trainer focused specifically on developing virtual teamwork competency, which combines Socratic questioning techniques with realistic scenario-based simulations of collaborative challenges. Extensive feedback from student participants confirmed that this co-designed GenAI trainer effectively addressed the key issues identified in current training programs while also providing valuable insights for future optimizations and enhancements.
Limitations and future directions
While this study provides valuable insights, several limitations should be noted. First, the participant pool was limited to a single higher education institution in mainland China, which may restrict the generalizability of the findings to other cultural or institutional contexts. Second, the evaluation primarily relied on student self-reported feedback; while this approach provides useful insights, the findings could be strengthened by incorporating more objective assessment measures and longitudinal performance data for validation. To further advance this line of research, several promising directions can be explored. First, employing mixed-method evaluation approaches would enable a more comprehensive assessment of both short-term learning outcomes and long-term competency retention, as well as the transfer of skills to real-world team environments. Additionally, integrating emerging technologies—such as combining virtual reality (VR) with the current GenAI trainer—could create more immersive and realistic collaborative training experiences. Finally, expanding beyond the one-on-one format to develop multi-user scenarios would better simulate authentic team dynamics, further enhancing the training system’s practical applicability.
Acknowledgements
Not applicable.
Author contributions
HWJ: Conceptualization; Formal Analysis; Investigation; Methodology; Resources; Visualization; Writing – Original Draft Preparation; Writing – Review & Editing. CKY: Conceptualization; Supervision; Writing – Review & Editing.
Funding
Not applicable.
Data availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Declarations
Competing interests
The authors declare that they have no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
Adel, A., Ahsan, A., & Davison, C. (2024). ChatGPT promises and challenges in education: Computational and ethical perspectives. Education Sciences. https://doi.org/10.3390/educsci14080814
Alfaro Arias, V., Hurst, A., & Perr, A. (2020). Designing a remote framework to create custom assistive technologies. Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, 1–4. https://doi.org/10.1145/3373625.3418022
Alfredo, R., Echeverria, V., Jin, Y., Yan, L., Swiecki, Z., Gašević, D., & Martinez-Maldonado, R. (2024). Human-centred learning analytics and AI in education: A systematic literature review. Computers and Education: Artificial Intelligence, 6, 100215. https://doi.org/10.1016/j.caeai.2024.100215
Alhosban, A., Amoush, R., & Al-Ababneh, H. (2024). ALT-D: Enhancing accessibility with an adaptive learning technologies assessment model for students with disabilities. 2024 IEEE 30th International Conference on Telecommunications (ICT), 1–5. https://doi.org/10.1109/ICT62760.2024.10606129
AlZoubi, D., Baran, E., Karabulut-Ilgu, A., Morales, A. S., & Gilbert, S. B. (2024). From concept to classroom: Developing instructor dashboards through human centered design. Computers and Education Open, 7, 100234. https://doi.org/10.1016/j.caeo.2024.100234
Antonenko, P. D., Dawson, K., & Sahay, S. (2017). A framework for aligning needs, abilities and affordances to inform design and practice of educational technologies. British Journal of Educational Technology, 48.
Bissett-Johnson, K., & Radcliffe, D. F. (2021). Engaging engineering students in socially responsible design using global projects. European Journal of Engineering Education, 46.
Branch, R. M. (2009). Instructional design: The ADDIE approach. Springer US. https://doi.org/10.1007/978-0-387-09506-6
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3.
Bushnell, D. S. (1990). Input, process, output: A model for evaluating training. Training and Development Journal, 44.
Cha, H. J., & Ahn, M. L. (2020). Design and development of a smart-tool prototype to promote differentiated instruction: A user-centered design approach. Interactive Learning Environments, 28.
Chan, C. K. Y., & Colloton, T. (2024). Generative AI in higher education: The ChatGPT effect. Routledge. https://doi.org/10.4324/9781003459026
Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20.
Chi, M. T. H., & Wylie, R. (2014). The ICAP framework: Linking cognitive engagement to active learning outcomes. Educational Psychologist, 49.
Croy, W., & Eva, N. (2018). Student success in teams: Intervention, cohesion and performance. Education and Training, 60, 1041–1056. https://doi.org/10.1108/ET-11-2017-0174
Darling, M. G., Owusu, S. K., Botchwey, M., & Asenso, D. (2024). The dark side of artificial intelligence in education: A critical analysis of its impact on learners aged 12–14 years. Journal of Artificial Intelligence, Machine Learning and Neural Network, 4.
Dimitriadis, Y., Martínez-Maldonado, R., & Wiley, K. (2021). Human-centered design principles for actionable learning analytics. In Research on e-learning and ICT in education (pp. 277–296). Springer. https://doi.org/10.1007/978-3-030-64363-8_15
Dincă, M., Luştrea, A., Craşovan, M., Oniţiu, A., & Berge, T. (2023). Students’ perspectives on team dynamics in project-based virtual learning. SAGE Open, 13.
Dumond, P. (2022). Exploring virtual methods for teaching engineering teamwork. Proceedings of the Canadian Engineering Education Association (CEEA). https://doi.org/10.24908/pceea.vi.15975
Elder, L., & Paul, R. (1998). The role of Socratic questioning in thinking, teaching, and learning. The Clearing House. https://doi.org/10.1080/00098659809602729
ElSayary, A. (2024). Integrating generative AI in active learning environments: Enhancing metacognition and technological skills. Journal of Systemics, Cybernetics and Informatics, 22.
Fatani, M., & Banjar, H. (2024). Web-based expert bots system in identifying complementary personality traits and recommending optimal team composition. International Journal of Advanced Computer Science and Applications (IJACSA), 15.
Følstad, A., & Knutsen, J. (2010). Online user feedback in early phases of the design process: Lessons learnt from four design cases. Advances in Human-Computer Interaction, 2010.
Friedrich, J., Brückner, A., Mayan, J., Schumann, S., Kirschenbaum, A., & Zinke-Wehlmann, C. (2024). Human-centered AI development in practice—insights from a multidisciplinary approach. Zeitschrift für Arbeitswissenschaft, 78.
Gârdan, I. P., Manu, M. B., Gârdan, D. A., Negoiță, L. D. L., Paștiu, C. A., Ghiță, E., & Zaharia, A. (2025). Adopting AI in education: Optimizing human resource management considering teacher perceptions. Frontiers in Education. https://doi.org/10.3389/feduc.2025.1488147
Garreta-Domingo, M., Sloep, P. B., & Hernández-Leo, D. (2018). Human-centred design to empower “teachers as designers”. British Journal of Educational Technology, 49.
Giannakos, M., Azevedo, R., Brusilovsky, P., Cukurova, M., Dimitriadis, Y., Hernandez-Leo, D., Järvelä, S., Mavrikis, M., & Rienties, B. (2024). The promise and challenges of generative AI in education. Behaviour & Information Technology. https://doi.org/10.1080/0144929X.2024.2394886
Griesbaum, J., & Gortz, M. (2010). Using feedback to enhance collaborative learning: An exploratory study concerning the added value of self- and peer-assessment by first-year students in a blended learning lecture. International Journal on E-Learning, 9.
Gutiérrez, B. F., Glimäng, M. R., Sauro, S., & O’Dowd, R. (2022). Preparing students for successful online intercultural communication and collaboration in virtual exchange. Journal of International Students, 12, 149–167. https://doi.org/10.32674/jis.v12iS3.4630
Hallgren, K. A. (2012). Computing inter-rater reliability for observational data: An overview and tutorial. Tutorials in Quantitative Methods for Psychology, 8.
Hao, X., & Cukurova, M. (2023). Exploring the effects of “AI-generated” discussion summaries on learners’ engagement in online discussions. In N. Wang, G. Rebolledo-Mendez, V. Dimitrova, N. Matsuda, & O. C. Santos (Eds.), Artificial intelligence in education: Posters and late breaking results, workshops and tutorials, industry and innovation tracks, practitioners, doctoral consortium and blue sky (pp. 155–161). Springer Nature Switzerland.
Heiner, C. E., Schnaithmann, C., Kaiser, N., & Hagen, R. (2023). Fostering student participation with design thinking in higher education. International Journal of Management and Applied Research, 10.
Herrington, J., & Oliver, R. (2000). An instructional design framework for authentic learning environments. Educational Technology Research and Development, 48.
Hu, W., & Chan, C. K. Y. (2024). Evaluating technological interventions for developing teamwork competency in higher education: A systematic review and meta-ethnography. Studies in Educational Evaluation, 83, 101382. https://doi.org/10.1016/j.stueduc.2024.101382
Hu, W., & Chan, C. K. Y. (2025). Closing the evaluation gap: Developing a behavior-oriented framework for assessing virtual teamwork competency (arXiv:2504.14531). arXiv. https://doi.org/10.48550/arXiv.2504.14531
IDEO. (2015). The field guide to human-centered design (1st ed.). Design Kit.
Iivari, N. (2009). “Constructing the users” in open source software development: An interpretive case study of user participation. Information Technology & People, 22.
Ikonen, J., Knutas, A., Wu, Y., & Agudo, I. (2015). Is the world ready or do we need more tools for programming related teamwork? Proceedings of the 15th Koli Calling Conference on Computing Education Research, 33–39. https://doi.org/10.1145/2828959.2828978
Ingram, C., Langhans, T., & Perrotta, C. (2022). Teaching design thinking as a tool to address complex public health challenges in public health students: A case study. BMC Medical Education, 22.
Kelly, A. E., Clinton-Lisell, V., & Klein, K. A. (2022). Enhancing college students’ online group work perceptions and skills using a utility-value intervention. Online Learning, 26.
Kloos, C. D., Dimitriadis, Y., Hernández-Leo, D., Alario-Hoyos, C., Martínez-Monés, A., Santos, P., Muñoz-Merino, P. J., Asensio-Pérez, J. I., & Safont, L. V. (2022). H2O Learn - hybrid and human-oriented learning: Trustworthy and human-centered learning analytics (TaHCLA) for hybrid education. 2022 IEEE Global Engineering Education Conference (EDUCON), 94–101. https://doi.org/10.1109/EDUCON52537.2022.9766770
Kong, S.-C., & Yang, Y. (2024). A human-centered learning and teaching framework using generative artificial intelligence for self-regulated learning development through domain knowledge learning in K–12 settings. IEEE Transactions on Learning Technologies, 17, 1562–1573. https://doi.org/10.1109/TLT.2024.3392830
Lai, C., Wang, Q., & Lei, J. (2012). What factors predict undergraduate students’ use of technology for learning? A case from Hong Kong. Computers & Education, 59.
Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33.
Lee, S. S., & Moore, R. L. (2024). Harnessing generative AI (GenAI) for automated feedback in higher education: A systematic review. Online Learning, 28.
Linnes, C. (2020). Embracing the challenges and opportunities of change through electronic collaboration. International Journal of Information Communication Technologies and Human Development, 12.
Lipnack, J., & Stamps, J. (2008). Virtual teams: People working across boundaries with technology. John Wiley & Sons.
Liu, J., Li, S., & Dong, Q. (2024). Collaboration with generative artificial intelligence: An exploratory study based on learning analytics. Journal of Educational Computing Research, 62.
Mcguire, A., Qureshi, W., & Saad, M. (2024). A constructivist model for leveraging GenAI tools for individualized, peer-simulated feedback on student writing. International Journal of Technology in Education, 7.
Mohseni, Z., Masiello, I., & Martins, R. M. (2023). Co-developing an easy-to-use learning analytics dashboard for teachers in primary/secondary education: A human-centered design approach. Education Sciences, 13.
Mollick, E., Mollick, L., Bach, N., Ciccarelli, L. J., Przystanski, B., & Ravipinto, D. (2024). AI agents and education: Simulated practice at scale (arXiv:2407.12796). arXiv. https://doi.org/10.48550/arXiv.2407.12796
Myers, T. S., Blackman, A., Andersen, T., Hay, R., Lee, I., & Gray, H. (2014). Cultivating ICT students’ interpersonal soft skills in online learning environments using traditional active learning techniques. Journal of Learning Design, 7.
Nagy, E., Sik, D., Biczo, Z., Zimányi, K., Pörzse, G., & Molnár, G. (2024). Advanced digital and artificial intelligence-based solutions for interactive, collaborative learning support. 2024 IEEE 7th International Conference and Workshop Óbuda on Electrical and Power Engineering (CANDO-EPE), 103–108. https://doi.org/10.1109/CANDO-EPE65072.2024.10772869
Pazos, P., & Magpili Smith, N. C. (2015). Facilitating team processes in virtual team projects through a web-based collaboration tool and instructional scaffolds. ASEE Annual Conference and Exposition, Conference Proceedings, 122.
Pazos, P., Magpili, N., Zhou, Z., & Rodriguez, L. (2016). Developing critical collaboration skills in engineering students: Results from an empirical study. 2016 ASEE Annual Conference Proceedings. https://doi.org/10.18260/p.26750
Rodríguez-Ortiz, M. A., Avalos, D. G. L., Quiles, K. M. A., & Gaytan-Lugo, L. S. (2024). From transcription to empathy: Employing artificial intelligence tools in user-centered design for an online assessment platform. Avances en Interacción Humano-Computadora, 9.
Romero, M. (2024). Collaborative design of AI-enhanced learning activities (arXiv:2407.06660). arXiv.
Salinas-Navarro, D. E., Vilalta-Perdomo, E., Michel-Villarreal, R., & Montesinos, L. (2024). Using generative artificial intelligence tools to explain and enhance experiential learning for authentic assessment. Education Sciences, 14.
Sanders, E. B.-N., & Stappers, P. J. (2008). Co-creation and the new landscapes of design. CoDesign, 4.
Schmidt, M., Lu, J., Huang, R., Francois, M.-S., Lee, M., Wang, X., & Feijóo-García, P. G. (2024). Participatory, human-centered, equitable, neurodiverse, and inclusive XR: Co-design of extended reality with autistic users. Educational Technology & Society, 27.
Schmutz, J. B., Outland, N., Kerstan, S., Georganta, E., & Ulfert, A.-S. (2024). AI-teaming: Redefining collaboration in the digital era. Current Opinion in Psychology, 58, 101837. https://doi.org/10.1016/j.copsyc.2024.101837
Schulze, J., & Krumm, S. (2017). The “virtual team player”: A review and initial model of knowledge, skills, abilities, and other characteristics for virtual collaboration. Organizational Psychology Review, 7.
Singleton, J. A., Watson, K. E., & Kenyon, J. J. (2022). GATES: An online step-wise tool to develop student collaborative teamwork competencies. Innovative Higher Education, 47.
Skywark, E. R., Chen, E., & Jagannathan, V. (2022). Using the design thinking process to co-create a new, interdisciplinary design thinking course to train 21st century graduate students. Frontiers in Public Health. https://doi.org/10.3389/fpubh.2021.777869
Stanford University d.school. (2018). Design thinking bootcamp bootleg. https://dschool.sfo3.digitaloceanspaces.com/documents/dschool_bootleg_deck_2018_final_sm2-6.pdf
Sullivan, F. R., & Keith, P. K. (2019). Exploring the potential of natural language processing to support microgenetic analysis of collaborative learning discussions. British Journal of Educational Technology, 50.
Tawfik, A., Schmidt, M., Payne, L., & Huang, R. (2024). Advancing understanding of learning experience design: Refining and clarifying definitions using an eDelphi study approach. Educational Technology Research and Development, 72.
Triantafyllakos, G. N., Palaigeorgiou, G. E., & Tsoukalas, I. A. (2008). We!Design: A student-centred participatory methodology for the design of educational applications. British Journal of Educational Technology, 39.
Tu, J.-C., Liu, L.-X., & Wu, K.-Y. (2018). Study on the learning effectiveness of Stanford design thinking in integrated design education. Sustainability, 10.
Wang, Q., & Rasmussen, A. (2020). CO-VID-EO: Resilient hybrid learning strategies to explicitly teach team skills in undergraduate students. Authorea Preprints. https://doi.org/10.22541/au.159526771.13374879
Wei, S., Tan, L., Zhang, Y., & Ohland, M. (2024). The effect of the emergency shift to virtual instruction on student team dynamics. European Journal of Engineering Education, 49.
Wu, S. (2024). Artificial intelligence-enhanced learning: A new paradigm in the “business data analysis and application” course. Journal of Contemporary Educational Research, 8.
Wünn, T., Sent, D., Peute, L. W. P., & Leijnen, S. (2024). Trust in artificial intelligence: Exploring the influence of model presentation and model interaction on trust in a medical setting. In Artificial intelligence: ECAI 2023 international workshops (Vol. 1948, pp. 76–86). Springer International Publishing. https://doi.org/10.1007/978-3-031-50485-3_6
Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16.
© The Author(s) 2025. This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).