Abstract: With the advent of Large Language Models (LLMs), they are becoming a larger part of people's everyday lives - in their work, personal lives, and learning. For programmers and software developers especially, learning how to best utilize LLMs as part of their work is becoming a crucial skill. This is especially important for students, and educators have a duty to prepare them to tackle obstacles and utilize AI as a tool in their programming arsenal. Research into this area normally focuses on the use of LLMs as tools for teaching and evaluation. This research takes another approach, presenting the results from integrating LLMs as a central concept of project-based learning (PBL) semester projects for students across multiple levels, from 5th-semester bachelor's to 10th-semester master's. All projects develop interactive systems, both traditional and virtual reality, and encompass a wide variety of contexts that utilize AI as a central mechanic. We show the attitudes of the participating students towards utilizing LLMs, their understanding of AI systems before and after the projects, and their overall satisfaction with utilizing relatively new and open technology like LLMs. To our knowledge, this is one of the first such meta-analyses of the long-term effects of utilizing LLMs in students' work. We demonstrate the positive impact of utilizing LLMs on students' motivation and learning and propose several best practices to avoid some of the pitfalls associated with using these tools.
Keywords: Large Language Models (LLMs), Interactive Systems, Project-based Learning (PBL), Student Education, Programming
1. Introduction
Students increasingly use LLMs for brainstorming, writing, editing, design, development, and programming. Educators must guide their use to prevent shallow learning and over-reliance while maximizing their potential to enhance STEM education, creativity, and project development (Wu, Duan & Ni, 2024). LLMs are also highly versatile tools that can be employed as parts of interactive applications, making them more robust and reliable and expanding the visualization, interaction, and generation possibilities beyond traditional procedural algorithms (Kapania et al., 2024). Harnessing their power requires creative, outside-of-the-box thinking, as the literature on human-computer interaction with LLMs is still being developed and refined (Pang et al., 2025). The best way to develop such skills is by building applications that utilize AI and facing all the decisions and problems involved. Again, educators should be the ones to facilitate this process by providing projects that tackle real-world problems using LLMs.
Like other emerging technologies such as extended reality (XR), the internet of things (IoT), and personal robotics, integrating LLMs into university curricula helps students engage with them in a safe and natural environment (Ferreira & Qureshi, 2020; Maenpaa et al., 2017; Jung, 2012). Understanding LLMs requires knowledge of mathematics, statistics, machine learning, and deep learning. While not all students receive the full theoretical background needed to develop such models, they should still be able to "look inside the black box," adjust them, and apply them effectively. Project-based learning (PBL) offers a strong framework for this, linking core subjects with modern technologies like LLMs through real-world projects (Park & Ertmer, 2007; Zhang & Zhang, 2024; Nikolov, 2024). Such projects not only strengthen learning but also enhance student portfolios. Since LLMs are expected to become integral to many future jobs, this approach prepares students for the evolving workforce (Eloundou et al., 2024).
LLMs have been successfully integrated as parts of games (Gallotta et al., 2024), learning applications (Pang et al., 2025), visualization tools (Khan et al., 2025), accounting, medical systems (Qiu et al., 2024), historical preservation and museum work (Zhang et al., 2024), and personal assistants, among others. Looking at all these possibilities, we initiated a call for students willing to select LLM-focused projects between autumn 2023 and spring 2025. The call was open to both bachelor's and master's students and was intentionally left open for interpretation, with the possibility of working with external partners. In that period, 6 projects with groups of 2 to 5 students were initiated, with topics ranging from LLMs as guides and storytellers to non-playable characters and audiences. Five of the projects were concluded successfully, with one still ongoing. A meta-analysis of the projects showed that 4 of them included an in-depth investigation of deep learning and LLM technologies, 4 were made multimodal using additional speech-to-text and text-to-speech modules, and 4 combined LLM technology with virtual reality. After completing their projects, students were surveyed about their experiences and their learning goals. An overwhelming majority of them specified that they would utilize LLMs and deep learning as a main building block in their future projects. Students felt inspired to explore deep learning further, with many reporting improved programming and game development skills. The main challenges cited were prompt engineering difficulties, hardware-related latency, and the unpredictable nature of LLM interactions.
The main contributions of this work are:
* A report and meta-analysis of the integration of LLM-based student semester projects in a project-based learning context.
* Overview of the experiences and learnings of the students involved in the projects.
* A set of recommendations and best practices that can be useful to other researchers and educators planning to propose LLM-centered projects.
2. State of the Art
2.1 LLMs in Higher Education
Large language models (LLMs) are increasingly used as teaching tools, supporting programming, writing, research, and mathematics (Ta et al., 2023). This is particularly valuable given limited teacher resources, growing class sizes, and expanding demands for higher education (Bennedsen & Caspersen, 2008). LLM-based educational agents can generate customized exercises and feedback (Estévez-Ayres et al., 2024). While they provide specific feedback, repetition and prompt refinement are often needed for more natural explanations (Letteri & Vittorini, 2024). As exercise providers, LLMs show promise by tailoring tasks to student levels and combining multiple agents for more complete learning experiences (Jury et al., 2024; Song, Zhang & Xiao, 2024). However, challenges remain, including hallucinations, lack of transparency, and limited reasoning, which hinder broader adoption. Most research focuses on LLMs as tutors, helpers, or exercise designers (Yan, Nakajima & Sawada, 2024), while fewer studies examine their role as core components in student projects. Yet, integrating new technologies and programming concepts in projects has been shown to enhance learning outcomes and accelerate the transition from novice to experienced learners (Iftikhar, Guerrero-Roldán & Mor, 2022; Wekerle, Daumiller & Kollar, 2022).
2.2 Integrating LLMs in Human-Computer Interaction Projects
More and more fields utilize LLMs as a core building block, leveraging the possibility of generating dynamic content on the fly (Kapania et al., 2024), the greater immersive properties that come with it (Cite), and the new interactions that become possible (Yang et al., 2024). The LLM-ification of the human-computer interaction space has been rapid and expanding (Pang et al., 2025), encompassing different roles, use cases, and depths of integration, and opening even more possibilities for closer interactions (Wan et al., 2024). Some examples are given next. LLMs are being used in medical interactive applications for communication with patients (Yang et al., 2024), dynamic decision systems (Rajashekar et al., 2024), and personalized health care helpers (Cardenas et al., 2024). They are also used in game development as collaborative game assistants (Sidji, Smith, & Rogerson, 2024), social VR game agents (Wan et al., 2024), or interactive game storytellers (Yong & Mitchell, 2023). In cultural preservation and museums, LLMs are used as tour guides (Wang et al., 2024), for personalizing exhibitions (Constantinides et al., 2024), and for enhancing cultural representation (Hutson, 2024). These examples demonstrate the versatility of the technology, but also the need for a deeper understanding of LLMs' correct and ethical usage, as well as the many hurdles in integrating them into applications, from user comfort and interaction problems to the need for greater hardware resources. Novice programmers need to be able to orient themselves with these new requirements, so they can more easily step into the job market.
We see that current pedagogy and HCI research into the use of LLMs focuses on their use as educational tools, rather than on how utilizing them in exercise and project assignments can push students to understand the technology more organically. Even with their complexity, generative models and LLMs are still tools, and the best way to learn a tool is to use it to build projects. In this paper we look at exactly that - how project-based learning assignments specifically directed towards understanding and utilizing LLMs have influenced students in the long run. To our knowledge, there is currently very little such research, as LLMs have only recently become widespread.
3. Projects Overview
To get a better understanding of how utilizing LLMs in project-based learning semester projects can affect students' learning and interest in machine learning, we conducted a pilot study. Between autumn 2023 and spring 2025, as part of the bachelor's and master's education of STEM students, we proposed several LLM-centric semester projects that students could choose to work on as part of their PBL education. As students are free to choose from a pool of projects or propose their own variants, the LLM-based project proposals were placed in a larger pool alongside non-LLM ones. The proposed LLM projects were intentionally left vague, to give students the possibility of pushing them in an interesting and stimulating direction. If students expressed interest, the projects could also be connected to external collaborators such as companies and municipalities. Student groups were formed based on the project and could range between two and five people. As part of their curriculum, students have two courses - one in the bachelor's and one in the master's - where they learn about machine learning and AI, and to a smaller extent about deep learning. Development and training of LLMs is not formally taught as part of the curriculum, making all learning around it based on project development. Figure 1 shows the idea of the semester project with LLMs as a core concept, together with other core technologies required by the curriculum shown as blue circles, and additional concepts that students can focus on if they align with their interests in green.
3.1 Used Technologies
Over four semesters, six student groups pursued LLM-centric projects: two fifth-semester bachelor's groups in autumn 2023; one sixth-semester bachelor's, one eighth-semester master's, and one tenth-semester master's group in spring 2024; and one sixth-semester bachelor's group in spring 2025. All projects focused on human-computer interaction and were built in Unity. Four combined LLMs with VR to enhance immersion and dynamic interactions. Another four integrated multimodal communication, using speech-to-text and text-to-speech libraries such as Whisper and Polly. Two addressed challenges of text generation speed and its effect on interaction, while three explored extending conversations with facial expressions and gestures for richer context. Finally, four projects conducted in-depth analyses of LLM models and architectures, including head-to-head comparisons, literature reviews, and preliminary user testing. Table 1 summarizes the technologies and evaluations.
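The multimodal loop shared by these projects can be sketched in a few lines. The sketch below is illustrative only: the function bodies are placeholders standing in for the actual services the students used (e.g. Whisper for speech-to-text and Polly for text-to-speech), and none of the names come from the projects themselves.

```python
# Hypothetical sketch of the speech-to-text -> LLM -> text-to-speech loop
# used by the multimodal student projects. All component functions are
# stand-ins; real projects called services such as Whisper (STT) and
# Polly (TTS), locally or via online APIs.

def speech_to_text(audio: bytes) -> str:
    # Placeholder: a real implementation would call an STT service.
    return audio.decode("utf-8")

def query_llm(system_prompt: str, user_text: str) -> str:
    # Placeholder: a real implementation would call a local or hosted LLM.
    return f"[{system_prompt}] You said: {user_text}"

def text_to_speech(text: str) -> bytes:
    # Placeholder: a real implementation would synthesize audio.
    return text.encode("utf-8")

def interaction_turn(audio_in: bytes, system_prompt: str) -> bytes:
    """One conversational turn of the multimodal pipeline."""
    user_text = speech_to_text(audio_in)
    reply = query_llm(system_prompt, user_text)
    return text_to_speech(reply)

audio_out = interaction_turn(b"Where is the exhibit?", "museum guide")
print(audio_out.decode("utf-8"))
```

In the actual projects, each stage of this loop contributed to the end-to-end latency that several groups then had to address in their designs.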
3.2 Project Themes
Two of the projects were developed with external partners in mind. The first was a collaboration with a museum to build an immersive experience in which LLMs served as guides playing roles from the historical period. Users' immersion and sense of being there were compared between scripted answers and fully dynamic LLM-powered answers from the digital guides. This project was later extended into a second project exploring whether LLMs could also drive gestures and facial expressions on non-playable characters (NPCs). There, user interactions and sense of realism were compared between LLM-powered museum guides that could and could not produce facial expressions and gestures. The third project investigated the difference between spoken and menu-based interactions with LLM NPCs in a murder mystery game, and how immersed and present users tended to be. The fourth project was directed towards LLMs as storytellers, providing commentary on players' actions. The students examined requirements for the time between actions and LLM commentary, the perceived importance of the interaction, and differences in the tonality of the LLM's output. All four projects were built as VR experiences. The fifth project took a more structured approach to addressing the longer wait time between a person speaking and an LLM processing and replying. User interface additions like loading bars and indicators were compared against spoken filler words indicating thinking. The sixth project utilized LLMs as spectators of digital board games, testing user performance and interest in the game with and without commentary from the spectators. Example visuals of the projects are given in Figure 2 and a thematic analysis of the projects is given in Table 2.
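The latency-masking mechanic explored in the fifth project can be illustrated with a minimal sketch. The threshold value and filler phrases below are invented for illustration and are not taken from the project itself.

```python
# Illustrative sketch of masking LLM response latency with spoken
# "thinking" fillers, as explored in the fifth project. The threshold
# and phrases are invented examples, not the project's actual values.
import random
from typing import Optional

FILLER_PHRASES = ["Hmm, let me think...", "Good question...", "One moment..."]

def respond_with_filler(expected_wait_s: float, threshold_s: float = 1.0) -> Optional[str]:
    """Return a spoken filler phrase when the expected LLM reply time
    exceeds the threshold; otherwise stay silent (None)."""
    if expected_wait_s > threshold_s:
        return random.choice(FILLER_PHRASES)
    return None

print(respond_with_filler(2.5))   # a filler phrase
print(respond_with_filler(0.3))   # None: reply is fast enough
```

The design choice here mirrors the project's comparison: a filler is only worthwhile when the wait is long enough to be noticed, otherwise it adds clutter.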
4. Post Project Questionnaires
After the projects, a questionnaire was provided to all student participants in the groups, of whom 10 provided feedback. We wanted to gather a self-reported overview of the learning goals and overall feedback from utilizing LLMs in the projects. The questionnaire consists of twelve 7-point Likert scale statements, with which students could agree or disagree, and two open-ended feedback questions. The questionnaire is given below.
Q1. Developing projects connected to LLMs has given us a better outlook on modern technology.
Q2. I have used the knowledge gained from the LLM projects in subsequent projects or my current work.
Q3. I see the use of LLMs in interactive systems and human-computer interaction as the next step of modern software design.
Q4. It took me a long time to understand LLM technology and how to best utilize it for the project.
Q5. I think LLMs and deep learning need to be studied more as part of the curriculum.
Q6. I have more ideas of projects connected to LLMs.
Q7. I am interested in Machine Learning and Deep Learning after working on the LLM project.
Q8. The university is equipped for projects containing strong deep learning and LLM parts for interactive applications, in terms of provided software, hardware, and infrastructure.
Q9. I use LLMs more in my work (ChatGPT, Gemini, Grok, etc.) after working on the project.
Q10. Using game engines like Unity or Unreal with LLMs is straightforward.
Q11. I feel that I am a better programmer after working on the project.
Q12. I better see all the possible problems with utilizing LLMs as part of interactive applications.
Q13. If you have other ideas for using LLMs for interactive applications, games, VR/AR, etc. please give a short overview or examples.
Q14. Feedback from the project/projects you did containing LLMs.
The results from the Likert scale questions of the questionnaire are given in Figure 3 as a column diagram. In addition, looking at specific questions connected to learning, we can see that, with some exceptions, most students felt positive about how working on LLM-centered projects helped and motivated them (Figure 4).
5. Lessons Learned and Best Practices
The feedback from students on their projects demonstrated that they enjoyed working with this new technology and had the opportunity to delve deep into how it can best be compared, implemented, and tested. It also shows the need to educate students more deeply, not only in using LLMs in their education, but in how to best utilize them as part of larger projects. Multiple students expressed the opinion that they were missing foundational knowledge of how LLMs are constructed and how they work, leading to a lot of trial and error and frustration. This was especially present in frustrations towards the prompt engineering required to make the LLM behave as needed. Answers such as the ones below show the need for better LLM literacy as part of the curriculum, as students end up relying on online sources with conflicting or outright incorrect information.
* "Prompt engineering requires a lot more work than you may initially think. You can try to account for as many situations as possible, but during tests you will always be made aware of something you hadn't considered."
* "A lot of time spent prompt engineering, which might not lead to as much learning. (However, it clearly shows the weaknesses of the model)."
* "I really liked working with it for the murder mystery we did, but prompt engineering was quite an issue because there were no guidelines for how to create good prompts. We had to change the system prompt a lot in order to minimize hallucinations, but often it was random..."
Another problem pointed out by students is the need for more hardware resources. Four of the projects had to utilize online API services from OpenAI, Amazon, and Google, because the students' own hardware was not sufficient to run both the interactive application and a local version of the LLM. This opens up problems with external expenses and with the data privacy of users of the interactive applications, as these online LLM providers have been shown to utilize user data as part of training their models (Yao et al., 2024). Universities should extend the services they provide for training deep learning models to include easy, secure access to installing and calling LLM models, until sufficiently small models become good enough to satisfy different use cases.
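The local-versus-hosted trade-off can be isolated behind a thin abstraction, so a project can switch backends without touching its interaction logic. The sketch below is a minimal illustration under invented names; the endpoints and classes are hypothetical, and `generate` is a placeholder rather than a real API call.

```python
# Sketch of a minimal backend abstraction letting a project swap a
# hosted LLM API for a locally served model without changing game logic.
# Class names and endpoints are illustrative, not a real API.
from dataclasses import dataclass

@dataclass
class LLMBackend:
    name: str
    endpoint: str            # e.g. a localhost server or a cloud URL
    sends_data_offsite: bool  # relevant for user data privacy

    def generate(self, prompt: str) -> str:
        # Placeholder: a real backend would POST the prompt to `endpoint`.
        return f"{self.name} reply to: {prompt}"

LOCAL = LLMBackend("local-llm", "http://localhost:8080/v1", False)
HOSTED = LLMBackend("hosted-llm", "https://api.example.com/v1", True)

def pick_backend(privacy_sensitive: bool) -> LLMBackend:
    """Prefer the local model whenever user data must stay on-device."""
    return LOCAL if privacy_sensitive else HOSTED

backend = pick_backend(privacy_sensitive=True)
print(backend.name, backend.sends_data_offsite)
```

Keeping the privacy property explicit in the backend description makes the expense-versus-privacy decision visible in code rather than buried in configuration.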
The way LLM integration in interactive applications is currently done is also a point of contention. Students expressed frustration with the state of the art, where most projects implement LLMs as an afterthought or as a direct replacement for existing techniques and technologies. This makes it hard to explore truly innovative uses and makes projects feel redundant or overcomplicated, as the answers presented below illustrate. LLM integration should be pushed not just to substitute traditional toolchains but to open up new ones.
* "...LLM integration can feel like more of an afterthought to the overall experience. (like a separate chatbot within a normal game instead of a completely new experience where the LLM is more integral)".
* "LLMs are able to enhance understanding and provide quick feedback to users, how could this compare to traditional methods in interactive systems. what holes can an LLM fill and which are better left alone."
* "...I feel that some aspects of projects should consider if LLMs really are the best option, as I feel like it should not replace every type of handcrafted experiences or be used to replace more traditional ways of doing stuff."
The findings show that working with LLMs can be beneficial to the development and interests of students, which aligns with the broader view of the technology as a development and design tool for exercises and projects in education (Kharrufa et al., 2024). Most research in pedagogy focuses on LLMs as part of exercise and course design, evaluation, or as helpers and teaching assistants for students (French et al., 2023), but we show that generative models can be even more powerful as development tools in the hands of students. This gives them first-hand experience of the strengths and weaknesses of the technology.
6. Conclusion
In this paper, we presented the findings from proposing tightly integrated LLM semester projects in a PBL curriculum. In the span of four semesters, 6 projects were undertaken, spanning different use cases for LLMs, from guides and conversational agents to storytellers and audiences. All projects were completed, and prototypes were tested with users. As part of the projects' research and development, different local and online LLMs were tested, as well as integration with multimodal interactions like speech, gaze, body positioning, and gestures. After the completion of the projects, students were given a self-reporting questionnaire to better gauge their perceived levels of understanding of programming, LLMs, and deep learning, and their interest in the subjects. Overall, students were very positive about their experience and felt that it resulted in learning new and useful skills. Negative sentiments were directed towards insufficient teaching of deep learning and LLMs in the curriculum, a lack of appropriate hardware, and the need for better implementation tools. The findings of this research point to a growing need for better integration of statistics, deep learning, and LLMs into curricula that have not traditionally offered these disciplines, so that students can better take advantage of modern technology.
Ethics Declaration
The paper does not require ethics clearance. No privacy-breaching data was gathered from participants.
AI Declaration
AI tools have been used only for spellchecking, punctuation, and the shortening/clarification of sentences. No fully AI-generated text or images are used in this paper.
References
Bennedsen, J., & Caspersen, M. E. (2008). Exposing the programming process. Reflections on the Teaching of Programming: Methods and Implementations, 6-16.
Cardenas, L., Parajes, K., Zhu, M., & Zhai, S. (2024, January). Autohealth: Advanced llm-empowered wearable personalized medical butler for parkinson's disease management. In 2024 IEEE 14th Annual Computing and Communication Workshop and Conference (CCWC) (pp. 0375-0379). IEEE.
Constantinides, N., Constantinides, A., Koukopoulos, D., Fidas, C., & Belk, M. (2024, June). Culturai: Exploring mixed reality art exhibitions with large language models for personalized immersive experiences. In Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization (pp. 102-105).
Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2024). GPTs are GPTs: Labor market impact potential of LLMs. Science, 384(6702), 1306-1308.
Estévez-Ayres, I., Callejo, P., Hombrados-Herrera, M. Á., Alario-Hoyos, C., & Delgado Kloos, C. (2024). Evaluation of LLM tools for feedback generation in a course on concurrent programming. International Journal of Artificial Intelligence in Education, 1-17.
Ferreira, J. M. M., & Qureshi, Z. I. (2020, April). Use of XR technologies to bridge the gap between Higher Education and Continuing Education. In 2020 IEEE Global Engineering Education Conference (EDUCON) (pp. 913-918). IEEE.
French, F., Levi, D., Maczo, C., Simonaityte, A., Triantafyllidis, S. and Varda, G., 2023. Creative use of OpenAI in education: case studies from game development. Multimodal Technologies and Interaction, 7(8), p.81.
Gallotta, R., Todd, G., Zammit, M., Earle, S., Liapis, A., Togelius, J., & Yannakakis, G. N. (2024). Large language models and games: A survey and roadmap. IEEE Transactions on Games.
Hutson, J. (2024). Combining Large Language Models and Immersive Technologies to Represent Cultural Heritage in the Metaverse Context. In Augmented and Virtual Reality in the Metaverse (pp. 265-281). Cham: Springer Nature Switzerland.
Iftikhar, S., Guerrero-Roldán, A. E., & Mor, E. (2022). Practice promotes learning: Analyzing students' acceptance of a learning-by-doing online programming learning tool. Applied Sciences, 12(24), 12613.
Jung, S. (2012). Experiences in developing an experimental robotics course program for undergraduate education. IEEE Transactions on Education, 56(1), 129-136.
Jury, B., Lorusso, A., Leinonen, J., Denny, P., & Luxton-Reilly, A. (2024, January). Evaluating llm-generated worked examples in an introductory programming course. In Proceedings of the 26th Australasian computing education conference (pp. 77-86).
Kapania, S., Wang, R., Li, T. J. J., Li, T., & Shen, H. (2024). "I'm categorizing LLM as a productivity tool": Examining ethics of LLM use in HCI research practices. arXiv preprint arXiv:2403.19876.
Khan, S. R., Chandak, V., & Mukherjea, S. (2025). Evaluating LLMs for visualization generation and understanding. Discover Data, 3(1), 15.
Kharrufa, A., Alghamdi, S., Aziz, A. and Bull, C., 2024. LLMs Integration in Software Engineering Team Projects: Roles, Impact, and a Pedagogical Design Space for AI Tools in Computing Education. arXiv preprint arXiv:2410.23069.
Letteri, I., & Vittorini, P. (2024). Exploring the Impact of LLM-Generated Feedback: Evaluation from Professors and Students in Data Science Courses. In International Conference in Methodologies and Intelligent Systems for Technology Enhanced Learning (pp. 11-20). Cham: Springer Nature Switzerland.
Maenpaa, H., Varjonen, S., Hellas, A., Tarkoma, S., & Mannisto, T. (2017, May). Assessing IOT projects in university education-A framework for problem-based learning. In 2017 IEEE/ACM 39th International Conference on Software Engineering: Software Engineering Education and Training Track (ICSE-SEET) (pp. 37-46). IEEE.
Nikolov, I. (2024). Charting a New Course: Encouraging Programming Class Participation and Interest Through Multimedia and Flipped Classroom Activities. In International Conference in Methodologies and Intelligent Systems for Technology Enhanced Learning (pp. 234-245). Cham: Springer Nature Switzerland.
Pang, R. Y., Schroeder, H., Smith, K. S., Barocas, S., Xiao, Z., Tseng, E., & Bragg, D. (2025). Understanding the LLM-ification of CHI: Unpacking the Impact of LLMs at CHI through a Systematic Literature Review. arXiv preprint arXiv:2501.12557.
Park, S. H., & Ertmer, P. A. (2007). Impact of problem-based learning (PBL) on teachers' beliefs regarding technology use. Journal of research on technology in education, 40(2), 247-267.
Qiu, J., Lam, K., Li, G., Acharya, A., Wong, T. Y., Darzi, A., ... & Topol, E. J. (2024). LLM-based agentic systems in medicine and healthcare. Nature Machine Intelligence, 6(12), 1418-1420.
Rajashekar, N. C., Shin, Y. E., Pu, Y., Chung, S., You, K., Giuffre, M., ... & Shung, D. (2024). Human-algorithmic interaction using a large language model-augmented artificial intelligence clinical decision support system. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (pp. 1-20).
Sidji, M., Smith, W., & Rogerson, M. J. (2024). Human-ai collaboration in cooperative games: A study of playing codenames with an llm assistant. Proceedings of the ACM on Human-Computer Interaction, 8(CHI PLAY), 1-25.
Song, T., Zhang, H., & Xiao, Y. (2024). A High-Quality Generation Approach for Educational Programming Projects Using LLM. IEEE Transactions on Learning Technologies.
Ta, N. B. D., Nguyen, H. G. P., & Gottipati, S. (2023). ExGen: Ready-to-use exercise generation in introductory programming courses. In International Conference on Computers in Education.
undreamai (2025). GitHub - undreamai/LLMUnity: Create characters in Unity with LLMs! [online] GitHub. Available at: https://github.com/undreamai/LLMUnity [Accessed 29 Aug. 2025].
Wan, H., Zhang, J., Suria, A. A., Yao, B., Wang, D., Coady, Y., & Prpa, M. (2024, May). Building llm-based ai agents in social virtual reality. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (pp. 1-7).
Wang, Z., Yuan, L. P., Wang, L., Jiang, B., & Zeng, W. (2024). Virtuwander: Enhancing multi-modal interaction for virtual tour guidance through large language models. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (pp. 1-20).
Wekerle, C., Daumiller, M., & Kollar, I. (2022). Using digital technology to promote higher education learning: The importance of different learning activities and their relations to learning outcomes. Journal of Research on Technology in Education, 54(1), 1-17.
Wu, X., Duan, R., & Ni, J. (2024). Unveiling security, privacy, and ethical concerns of ChatGPT. Journal of Information and Intelligence, 2(2), 102-115.
Yan, W., Nakajima, T., & Sawada, R. (2024). Benefits and challenges of collaboration between students and conversational generative artificial intelligence in programming learning: an empirical case study. Education Sciences, 14(4), 433.
Yang, Z., Xu, X., Yao, B., Rogers, E., Zhang, S., Intille, S., ... & Wang, D. (2024). Talk2care: An llm-based voice assistant for communication between healthcare providers and older adults. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 8(2), 1-35.
Yao, Y., Duan, J., Xu, K., Cai, Y., Sun, Z., & Zhang, Y. (2024). A survey on large language model (llm) security and privacy: The good, the bad, and the ugly. High-Confidence Computing, 100211.
Yong, Q. R., & Mitchell, A. (2023, October). From playing the story to gaming the system: Repeat experiences of a large language model-based interactive story. In International Conference on Interactive Digital Storytelling (pp. 395-409). Cham: Springer Nature Switzerland.
Zhang, J., Xiang, R., Kuang, Z., Wang, B., & Li, Y. (2024). ArchGPT: harnessing large language models for supporting renovation and conservation of traditional architectural heritage. Heritage Science, 12(1), 220.
Zhang, L., & Zhang, W. (2024). Integrating large language models into project-based learning based on self-determination theory. Interactive Learning Environments, 1-13.
Copyright Academic Conferences International Limited 2025