Abstract: The AI Exploratorium is an interactive, gamified learning environment set in a physical space, designed to develop AI literacy among high-school students. Through hands-on challenges, students train AI models, analyse their outcomes, and reflect on ethical implications. The AI Exploratorium introduces key AI concepts for selected areas of AI (e.g. machine learning for image recognition) in the context of real-world use cases. It builds on experiential learning, adding gamification principles and the spatial design of an interactive exhibition with digital and analogue materials. It is based on a structured AI literacy framework, developed and refined iteratively. The implementation is based on self-directed exploration and problem-based learning, and includes four main stations, each presenting a challenge (an "AI puzzle"). In the first challenge, participants train an image-recognition AI model for a simulated autonomous car. They choose and label training data and test their model in a simulated test drive, competing to be the winning team. Subsequent challenges add new insights; e.g. a deepfake-detection card game introduces AI image generation. The "Catch Me if You Can" challenge addresses AI in public surveillance, deepening the topic of ethical issues in AI use. In the final challenge, participants develop their own AI application (no coding skills needed) for a personally meaningful purpose. Reflection quizzes after each challenge reinforce the acquired learnings. A projected visualisation of a "black box of AI" displays key learnings, initially concealed and gradually revealed with each solved challenge. A pilot-test evaluation included semi-structured observation, short interviews, and a questionnaire assessing engagement and knowledge acquisition. Preliminary findings indicate that the AI Exploratorium effectively enhances students' understanding of AI concepts while fostering critical reflection on ethical considerations. This approach emphasises the role of an exhibition-like learning environment in developing critical AI literacy, and aims to spark discussions on making AI concepts experienceable and AI literacy programmes implementable in different learning environments.
Keywords: Critical AI literacy, Experiential learning, Interactive exhibition, Gamification
1. Introduction and Theoretical Framework
The rapid spread of AI in society calls for developing AI literacy among citizens, particularly young people. Recent research shows that 62% of teenagers in Germany use AI, mostly for homework, fun and information (Mpfs, 2024). Despite using AI systems in their daily lives, they often lack the understanding needed to recognise these interactions, comprehend underlying mechanisms or grasp ethical issues (Long et al, 2021). This gap in AI literacy highlights the need for effective and inclusive learning experiences (Kasinidou, 2023) and competences to "critically evaluate AI technologies" (Long and Magerko, 2020) as a crucial addition to purely technical fundamentals.
Critical AI literacy involves the capacity for critical reflection and understanding of risks of inadequate AI use. Empowering learners to critically evaluate AI systems, understand their role in their lives, and to challenge them is crucial (Velander et al, 2024). Addressing ethical concerns and societal impacts, often under-investigated in AI education for youth (Zhou et al, 2020), is central to this approach.
Developing AI literacy interventions for high-school students, especially those addressing abstract, ethical, and critical dimensions, presents significant challenges. The disparity between young people's everyday experiences and AI's technical complexities contributes to this difficulty. Complex technical and abstract concepts can be difficult for young learners, particularly those without prior knowledge in computer science, requiring appropriate teaching tools and pedagogy to scaffold learners' understanding of AI (Long & Magerko, 2020).
To overcome these challenges and create accessible and engaging learning experiences that simplify and demystify AI concepts for young learners, several pedagogical approaches and design considerations have been proposed (Sanusi et al, 2022). Experiential learning approaches, especially those including gamification, offer a promising direction. A systematic literature review of tools and interventions for teaching AI competences (Ng et al, 2023) found several studies that applied experiential learning through project-based activities, including problem-solving and tinkering, in which learners create AI artefacts. Another important element is social interaction and collaboration, with peer discussion helping to spark reflection (Dangol et al, 2024).
Gamified elements can be easily integrated into experiential learning. Gamification, defined as the application of game design elements to non-game contexts (Deterding et al, 2011), strives to enhance intrinsic motivation (De-Marcos et al, 2014). Incorporating elements such as points, leaderboards, and rewards can create a sense of progress and sustain interest, making complex concepts more approachable (Ng et al, 2024). Drawing on theories like self-determination theory (SDT), the motivational power of gamification can be explained through the satisfaction of three psychological needs - autonomy, competence and relatedness (Ryan & Deci, 2000). Informal learning spaces, similar to those of museums and interactive exhibitions, combining experiential learning with gamification, could address these needs well. They can offer self-directed exploration, where learners follow their own interests, supporting autonomy. Directly experienceable learnings contribute to competence. They easily integrate collaboration and social interaction, satisfying the need for relatedness. They also relate to creativity and embodied interaction as other core principles of informal learning spaces (Long et al, 2021a). Embodied interaction, described as "the creation, manipulation, and sharing of meaning through engaged interaction with artefacts" (Dourish, 2004), can help make abstract concepts tangible, lower entry barriers, and drive engagement (Long et al, 2021b). Multimodal artefacts (objects, texts, images, space) create an enticing atmosphere and convey learnings (Kampschulte and Parchmann, 2015).
In spite of the promise of these strategies and successful examples in different domains, they have not yet been applied to the development of critical AI literacy. In this paper, we describe our approach to addressing this gap by designing a gamified interactive exhibition for developing critical AI literacy for youth, termed the AI Exploratorium. The focus is on its conceptual design and rationale, and a prototypical implementation that has been tested in a real-world pilot with users. The preliminary insights from the pilot test are also discussed, including their limitations.
2. Conceptual Design of the AI Exploratorium
The main goal of the AI Exploratorium was to create an interactive, informal learning environment in physical space, where participants develop critical AI literacy by engaging with AI models in real-world application contexts. This should engage participants and enable them to develop key competences for a critically informed understanding and use of AI. This comprises three main objectives: i) getting to know and understand the fundamentally probabilistic nature of AI systems based on patterns from historical data, ii) understanding basic training principles of AI systems and the main factors influencing the reliability of results, and iii) enabling reflection on the trade-offs between opportunities and harms of everyday AI use. To achieve these, a set of specific key competences was selected from a structured AI literacy competence framework developed in the iKIDO project (iKIDO Web; an early version is described in Gnoth & Novak, 2025).
Experiential learning (Kolb et al, 2014) and problem-based learning (Sanusi et al, 2022) serve as didactical frameworks for designing interactive experiences, allowing participants to explore, experiment with, and reflect upon the main working principles of AI systems normally hidden from users. Self-determination theory (Ryan & Deci, 2000) was combined with gamification to design engaging digital and analogue artefacts serving different motivational mechanisms, guiding attention, and supporting reflection and autonomy. The core learning design contains a set of interactive experiments (challenges), designed as self-directed explorations with minimal facilitator guidance. Each challenge allows the participants to explore main principles, risks and limitations of AI through real use cases (e.g. training an AI model to enable an autonomous car to recognise pedestrians, signs and cyclists), embedded in a gamified, playful scenario (e.g. a puzzle to be solved, a game to be played). We selected AI use cases and topics relatable to high-school students. In each challenge, participants go through cycles of the four stages of experiential learning: i) concrete experience, ii) reflective observation, iii) conceptual internalisation of the observations and iv) active experimentation. To stimulate reflection, interventions for individual and group reflection are combined with co-operative and competitive gamification (e.g. team quests and quizzes, team competitions). Design thinking methods support constructivist learning, where participants generate and implement their own ideas (e.g. building their own AI application).
The informal and explorative character of the learning experience, as well as engagement through embodied interaction (Long et al, 2021a), are emphasised by designing the learning environment as an interactive exhibition, where each challenge is represented as a station in physical space, accompanied by physical artefacts designed to guide attention and stimulate learning (posters, objects). Designing these exhibition-like artefacts was informed and inspired by examples from museums, curators, designers and interactive exhibition spaces, regarding practical experiences, materials, and interaction mechanics (Cole, 2019; Middleton, 2024; Taylor, 2021). The design process was particularly informed by the scientific framework for designing public interaction spaces that include AI, proposed by Long et al. (2019). Accordingly, the AI Exploratorium was designed to be flexible and adaptable: all parts can be easily moved and partially modified, to accommodate the constraints of different physical spaces and participant needs. We prioritised accessible, easily installed solutions that can be maintained without special expertise and with a limited budget. All files, software, and materials are easy to download, set up, and print out, to ensure the AI Exploratorium is reproducible by educators from various communities.
To make AI systems transparently experienceable, we introduced a purpose-built didactic tool, the AI Workbench (Laufer & Novak, 2025), a no-code tool containing a pretrained image-recognition AI model which can be fine-tuned via training to build a specific AI application. It provides customisable multimedia outputs and a dual-view interface that demonstrates the problem of AI opacity by enabling users to explore both the developer and the user perspective. We balanced highly interactive elements with more static ones to prevent overwhelming the participants, and used familiar interactions (e.g. a card game). Collaboration and competition were balanced to encourage social interaction. Facilitator guidance was deliberately limited (e.g. onboarding, questions, technical support) so as not to disrupt the self-directed experience.
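The AI Workbench itself is a no-code tool and its implementation is not detailed in this paper; as a hedged illustration of the general technique it exposes to participants - fine-tuning a pretrained image-recognition model on a small, user-labelled dataset - a minimal PyTorch sketch of the kind of pipeline such a tool wraps might look as follows (all paths, the backbone choice, and the hyperparameters are hypothetical):

```python
# Minimal transfer-learning sketch (illustrative only; the AI Workbench's
# actual backend is not specified in the paper).
import torch
from torch import nn
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing matching the pretrained backbone
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: one subfolder per label, e.g. data/train/pedestrian/...
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():          # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # new head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                # a few epochs suffice for small datasets
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

Freezing the backbone and retraining only the classification head is what makes training fast enough for a workshop setting on a small dataset; the Workbench hides these steps behind a label-upload-train-test interface.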
The spatial arrangement of the stations (Figure 1), areas and accompanying artefacts (e.g. posters) was designed to stimulate self-directed exploration with clear wayfinding (colour-coded posters and table markers for each station), creating a cohesive experience while allowing individual pathways.
A range of gamification techniques addresses different user types responding to different motivational drivers, informed by Bartle's Player Types model (Bartle, 1996) and Falk's Taxonomy of Museum Visitors (Falk, 2016). Explorers' curiosity and eagerness for discovery are addressed by interactive artefacts enticing hands-on engagement, the gradual revealing of elements and information, and additional activities outside the main flow. Challenges solved in pairs or small teams engage socialisers. Achievers are motivated by collecting points, competing on leaderboards and solving interactive quizzes. Competitive elements appeal to so-called killers. Additional learning resources (also available after the workshop) resonate with professional hobbyists. Visually appealing posters appeal to experience seekers, while a quiet sitting area with snacks caters for rechargers.
3. Prototypical Implementation: Challenges and Artefacts
The AI Exploratorium prototype features four challenges, each with its own station in the exhibition space, plus three additional stations (Side Quest, Relax Zone, Projection Visualisation). Upon arrival, participants are onboarded. A facilitator explains how the AI Exploratorium works, briefly asks about participants' motivation, expectations and previous knowledge, and hands out a Participant Card. The card contains a statement for each challenge on which participants express their opinion using coloured stickers before they start, and again after each challenge. The card also contains instructions on how the AI Exploratorium works. Once onboarded, everyone begins with the same challenge - Autonomous Driving - ensuring a shared basic understanding of training AI models for image recognition. Afterwards, they choose their next challenges. Each challenge is accompanied by a large visual poster showing the three main steps in solving it, and a printout with detailed instructions. After each challenge, participants solve a short Reflection Quiz, unlocking a hint with which they reveal a part of the Projection Visualisation, a metaphor for the "black box" of AI that they gradually uncover (Figure 2). Each part provides a resource (e.g. a game, video, or interactive story) for learning more. At the end, all hidden parts are revealed and the resources stay accessible via a QR code.
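The paper does not prescribe an implementation of the quiz-unlock mechanic; the following minimal sketch only illustrates its logic, assuming one revealed segment per solved Reflection Quiz (all segment names and hint texts are hypothetical):

```python
# Sketch of the quiz-unlock mechanic (hypothetical implementation; the
# projection software used in the exhibit is not specified in the paper).
SEGMENTS = {
    "autonomous_driving": "How training data shapes model behaviour",
    "catch_me_if_you_can": "AI in public surveillance and its ethical limits",
    "deepfake_or_not": "Generative AI and image manipulation",
    "free_experimentation": "Building AI applications without code",
}

revealed: set[str] = set()

def solve_quiz(challenge: str, quiz_passed: bool) -> str | None:
    """Reveal the matching segment once the quiz is passed; return its hint."""
    if quiz_passed and challenge in SEGMENTS:
        revealed.add(challenge)
        return SEGMENTS[challenge]
    return None

def black_box_state() -> dict[str, bool]:
    """True = segment visible in the projection, False = still concealed."""
    return {name: name in revealed for name in SEGMENTS}
```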
3.1 Challenge 1 - Autonomous Driving
In the first challenge, participants train an image recognition model for a self-driving car using the AI Workbench (Figure 3). A training dataset of 50 traffic-related photos (cars, buses, pedestrians, street signs, etc.) (Cordts et al, 2016) is provided. Participants label and upload the training data, test their model with test images, and improve it. Working in pairs or small teams, they discuss insights and the model's mistakes. They also test other teams' models by selecting tricky images that they expect the model to misclassify, like oddly angled shots. Finally, they see their AI model perform in a simulated driving test: an interface showing a car cockpit from the driver's perspective, with traffic images on which the model is tested. Test scores are displayed on a leaderboard.
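The driving test and leaderboard are part of the AI Workbench interface; as an illustrative sketch rather than the actual implementation, the score feeding the leaderboard could be computed as follows, assuming a trained model, the preprocessing pipeline from the earlier sketch, and a folder of labelled test images (all names hypothetical):

```python
# Sketch of the simulated driving-test scoring (illustrative only).
import torch
from torchvision import datasets

@torch.no_grad()
def driving_test_score(model, test_dir: str, preprocess) -> float:
    """Classify the fixed set of traffic test images and return % correct."""
    model.eval()
    test_set = datasets.ImageFolder(test_dir, transform=preprocess)
    loader = torch.utils.data.DataLoader(test_set, batch_size=16)
    correct = total = 0
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return 100.0 * correct / total

# Leaderboard: highest accuracy wins the simulated test drive
scores = {"Team A": 92.0, "Team B": 88.0}   # hypothetical results
for rank, (team, score) in enumerate(
        sorted(scores.items(), key=lambda kv: -kv[1]), start=1):
    print(f"{rank}. {team}: {score:.1f}%")
```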
3.2 Challenge 2 - Catch Me if You Can (CMIYC)
This challenge combines analogue and digital artefacts (Figure 4) in a gamified narrative: a fictional character, Jonas, is hiding in a city filled with surveillance cameras. Using the AI Workbench, participants train a model to recognise Jonas based on a provided dataset containing 50 photos. They sort and label the photos and train the model so that it distinguishes photos with and without Jonas. They test and improve it until they are content with the outcome. Then, their model processes a set of prepared photos from surveillance cameras in the city to find where Jonas was seen. Using a printed map of the city with marked cameras, they trace Jonas's path by placing pins where the model found him. Participants compete against each other to find Jonas first.
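As a hedged sketch of the detection step (which, in the exhibit, runs inside the AI Workbench), a trained binary classifier could scan the prepared surveillance photos camera by camera; the camera IDs, file paths, and class index below are hypothetical:

```python
# Sketch of the "Catch Me if You Can" detection step (illustrative only).
import torch
from PIL import Image

CAMERA_PHOTOS = {               # camera ID -> photo taken by that camera
    "cam_01": "surveillance/cam_01.jpg",
    "cam_02": "surveillance/cam_02.jpg",
    "cam_03": "surveillance/cam_03.jpg",
}

@torch.no_grad()
def trace_jonas(model, preprocess, jonas_class: int = 1) -> list[str]:
    """Return the cameras where the model believes it spotted Jonas."""
    model.eval()
    sightings = []
    for camera, path in CAMERA_PHOTOS.items():
        image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        if model(image).argmax(dim=1).item() == jonas_class:
            sightings.append(camera)
    return sightings   # participants pin these cameras on the printed map
```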
3.3 Challenge 3 - Deepfake or not?
This challenge introduces the ethical issues of AI image generation through a card game (Figure 5). Each card contains an image on one side and a coloured dot on the back. The images stem from two publicly available databases - one of real people, and the other of deepfakes (Karras et al, 2019). All photos centre on the person's face. We picked 37 deepfakes and 38 real photos, balanced for diversity in age, appearance, culture, and gender. The deepfake selection balanced easily recognisable examples with harder-to-spot ones. Participants face off in pairs, each sorting their deck into "deepfake" and "not deepfake" stacks in 30 seconds. The time limit reinforces the gamified aspect and reflects real-world conditions, where often only a few seconds are available to process an image seen online. Answers are checked using a key in an envelope to decode the coloured dots. A magnifying glass allows participants to inspect details that betray deepfakes, sparking discussion about detection tactics.
3.4 Challenge 4 - Free Experimentation
Here, participants build their own image-recognition AI model for a purpose of their choice (Figure 6). A Feasibility Checklist helps them develop an idea for an AI application, check its purpose, and assess the feasibility of implementing it with the AI Workbench. They collect training data from online sources, train the model, define the output for each recognised image class (text, image, audio or video), and test and improve it before other teams try it out.
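The mapping from recognised classes to multimedia outputs is a no-code feature of the AI Workbench; a minimal sketch of the underlying data structure, with a hypothetical example application (a plant-care helper), might look as follows:

```python
# Sketch of the Free Experimentation output mapping (illustrative only;
# the AI Workbench exposes this as a no-code configuration step).
from dataclasses import dataclass

@dataclass
class Output:
    kind: str     # "text", "image", "audio" or "video"
    payload: str  # message text or media file path

# Hypothetical participant-defined application: a plant-care helper
outputs = {
    "healthy_plant": Output("text", "Your plant looks great!"),
    "dry_plant": Output("audio", "media/water_me.mp3"),
    "pest_damage": Output("video", "media/pest_treatment.mp4"),
}

def react(predicted_class: str) -> Output | None:
    """Look up the output the participants configured for this class."""
    return outputs.get(predicted_class)
```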
3.5 Side Quest
This part requires a lower level of cognitive effort and focuses on play by providing two AI games based on image recognition: Rock-Paper-Scissors (iKIDO Web), where students train a model to recognise three hand gestures and use it to play against the computer, and Google's Quick, Draw! (Google, 2016), where they sketch prompted images while the AI system guesses the drawings.
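The exhibit's Rock-Paper-Scissors game is available via the iKIDO website; as an illustrative sketch of the game loop it implies (not the actual implementation), assuming the player's move comes from the trained gesture-recognition model:

```python
# Sketch of a Rock-Paper-Scissors round against the computer (illustrative).
import random

BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def play_round(recognised_gesture: str) -> str:
    """Decide a round; 'recognised_gesture' is the model's label output."""
    computer = random.choice(list(BEATS))
    if recognised_gesture == computer:
        return f"Draw: both played {computer}."
    if BEATS[recognised_gesture] == computer:
        return f"You win: {recognised_gesture} beats {computer}."
    return f"Computer wins: {computer} beats {recognised_gesture}."

print(play_round("rock"))
```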
4. Pilot Test
4.1 Methodology
The pilot test took place in March 2025 with seven male participants aged 12-14. Participation was voluntary, which likely led to self-selection. Due to the lack of female participants and the small sample, the results should be interpreted with caution. Follow-up studies should use better participant recruitment strategies so as to attract a more diverse sample.
We observed the interaction, collected questionnaire responses, and conducted short semi-structured interviews. Signed parental consent forms were collected, and participants consented to take part in the activities. Assessment methods were adapted to avoid interfering with the experience, while being suitable for the age group.
The observational protocol included three areas: general engagement (e.g., "Are participants actively participating? Are they distracted or bored?"), proactiveness (e.g., "Do participants explore challenges independently?"), and group dynamics (e.g., "How do they interact with each other and facilitators?"). Pain points and improvement areas (e.g., technical difficulties, misunderstandings) were noted. A semi-structured participant observation approach was used, conducted by two observers. Participants also completed a short questionnaire with 12 questions about perceived helpfulness of the AI Exploratorium for developing AI competences. Responses were collected on a 5-point Likert scale, from "not at all helpful" to "extremely helpful". Participants were then briefly interviewed regarding their experience, the learnings, the artefacts, and suggestions for improvement.
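Given the small sample, responses were later summarised as tendencies rather than statistics (see Section 4.2); a minimal sketch of such a tally, with hypothetical response data and assumed intermediate scale labels (only the scale endpoints are reported above), might look as follows:

```python
# Sketch of summarising 5-point Likert responses for one item (illustrative;
# responses and intermediate scale labels are hypothetical).
from collections import Counter

SCALE = ["not at all helpful", "slightly helpful", "somewhat helpful",
         "very helpful", "extremely helpful"]   # intermediate labels assumed

item_responses = [5, 4, 4, 5, 3, 4]   # six hypothetical respondents, coded 1-5

counts = Counter(item_responses)
for code, label in enumerate(SCALE, start=1):
    print(f"{code} ({label}): {counts.get(code, 0)}")
positive = sum(1 for r in item_responses if r >= 4)
print(f"share 'very'/'extremely' helpful: {positive}/{len(item_responses)}")
```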
4.2 Results
Combining observation with interview feedback provided deeper insights into participants' behaviour and subjective experience. Initially curious but hesitant, participants sought facilitator guidance rather than using the written instructions. As they progressed, they became more self-directed. In interviews, one participant explicitly appreciated "that we could do so much by ourselves", highlighting the value of autonomy in our learning design.
The spatial layout with colour-coded stations effectively supported self-directed movement. The main posters were used for a quick overview of a challenge, and the Fun Facts Posters when seeking an answer to a quiz question. Although they did not know each other beforehand, participants readily formed pairs that persisted across challenges. Within pairs, they discussed strategies for refining their AI models, and exchanged tactics across teams. They were especially enthusiastic when comparing results, showing the value of competitive elements. One participant wished to "visit the AI Exploratorium with my friends and create AI models with them", confirming the role of collaboration for engagement.
We noticed slight differences in participants' approaches across challenges. In the Autonomous Driving challenge, most were highly concentrated, asking many questions and seeking guidance, likely because they were familiarising themselves with the tool and the principles of training an image-recognition model. The Deepfake or not? challenge stood out as particularly memorable. Participants replayed the card game multiple times, improving at spotting deepfakes and understanding the potential for AI misuse. One participant commented in the interview: "AI could have positive application potential, but we also learned how it can be heavily misused and can have negative aspects". The CMIYC challenge was much appreciated for its gamified narrative element. One participant suggested adding a story to other challenges to make them even more engaging. The Free Experimentation challenge initially required patience to find a feasible idea, but then spurred creativity-driven learning, leading to satisfaction with the problem-solving process. The Relax Zone was appreciated for offering a break and informal conversation, while the Side Quest was popular as a relaxed activity, sparking laughter and friendly competition.
Participants engaged thoughtfully with the Reflection Quizzes, not simply guessing the answers. They mostly answered correctly, indicating that the quizzes worked as learning reinforcement. In the Projection Visualisation, they were interested in revealing the hidden parts after a challenge. Some asked where they could later access the revealed resources, showing authentic curiosity. For both quizzes and the Projection Visualisation, occasional reminders were needed, to which participants reacted positively. They often asked for more stickers to reply to the statements on the Participant Card after a given challenge, with stickers acting as badges for a successfully solved challenge.
Regarding conceptual understanding, when confronted with a misclassification by an AI model, participants correctly identified potential causes, such as visual similarities between different categories, demonstrating an understanding of pattern-recognition principles. Interview data strongly supports this observation: one participant, who initially thought "AI is something too complex for me to understand", discovered they could "create different AI models for different purposes", while acknowledging that this typically requires coding skills. Several participants mentioned different AI use cases they had learned about, indicating successful demystification of AI. Questionnaires filled out by six of the seven participants support the impressions from the interviews. Since the sample is too small for quantitative analyses, we outline the answer tendencies. All six respondents reported that the AI Exploratorium was either very or extremely helpful for understanding what is meant by the term AI, for understanding that an AI system cannot know by itself what an object is but must be trained, and for understanding the risks of deepfakes. Five of the six found the AI Exploratorium very or extremely helpful for understanding ethical aspects in the development and use of AI systems, and for understanding problems that could arise from the lack of transparency of AI systems (one participant found it somewhat helpful for these). All seven interviewees expressed enthusiasm for the workshop design, offering no improvement suggestions when asked.
5. Discussion and Conclusions
The presented work contributes to research on the conceptualisation and design of interactive, gamified, informal learning spaces, in this case for developing critical AI literacy for youth. The results of the pilot test provide an indication of the potential of our concept and prototype to successfully guide attention, drive motivation and spark reflection throughout the learning experience. Self-reported assessments suggest promising potential for increasing key competences of AI literacy. To more effectively evaluate the efficacy of our approach, a more robust study design is needed, using more objective measures of acquired AI knowledge. Since the primary goal of this evaluation was to understand how participants interact with different artefacts, react to gamified elements and explore the physical space, primacy was given to nonintrusive, qualitative methods that would not interfere with participants' experience, so as to assure higher ecological validity.
A mechanism that stood out was self-directedness: participants could effectively choose activities at their own pace, and the activities allowed for experimentation with their own ideas, supporting their autonomy. An important element was also collaboration, allowing participants to exchange ideas and achieve goals together while competing with other teams. This combination of collaboration and competition seemed to support social motivation, fulfilling the need for relatedness. Various gamification elements facilitated participants' experience, allowing them to complete challenges, gain points, compete on leaderboards, solve riddles and reveal hidden parts of the black box of AI. Observational data and interview responses point to the role of the described elements both in a better perceived understanding of the underlying mechanisms of machine learning and in higher self-efficacy in interacting with AI, grasping its potentials, risks and the consequences of uncritical use. Contextualising the learnings within real-world AI use cases and providing accessible, no-code tools to explore how image recognition AI works in various applications provided a low entry barrier for engaging with AI and developing AI literacy.
The varied design of the different challenges, focusing on different topics, requiring different interaction styles, and combining analogue and digital artefacts, successfully kept the participants engaged. They were curious to explore and eager to repeat activities such as the Deepfake card game. We assumed that the younger generation would prefer digital artefacts, but analogue games engaged them just as well, especially when wrapped in a story, such as the CMIYC challenge. In contrast, reading instructions remained unattractive for this age group, with a clear preference for verbal explanations. This suggests that a combination of different media formats better sparks engagement in the younger generation, and that dynamic, interactive objects need to be carefully integrated into the experience so that important information provided as text still grabs their attention. It also shows that adding limited facilitator guidance (onboarding, questions, technical support) to self-directedness was a good choice. Although this test obtained positive results, the limitations of our learnings need to be pointed out. The small sample does not allow for generalisable conclusions, even more so due to the unsatisfactory gender representation, lacking female participants. Future research should consider ways of framing the learning experiences to attract female participants as well, or find ways to avoid self-selection and acquire a more diverse and representative sample. Additionally, a more objective measurement of AI competences could give a clearer understanding of the effectiveness of such approaches.
This work presents a preliminary exploration of the potential of experiential, gamified learning in addressing the challenges of teaching abstract, technical concepts. We have introduced the concept and prototype of an informal, exhibition-like learning space, integrating experiential learning with gamification elements, to demystify complex AI concepts, while supporting critical reflection on ethical implications. The developed design and prototype of the AI Exploratorium offer a blueprint for educators looking to facilitate critical AI literacy through playful approaches in various educational settings. Future work should focus on further investigating the effectiveness of this approach for diverse learner populations and integrating more rigorous and objective assessment methods to measure (long-term) impact on developing AI literacy. For this, we will make all the materials, data, and guides for organising and conducting the AI Exploratorium freely available online (iKIDO Web).
Acknowledgements
This work has been funded by the German Federal Ministry for Family Affairs, Senior Citizens, Women and Youth in the iKIDO project (Grant Number: 3923406K01). We thank Boryana Krasimirova for her contributions to the visual/interaction design of gamified elements and MINT Impuls in Berlin for the space and logistical support.
Ethics Declaration: Ethical clearance was not required. Consent forms, with information about the project, were signed by the participants' parents.
AI Declaration: In the creation of this paper, the AI tool Grammarly was used for grammar checks.
References
Bartle, R. (1996). Hearts, clubs, diamonds, spades: Players who suit MUDs. Journal of MUD research, 1(1), p.19.
Cole, B.E. (2019). "I Make Exhibits", [online], Contingent Magazine, https://contingentmagazine.org/2019/03/20/i-make-exhibits/
Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S. and Schiele, B. (2016). The cityscapes dataset for semantic urban scene understanding. In Proc. of the IEEE conf. on computer vision and pattern recognition (pp. 3213-3223).
Dangol, A., Newman, M., Wolfe, R., Lee, J. H., Kientz, J. A., Yip, J., & Pitt, C. (2024). Mediating Culture: Cultivating Sociocultural Understanding of AI in Children through Participatory Design. Proc. ACM DIS'24. (pp. 1805-1822).
De-Marcos, L., Domínguez, A., Saenz-de-Navarrete, J., & Pagés, C. (2014). An empirical study comparing gamification and social networking on e-learning. Computers & education, 75, 82-91.
Deterding, S., Sicart, M., Nacke, L., O'Hara, K., & Dixon, D. (2011). Gamification: using game-design elements in non-gaming contexts. In CHI'11 ext. abstr. on human fact. in comp. sys. (pp. 2425-2428).
Dourish, P. (2004). Where the action is: the foundations of embodied interaction. MIT press.
Falk, J.H. (2016). Identity and the Museum Visitor Experience. Routledge.
Gnoth, S. and Novak, J. (2025). Supporting AI Literacy Through Experiential Learning: An Exploratory Study. In Proc. of HCII 2025, LNCS 15806, Vol. 41, Springer, 2025
Google (2016). Quick, Draw! [online] Available at: https://quickdraw.withgoogle.com/.
iKIDO Web (2025). iKIDO Project. [online] Available at: https://ikido.info [Accessed 30 Apr. 2025].
Kampschulte, L. and Parchmann, I. (2015). The student-curated exhibition - a new approach to getting in touch with science. LUMAT: Int. Journal on Math, Science and Tech. Education, 3(4), pp. 462-482.
Kasinidou, M. (2023). AI literacy for all: A participatory approach. In Proc. of the 2023 Conf. on Innovation and Technology in Computer Science Education V. 2 (pp. 607-608).
Karras, T., Laine, S. and Aila, T. (2019). A style-based generator architecture for generative adversarial networks. In Proc. of the IEEE/CVF conf. on computer vision and pattern recognition (pp. 4401-4410).
Kolb, D.A., Boyatzis, R.E. and Mainemelis, C. (2014). Experiential learning theory: Previous research and new directions. In Perspectives on thinking, learning, and cognitive styles (pp. 227-247). Routledge.
Laufer, J. and Novak, J. (2025). The AI Workbench - An Interactive No-Code Tool for Fostering AI Literacy. Proc. of HCII 2025, LNCS 15806, Vol. 41, Springer, 2025
Long, D., Jacob, M. and Magerko, B. (2019). Designing co-creative AI for public spaces. In Proc. of the 2019 Conf. on Creativity and Cognition (pp. 271-284).
Long, D. and Magerko, B. (2020). What is AI Literacy? Competencies and Design Considerations. Proc. of the 2020 CHI Conf. on Human Factors in Computing Systems, pp.1-16.
Long, D., Padiyath, A., Teachey, A., & Magerko, B. (2021a). The role of collaboration, creativity, and embodiment in AI learning experiences. In Proc. 13th Conf. on Creativity and Cognition (pp. 1-10).
Long, D., Blunt, T. and Magerko, B. (2021b) Co-designing AI literacy exhibits for informal learning spaces. Proc. of the ACM on Human-Computer Interaction, 5(CSCW2), pp.1-35.
Mpfs - Medienpädagogischer Forschungsverbund Südwest. (2024). JIM-Studie 2024. [online] Available at: https://mpfs.de/studie/jim-studie-2024/.
Middleton, M. (2024) "Playful Exhibition Design for Everyone", [online], MuseumNext. https://www.museumnext.com/article/playful-exhibition-design-for-everyone/
Ng, D. T. K., Lee, M., Tan, R. J. Y., Hu, X., Downie, J. S., & Chu, S. K. W. (2023). A review of AI teaching and learning from 2000 to 2020. Education and Information Technologies, 28(7), 8445-8501.
Ng, D. T. K., Xinyu, C., Leung, J. K. L., & Chu, S. K. W. (2024). Fostering students' AI literacy development through educational games: AI knowledge, affective and cognitive engagement. J. Comp. Assist. Learn., 40(5), 2049-2064.
Ryan, R.M. and Deci, E.L. (2000). Self-determination Theory and the Facilitation of Intrinsic Motivation, Social Development, and Well-being. American Psychologist, 55(1), 68-78.
Sanusi, I.T., Oyelere, S.S., Vartiainen, H., Suhonen, J. and Tukiainen, M. (2022). A systematic review of teaching and learning machine learning in K-12 education. Edu. and Informat. Techn., 28.
Taylor, J. (2021). "How User Centred Design Can Help Museums Put People at the Centre of the Exhibition Design Process", [online], MuseumNext. https://www.museumnext.com/article/how-user-centred-design-can-help-museum-put-people-at-the-centre-of-the-exhibition-design-process/
Velander, J., Otero, N., & Milrad, M. (2024). What is critical (about) AI literacy? Exploring conceptualizations present in AI literacy discourse. In Framing Futures in Postdigital Education: Critical Concepts for Data-driven Practices (pp. 139-160). Cham: Springer Nature Switzerland.
Zhou, X., Van Brummelen, J., & Lin, P. (2020). Designing AI learning experiences for K-12: Emerging works, future opportunities and a design framework. arXiv preprint arXiv:2009.10228.