ABSTRACT
Artificial intelligence (AI) literacy has become an essential competency in higher education across disciplines, yet the teaching approaches and content requirements differ significantly between STEM and humanities fields. This mixed-methods study investigates these differences, focusing on the pedagogical strategies, AI literacy needs, and institutional gaps that exist between the two domains. A quasi-experimental design was applied using a structured questionnaire with 25 university students (12 from STEM and 13 from humanities). Quantitative data were analyzed through descriptive statistics, while qualitative data were examined using thematic analysis. The findings reveal that STEM students prioritize technical skills such as programming and algorithmic logic, whereas humanities students emphasize conceptual understanding, ethical reasoning, and the social impact of AI. Both groups express concern over insufficient institutional support for comprehensive AI training. The study identifies the need for adaptable, discipline-specific AI curricula and advocates for interdisciplinary learning environments that balance technical and ethical components. This research fills a gap in current literature by empirically comparing AI literacy frameworks across distinct academic traditions and proposes evidence-based recommendations for inclusive AI curriculum development.
RESUMEN
La necesidad de alfabetización en inteligencia artificial (IA) se ha convertido en un aspecto fundamental de la educación superior, pero las diferentes disciplinas STEM y humanidades presentan diferentes necesidades de formación y contenido. El estudio examina los estándares de alfabetización en IA y los métodos pedagógicos para estos campos académicos mediante métodos cuantitativos y cualitativos. Un diseño de investigación cuasiexperimental utilizó cuestionarios para 25 estudiantes universitarios, de los cuales 12 pertenecían a campos STEM y 13 a estudios humanísticos. El estudio revela que los estudiantes STEM necesitan competencias técnicas en IA, mientras que los estudiantes de humanidades se centran en la comprensión de los conceptos conceptuales de IA y los efectos éticos y sociales de la inteligencia artificial. Los métodos utilizados para impartir los diferentes materiales difieren, ya que los programas STEM se centran en experiencias de programación y formación en desarrollo de algoritmos, mientras que los cursos de humanidades enseñan habilidades analíticas y conocimientos multidisciplinarios. Los estudiantes de ambos ámbitos identifican la formación insuficiente en IA como deficiente en sus programas académicos. La investigación apoya las clases interdisciplinarias de IA, combinando la instrucción presencial con enfoques de aprendizaje en línea para cerrar esta brecha en la alfabetización en IA. La información recopilada respalda la investigación en educación en IA para desarrollar nuevos estándares curriculares y estrategias gubernamentales que mejoren la competencia en IA en todos los campos académicos.
KEYWORDS | PALABRAS CLAVE
AI Literacy, STEM Education, Humanities, Interdisciplinary Learning, Teaching Strategies, Curriculum Development, Ethics in AI.
Alfabetización en IA, educación STEM, humanidades, aprendizaje interdisciplinario, estrategias de enseñanza, desarrollo curricular.
1. Introduction
1.1. Background
The increasing integration of artificial intelligence (AI) into all sectors of society has necessitated a reevaluation of how higher education institutions prepare students to interact with and critically assess AI technologies. AI literacy encompasses not only technical proficiency in coding and data manipulation but also ethical reasoning, socio-cultural awareness, and interdisciplinary competence. While AI is traditionally associated with STEM disciplines due to its computational nature, recent developments in digital humanities, algorithmic bias research, and creative AI applications have positioned the humanities as critical participants in AI discourse (Wu et al., 2021). Despite this, educational programs often remain compartmentalized, with STEM students lacking ethical grounding and humanities students lacking technical training. Addressing this imbalance requires a comprehensive understanding of discipline-specific needs and the development of integrated AI literacy frameworks that reflect diverse educational contexts.
1.2. Rationale for the Study
While AI literacy has gained traction in both policy and academic circles, existing models often cater disproportionately to STEM disciplines, focusing on technical mastery rather than holistic understanding. At the same time, humanities-focused AI instruction tends to emphasize ethical and philosophical discussions while lacking practical exposure to core technologies. This creates an artificial divide that leaves students ill-equipped to navigate an AI-driven world that increasingly demands interdisciplinary fluency. Despite the abundance of literature on AI in education, few empirical studies have directly compared the AI literacy needs of STEM and humanities students using mixed-methods approaches, as established by Triplett (2023). This study aims to fill this research gap by analyzing how each group perceives AI, identifies gaps in their training, and prefers to learn about AI. The ultimate goal is to provide evidence-based insights that will inform the development of adaptable, inclusive, and discipline-sensitive AI curricula across higher education.
1.3. Literature Review
1.3.1. Defining AI Literacy
AI literacy involves a critical understanding and interpretation of technologies in multiple contexts and must be adapted to meet the needs of science, technology, engineering, and mathematics (STEM) and humanities students (Hwang et al., 2023; Stolpe & Hallström, 2024). AI education in STEM focuses on technical skills such as programming and machine learning, while in the humanities it encompasses ethical, cultural, and interpretive dimensions (Berry, 2022; Joseph & Uzondu, 2024; Yetisensoy & Rapoport, 2023). Therefore, developing comprehensive educational models requires adopting an interdisciplinary approach to ensure the development of balanced competencies in students (Mishara, 2024).
1.3.2. AI in STEM Education
Artificial intelligence (AI) is essential to STEM education, providing students with technical skills in predictive modeling, algorithms, and computer vision through tools such as Python and TensorFlow (Joseph & Uzondu, 2024; Roozafzai, 2025). Hands-on learning curricula and applied projects enhance these skills, especially when incorporating partnerships with industry to provide real-world experiences (Lee & Perret, 2022). However, integration with ethical and interdisciplinary aspects remains limited, undermining students' ability to critically reflect on the societal implications of AI.
1.3.3. AI in Humanities Education
AI has transformed the humanities through tools such as natural language processing and sentiment analysis, expanding the possibilities for research, text analysis, and artistic production (Groenewald et al., 2024; Yetisensoy & Rapoport, 2023). AI education curricula in the humanities focus on ethical and social aspects through discussions and case studies, away from direct technical training (Ouyang et al., 2023). However, the absence of programming and data science education limits students' ability to interact effectively with AI tools.
1.3.4. Challenges in AI Education Across Disciplines
A major challenge in AI education is the gap between technical education in STEM disciplines and conceptual approaches in the humanities, which hinders students' understanding of the ethical and social complexities of technology (Floridi, 2023; Mishara, 2024; Roozafzai, 2025). This divide is exacerbated by poor collaboration between AI developers and ethicists, keeping ethics on the margins of technical education, while the humanities lack practical training (Hwang et al., 2023). Therefore, adopting an interdisciplinary approach is essential to integrate ethical and technical considerations into AI education in a balanced manner.
1.3.5. Literature Gap
Despite the growing importance of AI in education, limited research has directly compared the AI literacy requirements and pedagogical strategies between STEM and humanities disciplines. Most existing studies focus exclusively on the integration of AI within STEM fields, exploring computational and algorithmic competencies (Lin, Huang, & Lu, 2023). A separate body of research examines AI literacy in the humanities, emphasizing ethics, media theory, and digital cultural analysis (Yetisensoy & Rapoport, 2023). However, these investigations often occur in silos, rarely intersecting to provide a holistic picture. There is also a lack of empirical studies that combine qualitative and quantitative data to explore how students across disciplines experience and understand AI instruction. Moreover, the absence of mixed-methods research prevents a nuanced understanding of how different learning environments, cultural contexts, and disciplinary expectations shape AI literacy development. This study seeks to address these shortcomings by offering a comparative analysis that bridges theoretical and empirical divides.
2. Materials and Methods
2.1. Research Design
This research adopts a mixed-methods design, combining quantitative and qualitative techniques to ensure a comprehensive understanding of AI literacy across disciplines. The use of a quasi-experimental survey allows for statistical comparisons between STEM and humanities students, while open-ended responses provide rich, contextual insights into individual experiences and perceptions (Alalaq, 2024). The rationale for this approach lies in the complexity of AI literacy, which cannot be fully captured through numerical data alone. By integrating both types of data, the study gains a more holistic view of the diverse ways in which students learn about and apply AI concepts. This methodology also enhances the validity of findings by triangulating evidence across multiple sources.
2.2. Population and Sampling
The study was conducted at the college level, drawing on a population of 100 faculty and students enrolled in STEM (N=50) and humanities (N=50) programs at a single higher education institution. To minimize selection bias, guard against under-representation of either group, and maximize generalizability, a stratified random sampling technique was applied so that STEM and humanities participants were represented in near-equal numbers (Nguyen et al., 2021). A final sample of 25 participants was judged feasible and methodologically sound, with a close balance across disciplines (STEM n=12, Humanities n=13).
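As an illustration of the stratified selection just described, the sketch below draws 12 STEM and 13 humanities participants from a hypothetical roster of 50 per stratum; the identifiers and the random seed are assumptions for demonstration only, not the study's actual sampling frame.

```python
# Minimal sketch of stratified random sampling (hypothetical roster).
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Assumed sampling frame: 50 STEM and 50 humanities students/faculty.
population = {
    "STEM": [f"stem_{i:02d}" for i in range(1, 51)],
    "Humanities": [f"hum_{i:02d}" for i in range(1, 51)],
}

# Target stratum sizes used in the study (12 STEM, 13 humanities, n = 25).
targets = {"STEM": 12, "Humanities": 13}

# Draw each stratum independently, without replacement.
sample = {
    stratum: random.sample(members, targets[stratum])
    for stratum, members in population.items()
}

for stratum, members in sample.items():
    print(stratum, len(members), members[:3], "...")
```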
2.3. Data Collection Instrument: Questionnaire
The study utilized a structured questionnaire containing three quantitative and four qualitative questions to capture statistical trends and participant perspectives.
✓ Quantitative Questions (Closed-ended)
* Rate the importance of AI literacy in your field
(1 = Not Important, 5 = Extremely Important).
* What is your preferred AI teaching method?
(a) Lectures, (b) Hands-on learning, (c) Discussions.
* How confident are you in your AI-related skills?
(Scale: 1-10).
✓ Qualitative Questions (Open-ended)
* What are the key challenges in developing AI literacy in your field?
* How should AI ethics be integrated into AI education?
* What resources would best support AI learning in your field?
* How can interdisciplinary collaboration enhance AI literacy?
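To make the link between the instrument and the analysis concrete, the sketch below encodes one participant's answers to these seven items as a simple record; the field names and the example values are illustrative assumptions, not drawn from the actual dataset.

```python
# Hypothetical encoding of a single questionnaire response.
from dataclasses import dataclass, field

@dataclass
class QuestionnaireResponse:
    group: str                 # "STEM" or "Humanities"
    importance: int            # Q1: 1 (not important) .. 5 (extremely important)
    preferred_method: str      # Q2: "lectures", "hands-on", or "discussions"
    confidence: int            # Q3: 1 .. 10
    open_ended: dict = field(default_factory=dict)  # Q4-Q7 free-text answers

example = QuestionnaireResponse(
    group="Humanities",
    importance=4,
    preferred_method="discussions",
    confidence=3,
    open_ended={
        "challenges": "Little structured AI training in my program.",
        "ethics": "Embed ethics in every AI-related course.",
        "resources": "Open-access courses and mentorship.",
        "interdisciplinary": "Joint workshops with computer science students.",
    },
)
print(example.group, example.importance, example.preferred_method)
```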
2.4. Data Analysis
Descriptive and inferential statistical analyses were used for the quantitative data, while qualitative responses were subjected to thematic coding. Quantitative responses were summarized using the mean, variance, and standard deviation (SD) to describe participants' perceptions of AI literacy, confidence levels, and preferred teaching methods. Differences between STEM and humanities participants in AI literacy needs and teaching preferences were tested with independent-samples t-tests (Nguyen et al., 2021). For the qualitative data, responses to the open-ended questions were analyzed using thematic coding, one of the most widely used qualitative methods, identifying patterns and recurring themes in participants' accounts. These responses were organized into themes covering key areas such as AI literacy challenges, ethics integration, ideal learning resources, and interdisciplinary collaboration (Anand et al., 2024). Combining quantitative trends with qualitative perspectives provides a holistic understanding of AI literacy needs across both STEM and the humanities.
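For readers who want to reproduce this kind of analysis, the sketch below shows one way the quantitative and qualitative steps might be implemented in Python with NumPy and SciPy; the ratings, response texts, and theme keywords are placeholders rather than the study's data, and Welch's t-test is used here as one reasonable form of independent t-test.

```python
import numpy as np
from scipy import stats
from collections import Counter

# Hypothetical 1-5 importance ratings (Question 1) by group; placeholder values.
stem = np.array([5, 4, 5, 3, 4, 5, 2, 4, 5, 3, 4, 5])    # n = 12
hum = np.array([4, 3, 5, 2, 4, 3, 5, 4, 2, 3, 4, 5, 3])  # n = 13

# Descriptive statistics: mean, sample variance, and standard deviation.
for name, grp in [("STEM", stem), ("Humanities", hum)]:
    print(f"{name}: mean={grp.mean():.2f}, "
          f"var={grp.var(ddof=1):.2f}, sd={grp.std(ddof=1):.2f}")

# Independent-samples t-test; Welch's variant does not assume equal variances.
t, p = stats.ttest_ind(stem, hum, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")

# Rough bookkeeping for thematic coding of open-ended answers: real thematic
# analysis is interpretive; keyword matching only illustrates the tallying step.
responses = [
    "very little institutional support for structured AI training",
    "courses and certifications are too expensive",
    "AI evolves faster than any curriculum can keep up with",
]
themes = {"institutional support": "institutional",
          "cost barriers": "expensive",
          "pace of change": "evolves"}
theme_counts = Counter(theme for r in responses
                       for theme, keyword in themes.items() if keyword in r)
print(theme_counts)
```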
3. Results
Question 1: How important do you consider AI literacy in your field?
With a standard deviation of 1.59 for the AI literacy importance ratings (Table 1), responses showed moderate variation, and participants also differed in their preferred AI learning methods (Table 2), with some favoring hands-on learning and others structured lectures (Contrino et al., 2024; Huda & Moh, 2022). This variation is attributed to participants' technical backgrounds and the prevalence of self-paced online learning as a flexible and accessible option (Chan, 2023). It also underscores the importance of adapting AI teaching approaches to the discipline, with technical fields leaning toward practical workshops and theoretical fields preferring discussion-based approaches.
Question 2: Which of the following AI learning methods do you prefer the most?
Likewise, the standard deviation of 0.77 indicates low variance in AI learning method preferences, with most respondents preferring hands-on learning (Table 2) (Seo et al., 2021). This variance is attributed to differences in AI education policies across institutions, which produce varying levels of AI awareness among professionals (Chan, 2023). The low standard deviation also suggests that AI literacy has not yet been widely adopted across professions, underlining the need for education tailored to each student's professional context (Su, Ng, & Chu, 2023).
Question 3: How confident are you in your current AI-related skills?
With a standard deviation of 1.97 (Table 3), participants' confidence in their AI skills shows moderate to high variability, reflecting differences in AI literacy. STEM students report higher confidence owing to formal training, while others struggle because of a lack of guided instruction or limited self-paced learning (Lin et al., 2023). The rapid development of AI also affects the confidence of professionals, even those with prior training (Seo et al., 2021). These findings highlight the need for flexible and adaptive educational programs that cater to learners at different levels (Contrino et al., 2024).
Question 4: What are the biggest challenges in improving AI literacy within your profession?
3.1. General Observations
Four themes of challenge emerged for improving AI literacy in professional fields: lack of institutional support, resistance to AI, financial and resource constraints, and the fast-changing nature of AI technology. Studies show that AI literacy is relevant to nearly all areas, from STEM to non-STEM fields, yet many institutions do not provide structured AI education. In addition, professionals in traditional industries question AI's relevance and therefore resist AI training (Hwang et al., 2020). Financial constraints are another important barrier, as AI courses and certifications are expensive (Kuleto et al., 2021). Finally, the rapid pace of change in AI technology means that learners can hardly keep up, requiring continuous education strategies (Dimitriadou & Lanitis, 2023).
3.2. Respondents' Feedback
* Respondent 5 stated, "There is very little institutional encouragement in AI training, and the professionals have to fight for AI skill development without structured programs." This response supports research showing that many organizations do not treat AI literacy as a priority, leaving educational experiences scattered (Mishara, 2024). When institutions underestimate the importance of AI training in professional development programs, employees are left to learn on their own, often with a fragmented understanding of how to apply AI to concrete problems (Holitschke, 2023). Tackling this challenge requires structured AI education both in workplaces and within universities.
* Respondent 8 mentioned, "Senior professionals in my industry believe AI is only for technical experts, so they resist AI training." This also indicates how risk-averse employees can be toward adopting AI, especially practitioners who have not been exposed to what AI can offer (Celik et al., 2022). Many in non-technical fields assume that AI belongs only to IT and engineering and are therefore discouraged from participating in AI literacy programs. Research suggests that AI training should focus on industry-relevant applications to increase engagement across disciplines (Chen, Chen, & Lin, 2020).
* Respondent 14 explained, "The cost of AI courses and certification programs is too high, making it difficult for professionals without employer sponsorship to access AI education." Financial barriers to AI literacy are well documented, particularly in developing regions, where the incentive to develop AI skills is undermined by the lack of financial support (Kuleto et al., 2021). The cost of AI training materials and courses is prohibitive for many professionals. Subsidized AI training and open access to learning resources should therefore be encouraged.
* Respondent 22 observed, "AI evolves so rapidly that by the time I complete an AI course, new advancements render parts of my knowledge outdated." This statement captures the challenge of keeping pace with the rapid development of AI (Dimitriadou & Lanitis, 2023). Continuous learning is required because many AI models and frameworks become obsolete within a few years. Adaptive AI curricula that are updated continuously help professionals stay current with the latest AI trends.
Question 5: What is the best way to integrate AI ethics into education and professional training?
3.3. General Observations
According to the feedback, four strategies are viable for incorporating AI ethics into curricula: embedding AI ethics into all AI-related courses, incorporating experiential learning, facilitating interdisciplinary collaboration, and including real-world case studies. Research indicates that ethical considerations in AI should not be left to the end of a course but treated as an integral part of AI literacy. Simulations and case studies are hands-on learning experiences that can make AI ethics more relatable (Hwang et al., 2020). Collaboration between technical and non-technical fields helps address these issues as far as possible (Mishara, 2024). Moreover, real-world case studies that illustrate the benefits and risks of AI help build ethical awareness (Chen et al., 2020).
3.4. Respondents' Feedback
* Respondent 7 suggested, "AI ethics should be embedded into all AI-related courses rather than treated as an optional subject." This statement corroborates research that underscores the need to incorporate AI ethics into mandatory AI education. When ethics is taught as a stand-alone course, students treat it as secondary to technical AI concepts. Embedding ethical issues within AI courses ensures that ethical considerations remain top of mind.
* Respondent 10 noted, "AI ethics should be taught through hands-on projects where students analyze real-life ethical dilemmas." Research supports experiential learning, through role-playing exercises and ethical AI simulations, as a way to improve learning outcomes (Celik et al., 2022). Students understand ethical challenges better when they engage with AI ethics through practical applications. This approach also aligns with AI education pedagogy based on problem-based learning (PBL) (Kuleto et al., 2021).
* Respondent 18 stated, "Bringing together philosophy, law, and technology experts ensures that AI ethics training is comprehensive." This response confirms the need for interdisciplinary collaboration in AI ethics education (Mishara, 2024). The impact of AI applications reaches across many fields, and its ethical implications go beyond technical issues. Ethical training is more holistic and practical when ethicists, legal scholars, and AI engineers are all involved.
* Respondent 24 mentioned, "Real-world AI failures, such as biased hiring algorithms, should be studied to understand ethical risks." Chen et al. (2020) provide case studies showing how ethical issues manifest in AI settings. By examining past AI failures, students and professionals can recognize potential bias, privacy, and accountability issues. This approach enhances ethical awareness by grounding abstract principles in actual consequences.
Question 6: What types of resources would help you learn AI more effectively?
3.5. General Observations
AI learning resources depend on needs and professional backgrounds. Four primary resources can help improve AI literacy: online courses, mentorship, hands-on projects, and academic research. Through online courses, learners can build AI knowledge at their own pace.
3.6. Respondents' Feedback
* Respondent 5 stated, "Self-paced online courses are the most convenient way for working professionals to learn AI." This supports research suggesting that online courses offer the accessibility and flexibility working individuals need as they juggle work and education. AI courses are available in various formats on platforms such as edX and Coursera and can be matched to each learner's level (beginner, intermediate, or advanced). However, some learners may lack the self-discipline required for self-paced learning.
* Respondent 8 mentioned, "Hands-on projects help me apply AI concepts in real-world scenarios, making learning more effective." This supports the idea that experiential learning can strengthen AI education by building practical skills (Hwang et al., 2020). Hands-on projects also develop critical thinking and problem-solving, which are essential for any AI-related career. AI knowledge remains abstract and impractical unless learners have a context for its different applications.
* Respondent 14 explained, "Having a mentor in AI would guide complex topics that are difficult to understand from books alone." Mentorship is widely regarded as a valuable learning tool, especially for early AI learners who need personalized guidance (Celik et al., 2022). Mentorship programs link learners with experienced practitioners who share their knowledge and help learners develop their careers. However, qualified mentors are hard to come by in many AI education settings.
* Respondent 22 observed, "Access to AI research papers and academic journals helps me stay updated on emerging AI technologies." This suggests that AI learning is partly research-driven (Chen et al., 2020). Reading peer-reviewed journals and attending conferences keeps learners abreast of developments in AI and is most valuable for advanced learners. Unfortunately, many high-quality AI research papers sit behind paywalls and are therefore inaccessible to independent learners.
Question 7: How can interdisciplinary collaboration enhance AI education and application in your profession?
3.7. General Observations
Interdisciplinary collaboration contributes to AI education by bringing together perspectives from different fields. It is vital for bridging the gap between AI and non-technical fields, nurturing innovation and ingenuity, addressing ethical considerations, and improving problem-solving approaches. AI is no longer restricted to the STEM fields; medicine, business, the social sciences, and the humanities are also reaping its benefits. Collaborative learning environments in which experts from different disciplines pool their knowledge support the co-development of AI solutions that are pragmatic and ethically sound (Hwang et al., 2020).
3.8. Respondents' Feedback
* Respondent 4 stated, "Collaborating with experts from other fields helps ensure that AI tools are designed for real-world applications." Research by Mishara (2024) confirms that interdisciplinary teamwork enhances the industry suitability of AI systems. When AI engineers partner with healthcare professionals, educators, and economists, they can develop AI models that meet specialized industry requirements. AI systems lose practical utility when professionals from other fields do not contribute to their development.
* Respondent 16 added, "Interdisciplinary collaboration allows us to explore AI's ethical implications more effectively." The growing problem of AI ethics can be addressed by involving social scientists, ethicists, and policymakers during AI development to achieve responsible AI technology (Dimitriadou & Lanitis, 2023). Ethical AI depends on diverse viewpoints because they help prevent scandals involving prejudice, discrimination, and privacy breaches. The exchange of ideas between AI experts and ethics professionals yields more advanced ethical principles for educational AI practices.
* Respondent 20 noted, "Bringing together professionals from different disciplines leads to innovative AI applications that would not emerge in isolated environments." Numerous studies, including Kuleto et al. (2021), have demonstrated that interdisciplinary AI projects achieve better creativity and innovation outcomes. AI applications have progressed in digital humanities, legal technology, and climate science because technical teams work alongside non-technical personnel.
* Respondent 25 observed, "Workshops and interdisciplinary research projects help professionals understand how AI applies to their respective fields." Findings published by Celik et al. (2022) demonstrate that AI educational programs benefit from collaborative workshop methods. Such workshops give members outside the technical sector better access to learning interactions that promote knowledge transfer.
4. Discussion and Conclusion
4.1. Key Findings
This study revealed substantial differences between STEM and humanities students in AI literacy needs, learning preferences, and perceived institutional support. The STEM cohort emphasized technical competencies such as programming and algorithm development, while the humanities group focused on ethical concerns, conceptual frameworks, and the societal implications of AI, as noted by Tasioulas (2021). These findings reinforce the argument that a uniform approach to AI education is inadequate.
4.1.1. Disciplinary Differences in AI Literacy Needs
The findings align with prior research by Yetisensoy and Rapoport (2023), which highlights the distinct priorities of STEM and humanities students. STEM students perceive AI as a tool for innovation and automation, requiring hands-on experience with algorithms, data analysis, and system development. Conversely, humanities students approach AI through the lens of critical inquiry and ethical implications. This distinction points to the need for AI curricula tailored to specific disciplinary goals while maintaining a shared foundation of ethical and social responsibility.
4.1.2. Variability in AI Learning Preferences
The data showed that while hands-on learning was favored overall, humanities students preferred discussion-based learning and real-world case studies. These preferences echo findings from Van Brummelen, Heng, and Tabunshchyk (2021), who advocate for active, contextually grounded AI education in non-technical fields. The high standard deviation in AI confidence levels further demonstrates that many students, especially those from non-STEM backgrounds, lack structured exposure to foundational AI concepts. This calls for differentiated instruction models and adaptive pathways that scaffold AI literacy for learners at varying levels of proficiency.
4.1.3. The Role of Policy Incentives in AI Literacy Integration
This study underscores the importance of institutional and governmental policy in shaping effective AI education. Institutions that embed AI education as a core requirement and offer faculty training for both technical and ethical dimensions are more likely to produce AI-literate graduates. Consistent with prior literature, our findings support the integration of policy mechanisms that incentivize interdisciplinary course offerings, cross-departmental collaboration, and publicly funded open-access resources.
4.1.4. Interdisciplinary Approaches to AI Literacy
A recurring theme in the qualitative data was the value of interdisciplinary collaboration. Respondents noted that co-teaching models and interdisciplinary workshops enriched their understanding of AI's broader implications. This aligns with Walter (2024), who argues that AI literacy must include both ethical foresight and technical fluency. The study confirms that joint initiatives between STEM and humanities departments can lead to richer, more inclusive AI curricula that reflect the multifaceted nature of AI's impact (Chapinal-Heras & Díaz-Sánchez, 2023).
4.2. Implications for Education
Educators must recognize that AI literacy is not a one-size-fits-all endeavor. For STEM disciplines, pedagogical strategies should emphasize practical applications such as coding, simulation, and algorithmic design, while incorporating modules on AI ethics and social accountability. For the humanities, instructors should ground AI concepts in real-world implications, ethical dilemmas, and interdisciplinary research projects.
4.2.1. STEM: Focus on Technical Applications
In STEM disciplines, AI education should focus on technical applications and coding, with statistics playing a significant role in quantitative fields such as economics. Project-based learning, simulations, and AI programming that apply machine learning models and automation tools in real-world scenarios greatly benefit STEM students (Xu & Ouyang, 2022). Research indicates that including AI in STEM explorations helps students tackle complex computational problems and strengthens the skills they need for future careers (Roozafzai, 2025). In addition, as part of AI literacy, future AI practitioners should be trained in data ethics, model transparency, and algorithmic accountability so that they understand the ethical aspects of AI and how to work with and utilize it responsibly (Stolpe & Hallström, 2024).
4.2.2. Humanities: Integrate Ethical and Societal Impacts
In the humanities, AI literacy is approached through a theoretical and ethical lens with attention to its wider societal implications (Yetisensoy & Rapoport, 2023). Case studies, ethical frameworks, and debates on how (if at all) AI should shape humanity should also form part of humanistic AI literacy (Lourdu Vesna et al., 2025). In this way, students can gain critical perspectives on AI policy, digital privacy, and AI-driven decision-making (Hutson et al., 2022). Students can also acquire AI literacy through the social sciences, law, media studies, and education, which gives them a better grasp of how AI interacts with questions of bias, misinformation, and labor market disruption (Roozafzai, 2025). In addition, humanities and STEM scholars can collaborate on interdisciplinary discussions of AI that promote balanced humanities and STEM perspectives in AI education (Xu & Ouyang, 2022).
4.3. Limitations
Despite its insights, this study has limitations. The small sample size limits generalizability. Self-reported data introduces the possibility of bias, and the disciplinary categorization did not fully account for sub-field nuances (e.g., philosophy vs. media studies within humanities). Future research should expand the sample size, use objective skill assessments, and explore sub-disciplinary distinctions.
4.3.1. Small Sample Size
Due to the small sample size (n=25), the study's findings are not generalizable. The aim was to use the small sample to draw preliminary conclusions about how AI literacy varies across disciplines. A larger, more robustly sampled population would allow statistically robust conclusions about AI literacy variations across disciplines (Roozafzai, 2025). Future studies should expand the sample to include students, educators, and professionals from various fields to provide a more detailed analysis (Lourdu Vesna et al., 2025).
4.3.2. Self-Reported Bias
The data used in the study are self-reported, which can introduce bias into AI confidence levels and perceptions of AI literacy (Hooda et al., 2022). Individuals may have over- or underestimated their AI proficiency, making the findings less reliable (Hutson et al., 2022). To mitigate this limitation, future research should also include objective AI proficiency assessments and performance-based evaluations (Stolpe & Hallström, 2024).
4.3.3. Disciplinary Representation
Even though the study's sample included STEM and humanities participants, more balance between subfields in each discipline would be desirable (Xu & Ouyang, 2022). This could be further explored in future research to examine AI literacy differences within different subspecialties of STEM (for example, engineering versus biology subspecialties) and humanities subspecialties (for example, philosophy versus media studies subspecialties) so that particular domain needs can be better understood (Roozafzai, 2025).
4.4. Recommendations
4.4.1. Interdisciplinary Workshops
Implement cross-disciplinary workshops to foster collaborative learning. Interdisciplinary AI workshops should be organized to promote teamwork between STEM and humanities students. Pedagogically, these workshops can introduce STEM students to the ethics of AI and humanities students to the technical aspects of AI tools and applications (Xu & Ouyang, 2022). Many students find it difficult to develop the full understanding needed to manage artificial intelligence without venturing beyond their own technical or non-technical background; by creating cross-disciplinary learning environments, universities can close this gap in students' AI literacy needs.
4.4.2. Policy Incentives for AI Literacy Integration
Advocate for institutional and national policy changes to support comprehensive AI literacy programs. Governments and academic institutions should introduce policy incentives, such as making AI literacy a core part of higher education (Roozafzai, 2025). It is also necessary to adopt policies that require AI ethics training for STEM students and technical AI modules for humanities students to form a balanced AI education framework (Alamäki et al., 2024). Additionally, funding should be provided for open-access AI resources, faculty development initiatives, and AI training programs so that broad AI literacy can be adopted by everyone (Stolpe & Hallström, 2024).
4.4.3. Customized AI Learning Paths
Design modular, adaptable AI education paths that reflect the unique needs of each discipline. Universities should be more adaptive in how they approach AI literacy for their students because a one-size-fits-all program does not work (Hutson et al., 2022). These pathways could include technical AI courses for STEM students on programming, machine learning, and applications in data science (Ogunkunle & Qu, 2020). For humanities students, AI policy and ethics courses could also be included (e.g., AI regulation, bias mitigation, digital rights) (Roozafzai, 2025). Additionally, interdisciplinary AI courses would let students learn across both the technical and ethical AI domains while actively working in teams to solve problems.
4.5. Conclusion
This study contributes to the discourse on AI literacy by highlighting the divergent needs and preferences of STEM and humanities students. It fills a critical gap in the literature by using empirical methods to explore how discipline-specific perspectives shape engagement with AI. To build a workforce and citizenry equipped for an AI-driven world, educational systems must evolve toward inclusive, interdisciplinary, and flexible AI literacy frameworks. The research documents significant differences between STEM and humanities disciplines in individual AI literacy needs and in how students wish to be taught. Results show that STEM fields favor applied, technical uses of AI, whereas humanities disciplines generally approach AI through its ethical and societal implications. The distinct preferred AI learning methods further indicate the need for adaptive, discipline-specific AI education models. For education, the implications are that AI literacy must be firmly embedded in higher education curricula, that STEM students should acquire technical AI skills, and that humanities students should deliberate on AI ethics and policy. The study also highlights the necessity of policy-driven reform in AI education through cross-disciplinary government initiatives on AI literacy. These efforts require an interdisciplinary approach to build a broad framework for AI education that supports the creation of responsible and ethically sound AI applications. Despite its small sample size and the potential for self-report bias, the study offers valuable insight into variations in AI literacy.
Support/Funding Statement
Funding: This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
References
Alalaq, A. (2024). Inteligência Artificial e sua Relação com as Humanidades. SciELO Preprints. https://doi.org/10.1590/SciELOPreprints.10389
Alamäki, A., Nyberg, C., Kimberley, A., & Salonen, A. O. (2024). Artificial intelligence literacy in sustainable development: A learning experiment in higher education. Frontiers in Education, 9, 1343406. https://doi.org/10.3389/feduc.2024.1343406
Anand, T., Chatrath, S. K., Ramachandran, J., Barra, G., Dale, N. F., Tze, H. K., et al. (2024). Antecedents of online learning effectiveness and its impact on educational ethics: A comparative study between India and Malaysia. Journal of Applied Structural Equation Modeling, 8(2), 1-26. https://doi.org/10.47263/JASEM.8(2)02
Berry, D. (2022). AI, ethics, and digital humanities (Vol. 1). University of Sussex. https://hdl.handle.net/10779/uos.23309129.v1
Celik, I., Dindar, M., Muukkonen, H., & Järvelä, S. (2022). The Promises and Challenges of Artificial Intelligence for Teachers: a Systematic Review of Research. TechTrends, 66(4), 616-630. https://doi.org/10.1007/s11528-022-00715-y
Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20(1), 38. https://doi.org/10.1186/s41239-023-00408-3
Chapinal-Heras, D., & Díaz-Sánchez, C. (2023). A review of AI applications in Human Sciences research. Digital Applications in Archaeology and Cultural Heritage, 30, e00288. https://doi.org/10.1016/j.daach.2023.e00288
Chen, L., Chen, P., & Lin, Z. (2020). Artificial Intelligence in Education: A Review. IEEE Access, 8, 75264-75278. https://doi.org/10.1109/ACCESS.2020.2988510
Contrino, M. F., Reyes-Millán, M., Vázquez-Villegas, P., & Membrillo-Hernández, J. (2024). Using an adaptive learning tool to improve student performance and satisfaction in online and face-to-face education for a more personalized approach. Smart Learning Environments, 11(1), 6. https://doi.org/10.1186/s40561-024-00292-y
Dimitriadou, E., & Lanitis, A. (2023). A critical evaluation, challenges, and future perspectives of using artificial intelligence and emerging technologies in smart classrooms. Smart Learning Environments, 10(1), 12. https://doi.org/10.1186/s40561-023-00231-3
Floridi, L. (2023). The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford University Press. https://doi.org/10.1093/oso/9780198883098.001.0001
Groenewald, E. S., Pallavi, P., Rani, S., Singla, P., Howard, E., & Groenewald, C. A. (2024). Artificial Intelligence in Linguistics Research: Applications in Language Acquisition and Analysis. Naturalista Campano, 28(1), 1253-1259. https://www.museonaturalistico.it/index.php/journal/article/view/239
Holitschke, S. (2023, August 30). The Role of the Arts and Humanities in Thinking About Artificial Intelligence (AI). Linkedin. https://www.linkedin.com/pulse/role-arts-humanities-thinking-artificial-intelligence-holitschke
Hooda, M., Rana, C., Dahiya, O., Rizwan, A., & Hossain, M. S. (2022). Artificial Intelligence for Assessment and Feedback to Enhance Student Success in Higher Education. Mathematical Problems in Engineering, 2022(1), 5215722. https://doi.org/10.1155/2022/5215722
Huda, S. M. A., & Moh, S. (2022). Survey on computation offloading in UAV-Enabled mobile edge computing. Journal of Network and Computer Applications, 201, 103341. https://doi.org/10.1016/j.jnca.2022.103341
Hutson, J., Jeevanjee, T., Vander Graaf, V., Lively, J., Weber, J., Weir, G., et al. (2022). Artificial Intelligence and the Disruption of Higher Education: Strategies for Integrations across Disciplines. Creative Education, 13(12), 3953-3980. https://doi.org/10.4236/ce.2022.1312253
Hwang, G.-J., Xie, H., Wah, B. W., & Gašević, D. (2020). Vision, challenges, roles and research issues of Artificial Intelligence in Education. Computers and Education: Artificial Intelligence, 1, 100001. https://doi.org/10.1016/j.caeai.2020.100001
Hwang, H., Allen, L., Stagnaro, K., & Kendeou, P. (2023). Public Discourse on the Science of Reading (SoR). PsyArXiv. https://doi.org/10.31234/osf.io/yqt78
Joseph, O. B., & Uzondu, N. C. (2024). Integrating AI and Machine Learning in STEM education: Challenges and opportunities. Computer Science & IT Research Journal, 5(8), 1732-1750. https://doi.org/10.51594/csitrj.v5i8.1379
Kuleto, V., Ilić, M., Dumangiu, M., Ranković, M., Martins, O. M. D., Păun, D., et al. (2021). Exploring Opportunities and Challenges of Artificial Intelligence and Machine Learning in Higher Education Institutions. Sustainability, 13(18), 10424. https://doi.org/10.3390/su131810424
Lee, I., & Perret, B. (2022). Preparing High School Teachers to Integrate AI Methods into STEM Classrooms. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 12783-12791. https://doi.org/10.1609/aaai.v36i11.21557
Lin, C.-C., Huang, A. Y. Q., & Lu, O. H. T. (2023). Artificial intelligence in intelligent tutoring systems toward sustainable education: a systematic review. Smart Learning Environments, 10(1), 41. https://doi.org/10.1186/s40561-023-00260-y
Lourdu Vesna, P. S. S., Kaul, P., Pal, S., & Murthy, B. S. R. (2025). Digital Divide in AI-Powered Education: Challenges and Solutions for Equitable Learning. Journal of Information Systems Engineering and Management, 10(21s), 300-308. https://doi.org/10.52783/jisem.v10i21s.3327
Mishara, P. (2024). The Ethical Implications of AI in Education: Privacy, Bias, and Accountability. Journal of Informatics Education and Research, 4(2), 3550-3556. https://doi.org/10.52783/jier.v4i2.1827
Nguyen, T., Novak, R., Xiao, L., & Lee, J. (2021). Dataset Distillation with Infinitely Wide Convolutional Networks. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, & H. Lin (Eds.), Advances in Neural Information Processing Systems 34 (NeurIPS 2021) (pp. 5186-5198). Curran Associates, Inc. https://proceedings.neurips.cc/paper/2021/file/299a23a2291e2126b91d54f3601ec162-Paper.pdf
Ogunkunle, O., & Qu, Y. (2020). A Data Mining based Optimization of Selecting Learning Material in an Intelligent Tutoring System for Advancing STEM Education. In 2020 International Conference on Computational Science and Computational Intelligence (CSCI) (pp. 904-909). IEEE. https://doi.org/10.1109/CSCI51800.2020.00169
Ouyang, F., Wu, M., Zheng, L., Zhang, L., & Jiao, P. (2023). Integration of artificial intelligence performance prediction and learning analytics to improve student learning in online engineering course. International Journal of Educational Technology in Higher Education, 20(1), 4. https://doi.org/10.1186/s41239-022-00372-4
Roozafzai, Z. S. (2025). Bridging the Gap Between Digital and Applied Humanities with AI: A Phenomenological Inquiry. In The 1st International Conference on Artificial Intelligence in the Era of Digital Transformation. https://www.researchgate.net/publication/386135159
Seo, K., Tang, J., Roll, I., Fels, S., & Yoon, D. (2021). The impact of artificial intelligence on learner-instructor interaction in online learning. International Journal of Educational Technology in Higher Education, 18(1), 54. https://doi.org/10.1186/s41239-021-00292-9
Stolpe, K., & Hallström, J. (2024). Artificial intelligence literacy for technology education. Computers and Education Open, 6, 100159. https://doi.org/10.1016/j.caeo.2024.100159
Su, J., Ng, D. T. K., & Chu, S. K. W. (2023). Artificial Intelligence (AI) Literacy in Early Childhood Education: The Challenges and Opportunities. Computers and Education: Artificial Intelligence, 4, 100124. https://doi.org/10.1016/j.caeai.2023.100124
Tasioulas, J. (2021, June 14). The role of the arts and humanities in thinking about artificial intelligence (AI). Ada Lovelace Institute. https://www.adalovelaceinstitute.org/blog/role-arts-humanities-thinking-artificial-intelligence-ai
Triplett, W. J. (2023). Artificial intelligence in STEM education. Cybersecurity and Innovative Technology Journal, 1(1), 23-29. https://doi.org/10.53889/citj.v1i1.296
Van Brummelen, J., Heng, T., & Tabunshchyk, V. (2021). Teaching Tech to Talk: K-12 Conversational Artificial Intelligence Literacy Curriculum and Development Tools. Proceedings of the AAAI Conference on Artificial Intelligence, 35(17), 15655-15663. https://doi.org/10.1609/aaai.v35i17.17844
Walter, Y. (2024). Embracing the future of Artificial Intelligence in the classroom: the relevance of AI literacy, prompt engineering, and critical thinking in modern education. International Journal of Educational Technology in Higher Education, 21(1), 15. https://doi.org/10.1186/s41239-024-00448-3
Wu, S.-H., Lai, C.-L., Hwang, G.-J., & Tsai, C.-C. (2021). Research Trends in Technology-Enhanced Chemistry Learning: A Review of Comparative Research from 2010 to 2019. Journal of Science Education and Technology, 30(4), 496-510. https://doi.org/10.1007/s10956-020-09894-w
Xu, W., & Ouyang, F. (2022). The application of AI technologies in STEM education: a systematic review from 2011 to 2021. International Journal of STEM Education, 9(1), 59. https://doi.org/10.1186/s40594-022-00377-5
Yetisensoy, O., & Rapoport, A. (2023). Artificial Intelligence Literacy Teaching in Social Studies Education. Journal of Pedagogical Research, 7(3), 100-110. https://doi.org/10.33902/JPR.202320866