Abstract

Although Artificial Intelligence (AI) is transforming teachers’ knowledge and professional practice, its full potential has yet to be realized. To incorporate AI effectively into pedagogical contexts, it is essential that teachers possess the knowledge necessary to guide its responsible use. However, in Latin America, there remains limited empirical evidence to support this process. To address this gap, this empirical study analyzes teachers’ knowledge of AI using the Intelligent-TPACK framework, which includes an ethical dimension. A validated and adapted questionnaire was administered to 709 primary and secondary school teachers from the Metropolitan Region of Chile, using a non-probability sampling method. The sample is compositional–descriptive in nature for the study variables and is not statistically representative of the broader population. Data were analyzed through descriptive and inferential statistical methods. The results reveal mixed levels of knowledge—slightly higher in technological knowledge yet lower in terms of integration and ethical awareness. Significant differences were found by gender, age, teaching level, and subject area. Regression models identified teaching experience, gender, and educational level as the most consistent predictors. Additionally, cluster analysis revealed four exploratory professional profiles characterized by varying degrees of knowledge. These findings are discussed in light of teacher training needs and aim to inform the development of professional learning programs better aligned with the actual demands of the teaching profession.

1. Introduction

Artificial Intelligence (AI) has forcefully entered the global conversation. Between 2022 and 2023 alone, human interactions with AI systems increased by more than 400% (Maslej et al., 2024). In response, governments have expressed growing concern about preparing citizens to confidently and responsibly face a future in which AI will be increasingly present (Berryhill et al., 2019; Lorenz et al., 2023; Cazzaniga et al., 2024). This concern is also reflected in the educational field, where teachers and students are already using various AI tools such as natural language processing, intelligent agents, computer vision, adaptive learning, data mining, and speech recognition, among others (Holmes & Porayska-Pomsta, 2023; Miao et al., 2021; Yim & Su, 2025; Williamson & Eynon, 2020; OECD, 2023). For teaching practice, the use of AI tools goes beyond generative AI or large language models such as ChatGPT.

Recent reviews show that some natural language processing tools allow teachers to save time on routine management and planning tasks, such as rubric design, diversification of assessment questions, automatic grading, and the preparation of new teaching materials. This enables teachers to dedicate more time to other activities such as feedback and assessment (Grassini, 2023; Labadze et al., 2023; Yan et al., 2024; Celik, 2023; Zawacki-Richter et al., 2019). These tools can also be leveraged during lessons to answer students’ questions and provide complementary explanations to those of the teacher, fostering peer interaction, conversation, and exchange of ideas, and thereby promoting more collaborative learning environments (Adel et al., 2024; Labadze et al., 2023; Lo, 2023).

Other reviews have examined the use of computer vision tools, highlighting their usefulness in facilitating real-time monitoring of classroom activities, as well as in analyzing students’ facial expressions to identify emotions and estimate levels of engagement during different moments of a lesson (Dimitriadou & Lanitis, 2023; Anwar et al., 2023). It has also been observed that some online learning platforms, combined with adaptive systems or technologies that use data mining, give teachers the ability to personalize instruction based on each student’s needs and pace. This feature helps teachers broaden their assessment strategies, anticipate low academic performance, and act promptly to address it (Dimitriadou & Lanitis, 2023; M. E. Dogan et al., 2023).

Given its potential, integrating AI tools into Education (AIED) offers a promising opportunity for professional teaching practice. It allows for better time management, broader access to knowledge, easier dissemination of information, and the promotion of more effective and personalized learning tailored to each individual. This has positioned AI as a key driver for advancing the future of education (Miao et al., 2023; UNESCO, 2019; Miao et al., 2021, 2022; Zhang & Aslan, 2021). Maximizing the benefits of AIED also involves engaging with the various aspects of teaching work. Usually, teachers gain more from AI when their professional development aligns with pedagogical goals and practical needs (S. Dogan et al., 2025). In this regard, beyond just the technological elements, AIED requires rethinking new scenarios to include AI in different parts of schooling such as management, classroom environment, teaching strategies, curriculum, assessment, educational needs, and the teacher’s professional role itself (Stolpe & Hallström, 2024).

The tasks teachers undertake both inside and outside the classroom are complex and involve various types of pedagogical, content, and technological knowledge specifically related to these areas (Shulman, 1986; Mishra & Koehler, 2006; Koehler & Mishra, 2008; Mishra, 2019; Mishra et al., 2023; Ning et al., 2024). Several studies suggest that teachers with high technological competence in AI are better equipped to select appropriate technological tools for educational purposes. This allows them, for example, to personalize instruction and provide timely feedback (Edwards et al., 2018; Popenici & Kerr, 2017). Conversely, teachers lacking these skills often fail to fully utilize the pedagogical opportunities that these tools offer (Joo et al., 2018). Thus, a strong pedagogical and didactic foundation in AI can enable teachers to improve teaching methods, increase motivation, and boost students’ academic achievement (Alé et al., 2025; Alé & Arancibia, 2025; Cavalcanti et al., 2021; Y. Wang et al., 2021). AI will not replace teachers’ work, but it has the potential to transform many areas of professional practice (Seufert et al., 2021). Therefore, for teachers to creatively and effectively utilize AI in their teaching and achieve successful educational integration, they must combine content, pedagogical, and technological knowledge (Celik, 2023; Mishra et al., 2023).

On the other hand, although AIED provides valuable opportunities for teaching and learning, it also introduces significant ethical challenges for education (Holmes et al., 2022; Stolpe & Hallström, 2024). Issues such as data privacy, algorithmic bias, discrimination, fairness, automation, access democratization, and respect for human rights are central topics in current debates (Almusharraf & Alotaibi, 2022; Bulathwela et al., 2024; Shum & Luckin, 2019; Kitto & Knight, 2019; European Commission, 2020; Williamson & Eynon, 2020; OECD, 2023). Some of these ethical concerns stem from a lack of transparency among technology developers, which reduces perceptions of fairness in these systems and, consequently, affects teachers’ and students’ trust in their use (Shin & Park, 2019).

In summary, AIED is transforming professional knowledge, with teachers’ technological expertise becoming just as important as their ability to communicate and integrate pedagogical, disciplinary, and ethical knowledge. This combination of skills can help ensure the responsible and safe use of AI.

Despite the importance of the teaching role in the processes of AI integration in education (OECD, 2023), little is still known about teachers’ knowledge of the subject (Sun et al., 2023; Kim et al., 2022; Luckin et al., 2022; Tan et al., 2024), and there is limited empirical evidence linking such knowledge to the AI ethical aspects involved (Celik, 2023; Holmes et al., 2022). This lack of evidence is reflected in the gap between the AI technology training provided by educational institutions and the actual needs expressed by teachers (Cukurova et al., 2024; Chiu & Chai, 2020; Ng et al., 2023; Tan et al., 2024; Zawacki-Richter et al., 2019).

In Latin American research, this topic is still emerging, unlike in countries such as China, the United States, and the United Kingdom (Maslej et al., 2024). Studies in Latin America have largely neglected the use of AI tools and ethics within the framework of technological, pedagogical, and content knowledge (TPACK), as seen in works by Sierra et al. (2024) and Kadluba et al. (2024). When they do address it, the focus is often limited to specific contexts (e.g., Castro et al., 2025). Additionally, there is limited evidence regarding teachers’ professional knowledge about using AI tools in their practice. This is especially true in Chile, where it remains unclear how teachers utilize AI’s potential in teaching or whether they are aware of how to manage it ethically. Thus, there is a need for studies that explore this knowledge and help tailor training and professional development to each specific context. To our knowledge, the study presented in this article is the first large-scale evaluation of teachers’ AI-related technological, pedagogical, content, and ethical knowledge in Chile.

Considering this background, the objective of the study presented in this article was to analyze trends in a large sample of teachers regarding technological, pedagogical, content, and ethical knowledge related to the use of AI tools in their professional practice.

2. Conceptual Framework

Below, we describe the relationship between the main concepts that make up the theoretical framework of the study, as introduced in the preceding sections: the Technological, Pedagogical, and Content Knowledge (TPACK) framework and Intelligent-TPACK, one of its most recent adaptations.

2.1. TPACK Framework

The Technological, Pedagogical, and Content Knowledge (TPACK) framework was conceptualized by Mishra and Koehler (2006) based on Shulman’s (1986, 1987) Pedagogical Content Knowledge (PCK) conceptual framework. PCK refers to the knowledge needed to transform disciplinary content into forms that are comprehensible and teachable to students through appropriate pedagogical strategies.

At its core, TPACK (see Figure 1) relates and integrates three types of knowledge: pedagogical (PK), content (CK), and technological (TK), while recognizing that each of them is specific and distinguishable from the others (Mishra & Koehler, 2006; Mishra, 2019).

The TPACK framework identifies new types of specific knowledge related to technology, pedagogy, and content. These are Technological Content Knowledge (TCK), Technological Pedagogical Knowledge (TPK), and Technological Pedagogical Content Knowledge (TPACK).

Technological Content Knowledge (TCK) refers to the understanding that allows teachers to represent theoretical concepts through technology, especially in creating new representations of theoretical and mental structures. It is a type of knowledge separate from pedagogical knowledge.

Technological Pedagogical Knowledge (TPK) involves understanding general pedagogical (and didactic) practices that teachers can use while integrating technologies. The focus is on how technology can support various teaching and learning goals; it is independent of specific content and applicable across any subject area.

In turn, Technological Pedagogical Content Knowledge (TPACK) represents an integrated understanding that helps teachers select, adapt, and use specific technologies to represent and transform particular content using relevant pedagogical or didactic strategies. It includes aligning objectives, content, methods, and assessments with the selected technology, students’ characteristics, and the teaching environment (Mishra & Koehler, 2006).

TPACK suggests that the process of technology integration is contextually based and shaped by environmental factors (Mishra, 2019). For instance, this process is influenced by teachers’ beliefs about how students learn, their hands-on experiences with what works or does not in their classrooms, different views on the role of technology in learning, teaching methods, and factors related to educational communities, among others.

In recent years, TPACK has been flexibly adapted to new contexts and educational settings, being applied across various subjects, teaching modalities, strategies, and professional profiles. For instance, Polly (2024) applied it in primary school mathematics classrooms with in-service teachers using educational platforms, simulators, and teaching activities. His study confirmed that TPACK can be implemented with diverse technologies and that its effectiveness is largely mediated by school context and teachers’ beliefs. Kuo and Kuo (2024) implemented an adaptation of TPACK-G, based on the studies of Hsu et al. (2013), to evaluate pre-service teachers’ knowledge in multiple areas when using digital games, finding that factors such as gender and prior experience with video games influenced their pedagogical and content knowledge levels. Cowan and Farrell (2023) applied TPACK with pre-service teacher mentors through virtual reality environments and found that, although their experience with this technology was limited, they recognized its pedagogical and didactic potential, as well as students’ role in the integration process. Krug et al. (2023) combined various technological tools to create 3D models and augmented reality applications in a seminar with pre-service science teachers (physics, chemistry, and biology), helping them improve self-efficacy, motivation, and confidence in using this technology for science teaching.

2.2. Intelligent-TPACK Framework

The Intelligent-TPACK framework was proposed by Celik (2023) to adapt the traditional TPACK model to the uses of major AI tools, incorporating activities related to automation and adaptive feedback. In addition, this framework expands the dimensions of the original model by incorporating ethical knowledge regarding the use of AI in education, so that teachers are able to assess whether they can recognize bias, ensure transparency and accountability, and promote equitable and fair learning.

Building on this, the Intelligent-TPACK framework (see Figure 2) proposes the existence of five new types of specific knowledge linked to pedagogy (PK) and content (CK), but within a context that is sensitive to the ethical dimension.

According to Celik (2023), each of the dimensions is described as follows:

“Intelligent-TK tackles the knowledge to interact with AI-based tools and to use fundamental functionalities of AI-based tools. This component aims to measure teachers’ familiarization level with the technical capacities of AI-based tools.

Intelligent-TPK addresses the knowledge of pedagogical affordances of AI-based tools, such as providing personal and timely feedback and monitoring students’ learning. Additionally, Intelligent-TPK evaluates teachers’ understanding of alerting (or notification) and how they interpret messages from AI-based tools.

Intelligent-TCK focuses on the knowledge of field-specific AI tools. It assesses how well teachers incorporate AI tools to update their content knowledge. This component also addresses teachers’ understanding of particular technologies that are best suited for subject-matter learning in their specific field.

Intelligent-TPACK is considered the core area of knowledge. It evaluates teachers’ professional knowledge to choose and use appropriate AI-based tools (e.g., intelligent tutoring systems) for implementing teaching strategies (e.g., monitoring and providing timely feedback) to achieve instructional goals in a specific domain.

Ethics evaluates the teacher’s judgment regarding the use of AI-based tools. The evaluation focuses on transparency, fairness, accountability, and inclusiveness.”

(Celik, 2023, p. 4)

Similarly to TPACK, successfully integrating AI tools into educational practice requires teachers to have a nuanced understanding of how these five components interact. This study specifically focuses on examining the knowledge components related to the technological aspect of Intelligent-TPACK (TK, TCK, TPK, and TPACK).

3. Methods

3.1. Design and Research Questions

This is primarily a quantitative survey-based study, with a descriptive-exploratory scope and a cross-sectional design. To achieve the proposed aim, the study sought to answer the following three research questions:

What levels of technological, pedagogical, content, and ethical knowledge are reported by a sample of teachers from the Metropolitan Region (Chile) regarding the use of AI in education?

Are there significant differences in teachers’ knowledge of AI according to sociodemographic, professional, and disciplinary variables such as gender, age, or subject taught?

What professional teacher profiles emerge from the combination of technological, pedagogical, content, and ethical knowledge regarding the integration of AI in their professional practice?

Answering these questions will enable the identification of trends and gaps in teachers’ knowledge about AI, informing the determination of teacher training and professional development needs.

To implement the research design, we followed three main procedures. First, an extensive literature review was carried out to select, adapt, and validate the Intelligent-TPACK instrument. Second, the validated instrument was administered to a large sample of teachers working in the Metropolitan Region of Chile. Finally, the main trends, factors, and profiles in the responses were analyzed. A detailed description of the three procedures is presented below.

3.2. Implementation of the Adapted Questionnaire

3.2.1. Population and Study Sample

The target population comprised approximately 70,000 active primary and secondary school teachers, distributed across nearly 2500 schools in the Metropolitan Region of Chile (Mineduc, 2024). To contact participants, a public database of institutional emails was obtained from official school websites. Invitations were sent via email, including a link to the Google Forms questionnaire, along with an explanation of the study’s aims, benefits, and ethical considerations. Additional invitations were extended during seminars and conferences attended by teachers, as well as through social media.

The questionnaire was piloted between November and December 2024 with an initial sample of 42 teachers from the Metropolitan Region. This process helped refine the items and scales. Subsequently, the main data collection took place between January and July 2025, using the refined version of the questionnaire, which was distributed via institutional email and shared during academic events and on social media platforms.

Data collection was conducted in the Metropolitan Region for two reasons. First, this region is home to approximately 10 million of Chile’s 19 million inhabitants and nearly 50% of the active primary and secondary teaching workforce (Mineduc, 2024). It also has a high density of schools, teacher training centers, and professional development networks, which facilitated the data collection process. Second, since this research was conducted within the framework of a doctoral thesis project, limiting the sample to this region ensured logistical and temporal feasibility. We acknowledge, however, that this decision poses a relevant limitation for the generalization of findings to other contexts. Nonetheless, it provides a fairly broad view of the Chilean teacher population.

Since randomness was not controlled in the invitation process, the sampling design was non-probability (self-selected volunteers contacted by institutional email, events, and social media).

To ensure that the study had a sufficiently large base for analysis, we calculated the finite-population sample size as a planning heuristic (Z = 1.96, p = 0.50, e = 0.04), which yielded a target of n = 596 using Formula (1) (L. Cohen et al., 2018; Tillé, 2020):

(1)   n = [Z² · p · (1 − p) / e² · N] / [(N − 1) + Z² · p · (1 − p) / e²]

It is important to emphasize that this computation assumes simple random sampling. Because our design was non-probabilistic, the formula is reported only as a reference for planning and does not justify reporting margins of error or confidence intervals for population inference.
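As a check, Formula (1) with the stated parameters (Z = 1.96, p = 0.50, e = 0.04) and a target population of roughly 70,000 teachers can be evaluated directly. A minimal sketch in Python (the function name is ours, for illustration only):

```python
import math

def finite_population_n(N, Z=1.96, p=0.50, e=0.04):
    """Finite-population sample size, Formula (1); assumes simple random sampling."""
    n0 = (Z ** 2) * p * (1 - p) / (e ** 2)     # infinite-population sample size
    return math.ceil(n0 * N / ((N - 1) + n0))  # finite-population correction, rounded up

# Target population of ~70,000 teachers (Section 3.2.1)
print(finite_population_n(70_000))  # 596
```

With N = 70,000 this reproduces the reported target of n = 596.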

Instead, the achieved sample should be interpreted as compositional–descriptive of the study variables. Descriptive estimates and comparative tests therefore apply strictly within the achieved sample, and not as representative parameters of the teacher population.

Additionally, efforts were made to ensure diversity in terms of demographic variables such as gender, educational level (primary or secondary), teachers’ age, and school type (public or private). Diversity was weighted according to the demographic structure of teachers in the Metropolitan Region. Participation quotas were established based on sociodemographic variables relevant to the Chilean school system, such as gender, educational level, administrative dependence, and geographical location.

By legal criteria, teachers working in levels prior to primary education were excluded, in line with Chilean regulations restricting the use of digital technologies with children of those ages.

A total of 712 teachers responded to the survey, exceeding the estimated minimum sample size of 596 participants (see Table 1). Of this total, three responses were excluded from the analysis due to the absence of informed consent, resulting in a final sample of N = 709 valid responses. Additionally, for the gender variable, eight responses that did not fit within the binary categories of male or female were excluded from comparative analyses.

Overall, the obtained sample reflected the distribution of the teacher population in the Metropolitan Region of Chile with reasonable consistency. However, in the case of the variable “school type,” the sample showed a discrepancy compared to the reference population.

3.2.2. Questionnaire Characteristics

The Intelligent-TPACK questionnaire by Celik (2023) was selected because it aligns with the aim of this study and incorporates both ethical aspects and updated pedagogical uses of AI, including feedback and personalized learning. The questionnaire consists of 27 items: five for TK, seven for TPK, four for TCK, seven for Intelligent-TPACK, and four for the ethics dimension. Each item was translated into Spanish while preserving the semantic structure of each statement, in line with the original conceptual definitions. This adaptation included slight wording adjustments to ensure clarity and the use of terms relevant to the local educational context. The Likert scale consists of 5 points, with options ranging from “strongly disagree” to “strongly agree”. In this study, the midpoint option (“neither agree nor disagree”) was removed to obtain clearer response trends. Thus, a 4-point scale was used with the following values: 1 = strongly disagree, 2 = disagree, 3 = agree, and 4 = strongly agree.

The questionnaire was expanded to include two additional sections. The first, an introductory section, contained closed demographic questions to gather teacher characteristics such as age, gender, subject taught, teaching level (primary or secondary), years of experience, and some school characteristics. The second section, following the demographic section, combined open and closed questions to enable teachers to describe their experiences with AI tools across various topics—such as climate change, gender equity, global citizenship, health, and emotional well-being—in different activities, including information search, content creation, rewriting, translation, conversation, data analysis, personalization, and automation, as well as in different areas of teaching, like lesson planning, learning environments, didactics, assessment, curriculum development, and professional responsibilities.

Finally, this adapted version was piloted with 42 Chilean teachers, who completed the questionnaire and provided qualitative feedback. Validation focused primarily on Principal Components Analysis (PCA) with Varimax rotation for item reduction, defining a new abbreviated version of the instrument. Then, using the main sample (N = 709), we conducted a Confirmatory Factor Analysis (CFA) in AMOS 26 (with maximum likelihood and 5000 bootstrap resamples). In the CFA, we evaluated standardized loadings, significance (p < 0.001), and global fit using χ2, df, χ2/df, CFI, TLI, NFI, GFI, AGFI, RMSEA (90% CI), and SRMR. Detailed evidence (matrices, item-level indices, and proposed modifications) is presented in Appendix A.

Additionally, to reinforce internal consistency, we conducted reliability tests using Cronbach’s alpha and McDonald’s ω, as well as tests of convergent and discriminant validity (CR, AVE).
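For reference, Cronbach’s alpha can be computed directly from the item and total-score variances. A minimal sketch in Python/NumPy, using a hypothetical 5-item, 4-point response matrix rather than the study data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 4-point Likert responses for a 5-item subscale (30 respondents)
rng = np.random.default_rng(42)
base = rng.integers(1, 5, size=(30, 1))                         # shared trait
items = np.clip(base + rng.integers(-1, 2, size=(30, 5)), 1, 4) # correlated items
print(round(cronbach_alpha(items), 3))
```

Because the simulated items share a common component, the resulting alpha is well above the 0.70 threshold mentioned in Section 4.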

3.2.3. Data Analysis Strategies

The analysis of closed-ended responses from Intelligent-TPACK was conducted using descriptive and inferential statistical strategies, processed in R 3.6.0+, RStudio 2025.09.0+387, SPSS 29, and AMOS 26.

First, to evaluate internal consistency, we calculated Cronbach’s alpha for each dimension. Second, to determine the distribution type of the data and select the most appropriate tests, normality assumptions were checked using the Kolmogorov–Smirnov and Shapiro–Wilk tests. Third, to identify statistically significant differences between groups of teachers, comparative analyses were performed: the Mann–Whitney U test was applied for dichotomous variables (gender, educational level) and the Kruskal–Wallis H test for variables with more than two categories (administrative dependence, age, teacher evaluation level, and subject taught). Additionally, to compare scores among the dimensions of the Intelligent-TPACK model, the Friedman test was used, complemented by Wilcoxon post hoc analyses, which identified specific contrasts within the model itself. Fourth, to examine the strength and direction of associations between continuous variables (age and years of experience) and the model dimensions, Spearman correlations were calculated. Fifth, to explore the predictive capacity of different variables for each model dimension, multiple linear regression models were developed, reporting ANOVA results, the coefficient of determination (R2), standardized betas, and collinearity diagnostics. Finally, a cluster analysis using the k-means algorithm was conducted, which identified and described four professional teacher profiles based on their responses.
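As an illustration of the final clustering step, a minimal k-means (Lloyd’s algorithm) can be sketched as follows. The study itself used standard statistical software; the implementation and the score matrix below (per-teacher mean scores on the five dimensions) are ours, for illustration only:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal Lloyd's k-means: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # random initial centers
    for _ in range(n_iter):
        # Distance of every point to every centroid, then nearest-center assignment
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned points
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# Hypothetical per-teacher mean scores on the five dimensions (TK..Ethics), 1-4 scale
rng = np.random.default_rng(7)
scores = rng.uniform(1, 4, size=(120, 5))
centroids, labels = kmeans(scores, k=4)
print(np.bincount(labels))  # size of each of the four profiles
```

In the study, the analogous input was the teachers’ dimension scores, with k = 4 yielding the four exploratory professional profiles.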

Since the comparative analyses were performed using non-parametric tests, all descriptive summaries report the median (Mdn) and interquartile range (IQR) by dimension and group, with means and standard deviations added only for reference. We also calculated and reported appropriate effect sizes for each test: r for Mann–Whitney, ε2 for Kruskal–Wallis, and Kendall’s W for Friedman. Furthermore, to compare observed levels against theoretical reference points (2.5 and 3.0), one-sample Wilcoxon tests with effect size r were applied.
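The effect size r for the Mann–Whitney test can be obtained from the normal approximation of U (z divided by the square root of the total sample size). A minimal sketch in Python/NumPy, without tie correction in the variance and with hypothetical group scores (not the study data):

```python
import numpy as np

def rankdata(a):
    """Ranks (1-based), averaging over ties."""
    a = np.asarray(a, dtype=float)
    order = a.argsort()
    ranks = np.empty(len(a))
    ranks[order] = np.arange(1, len(a) + 1)
    for v in np.unique(a):                 # average the ranks of tied values
        mask = a == v
        ranks[mask] = ranks[mask].mean()
    return ranks

def mann_whitney_r(x, y):
    """Effect size r = z / sqrt(N) from the normal approximation of U."""
    n1, n2 = len(x), len(y)
    ranks = rankdata(np.concatenate([x, y]))
    U = ranks[:n1].sum() - n1 * (n1 + 1) / 2   # U statistic for sample x
    mu = n1 * n2 / 2
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (U - mu) / sigma
    return z / np.sqrt(n1 + n2)

# Hypothetical 4-point Likert scores for two groups
group_a = np.array([3, 4, 2, 4, 3, 3, 4, 2, 3, 4], dtype=float)
group_b = np.array([2, 3, 2, 1, 3, 2, 2, 3, 1, 2], dtype=float)
print(round(mann_whitney_r(group_a, group_b), 2))
```

By construction r is signed and antisymmetric in the two groups; conventional benchmarks (|r| ≈ 0.1, 0.3, 0.5) are often used to read small, medium, and large effects.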

4. Results

First, as shown in Table 2, the main results of the internal consistency tests for each dimension of the questionnaire—estimated using Cronbach’s α and McDonald’s ω (maximum likelihood and 5000 bootstrap resamples)—were favorable. In all cases, the values exceeded the minimum recommended threshold of 0.70, supporting the reliability of the subscales (Pallant, 2020; George & Mallery, 2021). All subscales reached acceptable or very good levels of internal consistency. The TK dimension yielded the highest indices (α = 0.886; ω = 0.891), indicating very good reliability. TPK showed acceptable values (α = 0.793; ω = 0.805), and TCK, TPACK, and Ethics also demonstrated very good internal consistency, though slightly lower compared to other dimensions. Therefore, these results indicate that each group of items adapted from the Intelligent-TPACK questionnaire presents satisfactory internal reliability.

Meanwhile, the results of the CFA showed significant standardized factor loadings above 0.50 for all items, supporting convergent validity (Hair et al., 2019). The correlations between factors (r, r2) ranged from moderate to high, with several exceeding 0.85. The global fit indices indicated an acceptable fit in CFI/IFI (CFI = 0.921; IFI = 0.921; TLI = 0.896) and a low SRMR (0.042), along with a high RMSEA (RMSEA = 0.113; 90% CI [0.106–0.120]), suggesting that the model could be improved. The complete CFA results are reported in Appendix A.

The means obtained for each dimension ranged between 2.099 (Ethics) and 2.665 (TK), on a 1-to-4 Likert scale, suggesting a general trend between low and moderate in teachers’ perceptions regarding their mastery of technological, pedagogical, and content knowledge related to AI tools.

To determine the type of data distribution and define whether parametric or non-parametric tests should be used in subsequent analyses, three types of normality tests were applied: Kolmogorov–Smirnov with Lilliefors correction, Shapiro–Wilk, and D’Agostino–Pearson (Field, 2024). Kolmogorov–Smirnov and Shapiro–Wilk results are presented in Table 3.

The results for the five dimensions indicated significant values (p < 0.001), which allows us to reject the null hypothesis of normality in data distribution. Therefore, the distribution of the data is not normal, justifying the use of non-parametric statistics in subsequent analyses.

At the same time, considering that the Kolmogorov–Smirnov and Shapiro–Wilk tests are sensitive to large samples and tend to reject the null hypothesis of normality even with slight deviations, we decided to complement the normality analysis with the omnibus D’Agostino–Pearson test, which integrates sample size, skewness, and kurtosis.
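For reference, the D’Agostino–Pearson omnibus test (which combines skewness and kurtosis into a single K² statistic) is implemented in common statistical libraries. A minimal sketch, assuming SciPy is installed, applied to a deliberately skewed synthetic sample:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
skewed = rng.exponential(scale=1.0, size=300)  # clearly non-normal sample

k2, p = stats.normaltest(skewed)               # D'Agostino-Pearson omnibus K^2
print(f"K2 = {k2:.2f}, p = {p:.3g}")           # small p -> reject normality
```

A small p-value here leads to the same decision reported below: rejecting the normality hypothesis and proceeding with non-parametric tests.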

The omnibus D’Agostino–Pearson test results also confirmed the rejection of normality in all five I-TPACK components: TK (K2 = 30.82, p < 0.001), TPK (K2 = 9.42, p = 0.009), TCK (K2 = 12.63, p = 0.002), TPACK (K2 = 25.47, p < 0.001), and Ethics (K2 = 24.00, p < 0.001). All tests yielded p < 0.01, with TK, TPACK, and Ethics reaching p < 0.001. Consequently, given that all three tests consistently rejected the normality hypothesis, non-parametric tests were used for the comparative analyses.

In the next sections, specific results were organized based on the research questions.

4.1. What Levels of Technological, Pedagogical, Content, and Ethical Knowledge Are Reported by a Sample of Teachers from the Metropolitan Region (Chile) Regarding the Use of AI in Education?

Based on the main sample (N = 709), general descriptive statistics were calculated for each dimension of the model (TK, TPK, TCK, TPACK, and Ethics). Table 4 presents the means, standard deviations (SD), medians (Mdn), interquartile ranges, skewness and kurtosis coefficients, along with 95% confidence intervals for the mean. These measures were reported to support the non-parametric analyses used in the subsequent sections.

As shown in Table 4, the medians for TPK, TCK, TPACK, and Ethics are around 2.00, indicating a low to moderate level of perceived knowledge. Only the TK component exceeds the theoretical midpoint (2.5). Similarly, the mean values are highest for TK = 2.67 (SD = 0.91; CI95% [2.60–2.73]), followed by TPK = 2.52 (SD = 0.81; [2.46–2.58]), and lower values for TCK = 2.45 (SD = 0.86; [2.38–2.51]), TPACK = 2.23 (SD = 0.88; [2.16–2.29]), and Ethics = 2.10 (SD = 0.84; [2.04–2.16]). On a 1–4 scale, this reflects predominantly low to moderate levels.

Additionally, general descriptive statistics were calculated for each of the variables analyzed (gender, educational level, school type, age, and school subject). Median (IQR) and Mean (SD) results are presented in the Appendix B tables.

For the gender variable, males and females presented nearly identical medians across all dimensions (Mdn ≈ 2.67 for TK, TPK, and TCK; 2.33 for TPACK and Ethics; IQR = 1.00), suggesting very similar central distributions. However, when observing the means and standard deviations, slight differences emerged in favor of males, who reported slightly higher scores in TK (2.85 (0.84) vs. 2.57 (0.93)) and in Ethics (2.28 (0.91) vs. 2.01 (0.79)).

Regarding educational level, secondary school teachers tended to outperform primary school teachers. The median for secondary was TK = 3.00 (1.00) and TPK = 2.67 (1.33), while for primary it was 2.33 (1.00) in both dimensions. This pattern is also seen in the means, with TK for secondary teachers at 2.84 (0.86) vs. 2.42 (0.92) for primary, and TPK at 2.62 (0.81) vs. 2.37 (0.78).

For school type, the highest values were found in fully private institutions (Private 3), with medians of 3.00 in TK, TPK, and TCK, and 2.67 in TPACK and Ethics, along with means such as TK = 2.90 (0.85) and TCK = 2.69 (0.79). In contrast, municipal schools (Public 1) showed the lowest values, with medians of 2.33 and mean scores around TK = 2.70 (0.84).

Age revealed an interesting pattern. The medians suggest that the 50–60 age group recorded the highest values, with 3.00 in TK, TPK, and TCK, and 2.67 in TPACK and Ethics. However, the means reveal a different trend: younger groups, aged 20–30 and 30–40, achieved better results in TK, with 3.05 (0.73) and 2.95 (0.78), respectively, compared to the 50–60 group with only 2.32 (0.94).

Regarding the school subject variable, the subject Science for Citizenship stood out, with a median of 4.00 (IQR = 1.50–2.00) and high means in TK (3.11 (1.07)) and TPK (3.00 (1.26)). Biology also stood out, with Mdn = 3.33 and TK = 3.14 (0.78), as did Philosophy, with Mdn = 3.33 and TK = 3.13 (0.76). On the opposite end, Indigenous Cultures presented the lowest scores and greatest dispersion, with medians around 1.33–1.50 and TK = 1.92 (1.42).

4.2. Are There Significant Differences in Teachers’ Knowledge of AI According to Sociodemographic, Professional, and Disciplinary Variables Such as Gender, Age, or Subject Taught?

4.2.1. Group Differences Analysis

Possible significant differences in Intelligent-TPACK responses were analyzed according to teachers’ sociodemographic and professional variables. Since the normality tests previously conducted indicated significant deviations from a normal distribution, non-parametric tests were applied. For comparisons between two groups (e.g., gender or teaching level), the Mann–Whitney U test was used, while for variables with more than two categories (such as age, teacher evaluation band, subject, and school type), the Kruskal–Wallis H test was employed.
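This two-track testing strategy can be sketched with SciPy as follows; the group scores below are simulated stand-ins (loosely matching the reported TK means), not the study’s data, and the effect-size formulas (r = |z|/√N for Mann–Whitney, ε² = (H − k + 1)/(n − k) for Kruskal–Wallis) are the conventional ones:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Simulated TK-style scores for two groups (illustrative, not the study's data)
male = np.clip(rng.normal(2.85, 0.84, 200), 1, 4)
female = np.clip(rng.normal(2.57, 0.93, 500), 1, 4)

# Two-group comparison: Mann-Whitney U, with effect size r = |z| / sqrt(N)
u, p = stats.mannwhitneyu(male, female, alternative="two-sided")
z = abs(stats.norm.ppf(p / 2))                 # z recovered from the two-sided p
r = z / np.sqrt(len(male) + len(female))
print(f"U = {u:.0f}, p = {p:.4g}, r = {r:.2f}")

# More than two categories (e.g., age bands): Kruskal-Wallis H,
# with epsilon-squared effect size eps2 = (H - k + 1) / (n - k)
g1 = np.clip(rng.normal(3.0, 0.8, 150), 1, 4)
g2 = np.clip(rng.normal(2.6, 0.8, 150), 1, 4)
g3 = np.clip(rng.normal(2.3, 0.9, 150), 1, 4)
h, p_kw = stats.kruskal(g1, g2, g3)
eps2 = (h - 3 + 1) / (450 - 3)
print(f"H = {h:.1f}, p = {p_kw:.4g}, eps2 = {eps2:.3f}")
```

The same pattern generalizes to any of the grouping variables analyzed here; only the group vectors change.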

The results of the Mann–Whitney U test (see Table 5) were interpreted using a significance threshold of p < 0.01 to minimize the risk of Type I error in multiple comparisons (Field, 2024).

Using this criterion, statistically significant differences were found between males and females in four of the five model dimensions. The only dimension not meeting this stricter threshold was TPK (p = 0.013); although its medians are identical across groups, the means differ slightly, suggesting a modest shift in the distributions that falls short of significance under the corrected criterion. Moreover, the effect sizes were small (r = 0.09–0.14), indicating that the gender differences, while statistically significant, are of very small magnitude (these can also be contrasted with the values reported in Appendix B).

Regarding teaching level, the results showed statistically significant differences in all model dimensions, with higher scores among secondary school teachers compared to primary school teachers. In this case, effect sizes were somewhat larger (r = 0.14–0.23), with small to moderate magnitudes, confirming higher outcomes for secondary teachers.

As for the Kruskal–Wallis test results (Table 6), no statistically significant differences were observed by school type, and effect sizes were low in all cases (ε2 ≤ 0.014), indicating that this variable did not have a relevant impact on teachers’ perceptions.

In contrast, the variable age showed statistically significant differences across all five dimensions (p < 0.001), with small to moderate effect sizes (ε2 = 0.03–0.12).

Lastly, subject taught also presented significant differences in all dimensions (p ≤ 0.003), though with small effect sizes (ε2 = 0.028–0.079).

Additionally, to analyze possible differences in teachers’ self-perceptions across the Intelligent-TPACK components, Friedman’s non-parametric test for related samples was applied. The main results indicated statistically significant differences among the model’s components (χ2 (4) = 651.309; p < 0.001), suggesting that score distributions were not equivalent across the five analyzed dimensions. This result justified applying post hoc comparisons to identify which dimensions significantly differed from one another. For this, Wilcoxon signed-rank tests were applied with correction for the ten pairwise comparisons, and effect sizes were calculated to complement the interpretation (see Table 7).

The post hoc comparisons revealed significant differences across all pairs. TK scored the highest, and Ethics the lowest, with the largest gap recorded between these two (Z = −16.141, r = 0.61, large effect). Large effects were also found between TK and TPACK (Z = −14.748, r = 0.55) and between TPK and Ethics (Z = −15.121, r = 0.57). Moderate-to-large effects appeared between TCK and Ethics (r = 0.49) and between TPK and TPACK (r = 0.46). In contrast, the smallest differences were observed between TPK and TCK (Z = −3.623, r = 0.14, small effect) and between TPACK and Ethics (Z = −6.394, r = 0.24, small-to-medium effect).
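A minimal sketch of this Friedman-plus-post-hoc workflow, run on simulated related samples whose means loosely mimic the reported dimension ordering (not the study’s data), could look like this:

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(3)
dims = ["TK", "TPK", "TCK", "TPACK", "Ethics"]
means = [2.7, 2.5, 2.45, 2.2, 2.1]
# Simulated related samples: five dimension scores per teacher (not the study's data)
data = {d: np.clip(rng.normal(m, 0.8, 300), 1, 4) for d, m in zip(dims, means)}

# Friedman test for related samples across the five dimensions
chi2, p = stats.friedmanchisquare(*data.values())
print(f"chi2(4) = {chi2:.2f}, p = {p:.4g}")

# Post hoc: Wilcoxon signed-rank per pair, Bonferroni-corrected for 10 comparisons
alpha = 0.05 / 10
for a, b in combinations(dims, 2):
    w, p_pair = stats.wilcoxon(data[a], data[b])
    z = abs(stats.norm.ppf(p_pair / 2))    # z recovered from the two-sided p
    r = z / np.sqrt(len(data[a]))          # effect size r = z / sqrt(n)
    sig = " *" if p_pair < alpha else ""
    print(f"{a} vs {b}: p = {p_pair:.4g}{sig}, r = {r:.2f}")
```

In this sketch the pairs with the largest simulated mean gaps (e.g., TK vs. Ethics) produce the largest r values, paralleling the ordering reported in Table 7.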

4.2.2. Correlation Analysis

Since years of teaching experience were recorded as exact values, this variable was treated as continuous. By contrast, age was collected in predefined categories (e.g., 20–30, 30–40, etc.), so it was treated as ordinal in the comparative analyses. Spearman’s rank correlation was used because it does not assume normality and captures monotonic associations between variables. Nominal variables such as gender, school type, or subject were excluded from this analysis, as their categories are unordered and therefore not suitable for rank correlation. Table 8 summarizes the most relevant results.

All correlations were negative and statistically significant, indicating that both greater age and more years of teaching experience are associated with lower self-perceptions in each of the Intelligent-TPACK dimensions. The strongest relationships were observed in Technological Knowledge (TK) (rho = −0.349 and rho = −0.330), corresponding to moderate negative effects and suggesting that younger and less experienced teachers reported greater self-perceived knowledge in their technological handling of AI tools in professional practice. Small negative correlations were also observed in TCK and TPK (rho ≈ −0.20 to −0.22), indicating that older or more experienced teachers report relatively lower knowledge for integrating technology into their subject areas and teaching practices.
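As a sketch of this rank-correlation step, the snippet below computes Spearman’s rho between a simulated experience variable and a simulated TK-style score (illustrative values only, not the study’s data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
# Simulated data: years of experience and a TK-style score (not the study's data)
experience = rng.uniform(0, 35, 709)
tk = np.clip(3.3 - 0.03 * experience + rng.normal(0, 0.7, 709), 1, 4)

# Spearman's rho is rank-based, so it requires no normality assumption
# and captures monotonic (not necessarily linear) associations
rho, p = stats.spearmanr(experience, tk)
print(f"rho = {rho:.3f}, p = {p:.2e}")
```

With a mild negative trend built into the simulation, rho comes out moderately negative, the same direction as the age and experience correlations reported above.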

4.2.3. Multiple Linear Regression

This multiple linear regression analysis was conducted to identify which sociodemographic and professional variables (such as age, years of experience, gender, school type, teaching level, career stage, and subject) significantly predict teachers’ perceptions in the Intelligent-TPACK model. We verified the main assumptions of the model: the Durbin–Watson values (1.61–1.73) confirmed the independence of errors; the Q–Q plots and residuals versus predicted values showed no relevant deviations from linearity or normality; homoscedasticity was consistent based on standardized residuals and robust HC3 estimations; multicollinearity was low (VIF < 4; tolerance > 0.20); and finally, no influential cases were detected.

To interpret the regression model, several complementary analyses were applied. First, ANOVA was used to determine whether the overall model was statistically significant, that is, whether the set of predictors significantly explained the dependent variable. Next, the adjusted coefficient of determination (Adjusted R2) was examined, indicating the percentage of variance explained by the model, adjusted for the number of predictors. Standardized coefficients (β) were then analyzed to identify which individual variables had significant predictive weight. Finally, collinearity indicators (VIF and tolerance) were reviewed to verify predictor independence and strengthen the robustness of the model.
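The assumption checks described above (independence of errors via Durbin–Watson, multicollinearity via VIF) can be illustrated with a NumPy-only sketch; the study itself used SPSS, and the predictors and coefficients below are hypothetical stand-ins, not the study’s data:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 709
# Hypothetical predictors (illustrative stand-ins for the study's variables)
exp_yrs = rng.uniform(0, 35, n)                 # years of experience
gender = rng.integers(0, 2, n).astype(float)    # gender dummy
level = rng.integers(0, 2, n).astype(float)     # teaching-level dummy
X = np.column_stack([np.ones(n), exp_yrs, gender, level])
y = 3.0 - 0.03 * exp_yrs - 0.2 * gender + 0.3 * level + rng.normal(0, 0.7, n)

# OLS fit and residuals
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Durbin-Watson statistic: values near 2 suggest independent errors
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

# Variance inflation factor per predictor: VIF_j = 1 / (1 - R^2_j),
# where R^2_j comes from regressing predictor j on the remaining columns
def vif(X, j):
    others = np.delete(X, j, axis=1)
    b, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
    res_j = X[:, j] - others @ b
    return X[:, j].var() / res_j.var()

print("coefficients:", np.round(beta, 3))
print("Durbin-Watson:", round(dw, 2))
print("VIFs:", [round(vif(X, j), 2) for j in range(1, X.shape[1])])
```

With independently generated predictors, the VIFs sit near 1 and the Durbin–Watson statistic near 2, the benchmarks against which the values reported in the text (DW 1.61–1.73, VIF < 4) are judged.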

As shown in Table 9, ANOVA confirmed that all multiple regression models were statistically significant (p < 0.001), indicating that the set of sociodemographic and professional variables included improved prediction of teachers’ perceptions in each Intelligent-TPACK dimension compared to a null model (without predictors).

The complete model was statistically significant for all Intelligent-TPACK dimensions, indicating that in each case at least one of the sample-level predictors explained a meaningful share of variance.

The interpretation of these models is complemented by the adjusted R2 values for each Intelligent-TPACK dimension. Since human behavior in the social sciences is influenced by many factors that were not measured or analyzed here, relatively low adjusted R2 values are expected, and their interpretation should be contextualized accordingly.

Based on this criterion, the models show (see Table 10): TK with a medium effect (Adjusted R2 = 0.173), TCK with a small effect at the upper bound of that range (Adjusted R2 = 0.127), and TPK, TPACK, and Ethics with small effects (Adjusted R2 = 0.069, 0.074, and 0.090, respectively). This indicates that approximately 6.9% to 17.3% of the variance in teachers’ perceptions of their technological, pedagogical, content, and ethical knowledge in AI can be explained by sociodemographic and professional variables, with the greatest effects in TK and TCK.

To deepen the understanding of these results, a detailed analysis was conducted to identify which sociodemographic and professional variables had greater or lesser predictive weight. Table 11 presents the standardized coefficients (β) and statistical significance values for each Intelligent-TPACK predictor.

The multiple regression across the five Intelligent-TPACK dimensions shows that for this sample, the most consistent predictors were teaching level, years of experience, and gender.

Age presented small negative effects in TK (β = −0.202; p = 0.002) and TCK (β = −0.161; p = 0.017), though not in the remaining dimensions.

Years of experience showed small negative effects in TK (β = −0.168; p = 0.009), TPK (β = −0.224; p = 0.001), and TCK (β = −0.141; p = 0.032), and a moderate effect in TPACK (β = −0.294; p < 0.001).

Female gender was associated with lower scores, with small effects in TK (β = −0.142; p < 0.001) and Ethics (β = −0.139; p < 0.001).

Teaching level showed positive and significant effects across all dimensions except TPK. In contrast, variables such as school type and subject taught did not yield statistically significant effects in any dimension.

Collinearity indicators (tolerance > 0.2 and VIF < 4) ruled out redundancy issues among predictors, supporting the validity of the models.

Finally, these results suggest that, within this sample, being female and teaching at the primary level in Chile—regardless of school type or subject—were consistent predictors associated with lower self-perceptions of knowledge for using AI tools.

4.3. What Professional Teacher Profiles Emerge from the Combination of Technological, Pedagogical, Content, and Ethical Knowledge Regarding the Integration of AI in Their Professional Practice?

Cluster Analysis

To complement the previous statistical analyses, a cluster analysis was conducted to identify teacher professional profiles according to their levels of competence across the five dimensions of the Intelligent-TPACK. A combined strategy was applied, beginning with an exploratory hierarchical analysis using Ward’s method and Euclidean distance to observe the structure of natural groupings through a dendrogram (see Figure 3).

Internal validity was assessed using the silhouette measure of cohesion and separation obtained through the TwoStep Cluster procedure. The average silhouette score was 0.20, indicating weak to moderate separation among the four clusters. This suggests that although the profiles are distinguishable, they should be considered exploratory. Additionally, to assess the stability of the solution, the analysis was replicated in a random subsample comprising 50% of the cases. The cluster structure and the main patterns of the centers remained consistent with those of the full sample.

Based on these results, the final partition was defined using the K-means algorithm, with an optimal number of four clusters. The choice of four clusters was justified through visual analysis of the hierarchical dendrogram. According to Hair et al. (2019), this type of cut allows the optimal number of groups to be defined when a marked difference in intergroup variance is observed. In this case, the structure evidenced four well-differentiated groupings. The K-means algorithm was implemented in SPSS with default initialization (no explicit seed), listwise deletion, and a maximum of 20 iterations (convergence criterion = 0.001).
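The Ward-then-K-means strategy can be sketched in Python (here with scipy and scikit-learn rather than SPSS) on simulated data whose four seeded profile centers loosely mimic the profiles described in this section; the data and cluster centers are illustrative assumptions, not the study’s:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(9)
# Simulated teacher-by-dimension matrix on the 1-4 scale, with four seeded
# profiles loosely mimicking those reported (not the study's data)
centers = np.array([
    [3.2, 3.1, 3.0, 3.1, 3.0],   # high across all dimensions
    [2.2, 2.1, 2.1, 2.0, 1.9],   # intermediate-low
    [1.4, 1.3, 1.3, 1.2, 1.2],   # critical (very low)
    [3.2, 2.1, 2.0, 1.9, 1.8],   # high TK, low elsewhere
])
X = np.clip(np.repeat(centers, 175, axis=0) + rng.normal(0, 0.35, (700, 5)), 1, 4)

# Exploratory step: Ward linkage with Euclidean distance (basis for the dendrogram)
Z = linkage(X, method="ward")
hier_labels = fcluster(Z, t=4, criterion="maxclust")

# Final partition: K-means with k = 4, then internal validity via silhouette
km = KMeans(n_clusters=4, n_init=10, max_iter=20, random_state=0).fit(X)
print("cluster sizes:", np.bincount(km.labels_))
print("silhouette:", round(silhouette_score(X, km.labels_), 2))
```

Because all five dimensions share the same 1–4 range, the sketch, like the study, clusters raw dimension means without additional standardization.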

The analysis was conducted on average scores per dimension, considering that all scales shared the same range (1 to 4), which allowed for direct comparison without additional standardization.

Before performing the cluster analysis, the integrity of the database was verified. No missing values were detected in the five dimensions. Descriptive statistics confirmed that all variables fell within the expected range of the Likert scale (1–4), with standard deviations below 1.0, indicating adequate variability. During the inspection of boxplots and extreme values, no anomalies were observed; therefore, no cases were removed for outlier detection.

Additionally, to enhance the comparative analysis between clusters, the Kruskal–Wallis test was applied for each dimension, along with effect size calculations using Kruskal–Wallis η2. The results confirmed significant differences across all dimensions (TK: H = 536.92, p < 0.001; TPK: H = 494.18, p < 0.001; TCK: H = 538.43, p < 0.001; TPACK: H = 487.02, p < 0.001; Ethics: H = 456.97, p < 0.001). The estimated effect sizes were high in all dimensions (TK = 0.76, TPK = 0.70, TCK = 0.76, TPACK = 0.69, and Ethics = 0.64). Dunn’s post hoc comparisons with Bonferroni correction showed that all pairwise contrasts were significant (p < 0.001), except between Clusters 2 and 4 in the TPACK dimension (p = 1.000).

Finally, four professional teacher profiles were established, as presented in Table 12.

From the data in Table 12, it can be observed that Cluster 1 (21.3%) corresponds to a professional profile with high scores in all dimensions of the Intelligent-TPACK, including the Ethics dimension, with values above option 3 of the Likert scale, reflecting a moderate degree of agreement. Cluster 2 (28.6%) represents a profile with intermediate-low levels and no particularly outstanding areas, as scores are concentrated around option 2, indicating moderate disagreement. Cluster 3 (19.6%) is characterized as a critical profile, with very low values across all dimensions, particularly in integration and ethics, with averages close to option 1, reflecting a high level of disagreement. Finally, Cluster 4 (30.5%) shows a professional profile with high technological knowledge (TK above option 3) but low levels in the other dimensions, once again highlighting weakness in ethics.

Overall, 78.8% of teachers (the sum of Clusters 2, 3, and 4) were classified into exploratory profiles with low knowledge regarding the pedagogical and ethical use of artificial intelligence. This result should be interpreted cautiously given the non-probabilistic sampling and the moderate internal validity, but it still suggests that more than three-quarters of participants show limitations in integrating AI into teaching, which may represent a challenge for initial teacher education and professional development in Chile.

Furthermore, to visually communicate the four clusters according to variables such as gender and years of teaching experience, five scatter plots were created and are presented in Figure 4.

The results in Figure 4 visually display some of the differences identified in the comparative analyses. For example, as years of experience increase, scores tend to decrease across all Intelligent-TPACK dimensions, suggesting lower appropriation of AI among older teachers and those with longer career trajectories. Regarding gender, some relevant trends are also observed. For instance, in the TK and TPK components, Cluster 4 shows greater predominance among women, whereas Cluster 1 appears more frequently among men.

5. Discussion and Conclusions

With the widespread adoption of new AI tools, forms of interaction and multiple fields of professional knowledge have begun to change (OECD, 2023; Holmes, 2023; Seufert et al., 2021; Mishra et al., 2023). This transformation reaches different levels of teaching work, such as school management, pedagogy, curriculum, and assessment, raising the need for a more complex and better-adjusted pedagogical perspective (Stolpe & Hallström, 2024; K. Wang, 2024; S. Dogan et al., 2025).

In this context, the successful integration of any technology largely depends on teachers’ knowledge (Mishra & Koehler, 2006; Mishra, 2019). However, a gap persists in understanding how this knowledge is specifically articulated with the use of AI tools (Sun et al., 2023; Kim et al., 2022; Luckin et al., 2022; Tan et al., 2024; Celik, 2023). Moreover, at the international level, this situation has created significant challenges and mismatches between the training provided by educational institutions and the real needs expressed by teachers regarding the integration of AI tools (Cukurova et al., 2024; Chiu & Chai, 2020; Ng et al., 2023; Tan et al., 2024; Zawacki-Richter et al., 2019). This has been especially critical in regions with lower research productivity, such as Latin America (Maslej et al., 2024).

Therefore, to help reduce this gap, the purpose of this study was to analyze trends in teachers’ knowledge in Chile using the Intelligent-TPACK framework, which, due to its qualities, has become a robust, flexible, and suitable model to guide this type of analysis (S. Dogan et al., 2025; Celik & Dogan, 2025).

After reviewing the results, we were able to generate discussions and draw conclusions for each research question.

5.1. What Levels of Technological, Pedagogical, Content, and Ethical Knowledge Are Reported by a Sample of Teachers from the Metropolitan Region (Chile) Regarding the Use of AI in Education?

Given that distributions deviated from normality, we report medians and interquartile ranges (Mdn, IQR) in the Discussion, complementing them with means and standard deviations (M, SD) in parentheses. To substantiate claims about ‘low’ or ‘moderate’ levels on a 4-point scale, we ran one-sample Wilcoxon signed-rank tests against the conceptual midpoint (2.5) and, as a stricter benchmark, against 3.0, reporting test statistics, p-values, and effect sizes.

First, based on the results of this study, we found that the levels of knowledge reported by the participating teachers on the use of AI in education are mixed, though with a general tendency toward low levels. AI-TK showed the highest central tendency (Mdn = 2.67, IQR = 1.33; M = 2.67, SD = 0.91), followed by AI-TPK (Mdn = 2.33, IQR = 1.00; M = 2.52, SD = 0.81) and AI-TCK (Mdn = 2.33, IQR = 1.00; M = 2.45, SD = 0.86). Lower central values appeared in AI-TPACK (Mdn = 2.00, IQR = 1.33; M = 2.23, SD = 0.88) and Ethics (Mdn = 2.00, IQR = 1.33; M = 2.10, SD = 0.84). To substantiate these interpretations, one-sample Wilcoxon signed-rank tests were conducted against the conceptual midpoint (2.5) and, as a stricter benchmark, against 3.0. AI-TK was significantly higher than 2.5 (Z = 5.27, p < 0.001, r = 0.20), yet significantly lower than 3.0 (Z = −8.44, p < 0.001, r = 0.32), indicating slightly above-midpoint rather than high knowledge. AI-TPK (Z = 0.67, p = 0.502, r = 0.03) and AI-TCK (Z = −1.63, p = 0.104, r = 0.06) did not differ from 2.5 but were clearly lower than 3.0 (TPK: Z = −13.49, p < 0.001, r = 0.51; TCK: Z = −13.80, p < 0.001, r = 0.52). By contrast, both AI-TPACK and Ethics were significantly below 2.5 (TPACK: Z = −8.12, p < 0.001, r = 0.30; Ethics: Z = −11.63, p < 0.001, r = 0.44) and below 3.0 (TPACK: Z = −17.45, p < 0.001, r = 0.65; Ethics: Z = −19.18, p < 0.001, r = 0.72), confirming low levels in these dimensions.
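The one-sample benchmark procedure used here is a standard application of the Wilcoxon signed-rank test to the differences between scores and a fixed value; a sketch with SciPy on simulated Ethics-style scores (not the study’s data) is:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated Ethics-style scores on the 1-4 scale (not the study's data)
ethics = np.clip(rng.normal(2.1, 0.85, 709), 1, 4)

for benchmark in (2.5, 3.0):
    # One-sample Wilcoxon signed-rank: test the scores against a fixed benchmark
    w, p = stats.wilcoxon(ethics - benchmark)
    z = abs(stats.norm.ppf(p / 2))        # z recovered from the two-sided p
    r = z / np.sqrt(len(ethics))          # effect size r = z / sqrt(n)
    print(f"vs {benchmark}: p = {p:.3g}, r = {r:.2f}")
```

As expected, the stricter benchmark (3.0) yields the larger effect size, matching the pattern of the TPACK and Ethics results reported above.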

This aligns with findings from other related studies, although comparisons must be made with caution given the different scales and populations involved. For example, Karatas and Atac (2025) implemented the Intelligent-TPACK on a 7-point scale with 304 pre-service teachers of English as a second language. Their results showed a similar pattern to ours, with relatively higher values in AI-TK and AI-TCK, and somewhat lower scores in AI-TPK and AI-TPACK integration. The weakest dimension was Ethics, confirming that issues such as transparency, bias, and responsible use remain critical and pending areas for strengthening.

Similarly, Gregorio et al. (2024) applied the Intelligent-TPACK (7-point scale) to 212 pre-service teachers and found consistently high levels across all dimensions. Within this overall trend, AI-TCK was the strongest dimension, while AI-TPACK and Ethics ranked lower, again echoing the pattern observed in our data.

Velander et al. (2023) used the Intelligent-TPACK framework in a study with in-service K-12 teachers and university trainers in Sweden. While the study was conducted in a very different context and with a smaller sample, it also highlighted that much AI knowledge is acquired incidentally and often reflects partial or even erroneous conceptions. At the same time, teachers acknowledged the potential benefits of AI for personalization and learning monitoring, consistent with the mixed strengths we observed in Chilean teachers.

Saenen et al. (2024) applied the I-TPACK model in focus groups with teachers and students in Flanders, and Castro et al. (2025) studied rural secondary teachers in Chile. Although their contexts were different, both studies revealed challenges in achieving effective technological integration and, especially, in incorporating ethical considerations—once again consistent with the relative weakness of the Ethics dimension in our study.

It is worth noting that the higher levels associated with the TK component observed in this study do not necessarily stem only from teachers’ “formal” academic training (such as government-led workshops, official courses, peer evaluation sessions, or curricular update conferences), since this knowledge can also be acquired through informal or self-guided means such as online courses, tutorial videos, MOOCs, etc. This nuance may be key to understanding and explaining the gap between TK and the other dimensions.

Finally, more broadly, the low score in the Ethics dimension in this study is relevant, and consistent with the growing international debate on the ethical risks of AI (Holmes et al., 2022; OECD, 2023). This also aligns with previous studies that warned about teachers’ limited ethical preparedness to face AI challenges (Celik, 2023; Karatas & Atac, 2025; Gregorio et al., 2024) and highlights the urgent need to strengthen this dimension in both initial and continuing teacher education programs.

5.2. Are There Significant Differences in Teachers’ Knowledge of AI According to Sociodemographic, Professional, and Disciplinary Variables Such as Gender, Age, or Subject?

We found significant differences across all analyzed variables except for school type (public or private); however, given the unbalanced sample distribution by school type, this result should be interpreted with caution. The highest scores were obtained by young male teachers working in secondary education and teaching subjects such as Biology, Technology, and Natural Sciences—i.e., STEM disciplines. Regression models confirmed that experience, gender, and educational level are the most consistent sample-level predictors of responses in the Intelligent-TPACK. These trends have also been observed in similar studies (Cai et al., 2016; Diao et al., 2024; Møgelvang et al., 2024). For example, Cai et al. (2016) conducted a meta-analysis and concluded that men generally hold more favorable attitudes toward technology use than women, although with a small effect size. Differences by age and experience were also confirmed across all dimensions, suggesting that younger, less experienced teachers perceive themselves as more knowledgeable in the use of AI tools. This trend aligns with recent findings. For example, Zeng et al. (2022) conducted a meta-analysis on teacher self-efficacy and found that age and career stage moderate the relationship between digital self-efficacy and TPACK, while gender does not play a significant moderating role.

On the other hand, although we did not observe statistically significant differences between public and private school teachers, this finding should not be overestimated or generalized. In our sample, the distribution of teachers according to “school type” did not adequately reflect the actual composition of the teaching population in the Metropolitan Region, which may have introduced bias in the analysis of differences. Therefore, the absence of significant differences should be interpreted with caution, as it may be influenced by methodological procedures. Nonetheless, beyond this limitation, some recent studies have reported that, regardless of whether a school is public or private, the adoption of AI in schools largely depends on the availability of resources, school culture, and institutional support. For example, Zhao et al. (2025), through an empirical study with 202 secondary school teachers in China using an adapted instrument to measure AI usage intention and innovation-diffusion, found that the main factors mediating effective AI use were facilitating conditions (infrastructure/support), career aspirations, and perceived usefulness. This aligns with Kaufman et al. (2025), who, in a report from the Research and Development Corporation (RAND), noted that teachers and principals in schools serving more disadvantaged populations use AI tools less frequently and less effectively than those in better-resourced institutions with stronger institutional guidance. Similarly, Traga and Rocconi (2025), in a survey of 242 primary and secondary teachers, found that teachers identified ongoing professional development workshops and the creation of clear policies and “best practice” guidelines as their main support needs to effectively and responsibly integrate AI into their classrooms.

Although we did not identify studies that specifically examine sociodemographic differences using the Intelligent-TPACK model, there are clear points of connection with the international Teaching and Learning International Survey (TALIS). Similarly to the analyses conducted in our study, the TALIS offers comparative perspectives on teachers, teaching, and learning across countries, including personal and contextual characteristics of teachers such as age, gender, professional experience, and educational level (OECD, 2019, 2020). In general terms, some of the most relevant findings from TALIS 2018 showed that participation in professional development is nearly universal, although gaps persist in critical areas such as the use of technologies for teaching, where many teachers report needing training to learn more advanced applications relevant to their professional practice (OECD, 2019). Additionally, the survey found that collaborative practices among teachers—such as peer feedback and joint professional learning sessions—are infrequent, despite their positive impact in supporting innovative practices in the classroom (OECD, 2020). These patterns are consistent with our findings, where moderating factors such as age, gender, and educational level are associated with varying levels of teachers’ technological knowledge (TK, TCK, TPACK-AI).

In its most recent framework, TALIS 2025 has already incorporated specific questions regarding the use of artificial intelligence tools (OECD, 2024, 2025). It is worth noting that, as of now, this version has not yet been implemented, so no results are available. However, the framework used for its development is publicly accessible. In future rounds, TALIS will ask, for example, whether teachers have received training related to AI, whether they perceive a need to acquire new competencies to integrate these technologies into their pedagogical practice, and their level of agreement with the different roles that AI could play in teaching—such as lesson planning, material adaptation, student support, administrative automation, and ethical dilemmas. In the future, these results could allow for comparisons with our findings to identify new points of convergence and divergence.

5.3. What Professional Teacher Profiles Emerge from the Combination of Technological, Pedagogical, Content, and Ethical Knowledge Regarding AI Integration in Their Professional Practice?

The cluster analysis identified four exploratory professional teacher profiles regarding Intelligent-TPACK.

The first group, a minority (21.3%), was characterized by high scores across all dimensions, with strong self-perception in integrating AI pedagogically and ethically. The second and largest group (30.5%) corresponds to a profile with high valuation of technological knowledge (strong TK) but clear limitations in the other dimensions, especially Ethics. A third group (28.6%) reported low-to-intermediate perceived knowledge, without particularly strong areas, reflecting an emerging profile. Finally, a fourth group (19.6%) represented a critical profile, with very low scores across all dimensions, indicating serious difficulties in incorporating AI into teaching practice.

Aggregating these profiles, we found that 78.8% of the participating teachers fall into profiles with low or very low Intelligent-TPACK levels, suggesting that the main challenges go beyond technological mastery and focus on training to integrate AI critically, fairly, and responsibly. These findings align with recommendations from leading international frameworks, which suggest updating teacher professional development plans to move from a technical perspective—focused solely on technological and programming skills—towards a more critical and comprehensive view of AI’s benefits and risks, connected to multiple dimensions of teaching. In this way, well-prepared teachers can understand, apply, and create with AI tools while also strengthening their ethical competencies (Miao & Cukurova, 2024; Miao et al., 2024; Holmes et al., 2022; OECD, 2023).

The results on professional profiles suggest that there is no single starting point or single path to achieve successful AI integration in Chilean education. On the contrary, it is necessary to design specific training trajectories that address the needs of each profile and gaps associated with specific variables. For example, young men teaching STEM subjects tend to report higher knowledge in nearly all Intelligent-TPACK dimensions. Therefore, training strategies must be sensitive and take into account factors such as gender, age, and subject.

5.4. Key Implications

5.4.1. Implications for Public Policy

The evidence from this study provides an updated view of teachers in the Metropolitan Region of Chile and their perceptions of technological, pedagogical, disciplinary, and ethical knowledge regarding AI.

In general, TK was relatively higher (slightly above the midpoint); however, all other dimensions assessed showed low results. This is reflected in the fact that over 75% of the surveyed teachers fall into professional profiles with limited knowledge to effectively integrate AI pedagogically, in terms of content, and ethically in their professional practice.

This finding may suggest several implications. First, the pattern of results could be associated with greater exposure in teacher education programs (both initial and continuous) in Chile to technical AI competencies—such as software management, the use of digital resources, or basic programming—compared to less emphasis on pedagogical, didactic, and ethical dimensions. However, this interpretation should be considered with caution due to the sampling limitations and the lack of direct evidence regarding curricula and training plans. This would explain, at least in part, why TK is relatively high compared to other dimensions of the model. In line with this, initial and continuous training policies should be projected—or at the very least updated—to address these specific professional gaps, prioritizing the ethical dimension of AI (which emerges as the most urgent challenge) before continuing to deepen technical skills (Miao & Cukurova, 2024; Miao et al., 2024; Holmes et al., 2022; OECD, 2023).

Second, it is possible that initial and continuous training initiatives are simply insufficient, and that the emphasis on technical knowledge stems from non-formal and informal learning spaces, where teachers self-manage and seek resources on their own, for example through online courses, open platforms, MOOCs, tutorial videos, digital communities, and web-based training. This would not be surprising, considering that AI reached classrooms through students even before teachers had the opportunity to reflect on its use. Therefore, digital self-learning environments may also help explain the high TK scores and should be considered as valid knowledge sources to anchor future professional development initiatives.

In 2021, Chile enacted its “National Artificial Intelligence Policy,” updated in 2024, whose primary aim is to promote the ethical and responsible development and use of AI across society, so that this technology contributes to the country’s new model of development and growth.

An action plan aligned with this public policy was established, structured around three main pillars: enabling factors, development and adoption, and governance and ethics. Ethics, therefore, is embedded in Chile’s foundational proposals. However, these objectives must also be reflected in both initial and continuous teacher training.

As a second key point, the discovery of significant gaps based on gender, age, and educational level highlights the need for differentiated policies that target specific groups within the Chilean teaching population. Training programs on AI should prioritize strengthening the knowledge of women, primary school teachers, and educators over the age of 50, as these groups reported significantly lower levels of knowledge than others.

As a third point, the cluster analysis provided a clearer understanding of the diversity of professional profiles within Chile’s school system. Identifying four clusters reveals that there is no single starting point for AI-related teacher training. While some Chilean teachers appear to have high levels of knowledge across all dimensions, others need to reinforce specific areas such as pedagogical or ethical aspects. Consequently, public policies should adapt and evaluate their training proposals to meet the diverse needs of existing teacher profiles.

Finally, despite the government’s coordinated efforts, AI has yet to be incorporated into Chile’s national teacher curriculum, nor is it present in the main textbooks, plans, or programs used by teachers in their lessons. This suggests that its implementation and research are still in early stages in the country. The risk here is that teacher training in AI remains limited to isolated initiatives, dependent on short courses or workshops with potentially unclear messages and no solid grounding in empirical evidence. To bridge this gap, it is necessary to move toward the progressive integration of AI into curricular artifacts across various subjects and educational levels. Such a policy would help ensure that AI is not only understood as a technological resource, but also as an essential component for the future.

5.4.2. Implications for Future Research

This study highlights the scarcity of empirical research in Latin America on teachers’ knowledge of AI, opening a relevant field for comparative and international studies.

We hope that future research will delve into qualitative or mixed methodologies, not only to measure perceptions but also to analyze actual classroom practices and the impact of AI on student learning.

Moreover, it is necessary to expand research into underexplored contexts such as primary education, rural schools, and hybrid or virtual modalities, as these were underrepresented in this study.

Another pending area is to examine how the trajectories of the different teacher profiles identified through cluster analysis evolve over time.

Finally, a key task is to investigate whether future professional development plans help improve the identified teacher profile configurations and lead to enhanced perceptions of knowledge.

These implications are proposed with the foregoing caveats, as they derive from a non-probability sample.

5.5. Limitations and Future Work

A primary limitation of this study is its non-probability sampling design, which limits the ability to generalize the findings to the entire teaching population in Chile. In this regard, the study is at risk of self-selection bias, as teachers were recruited through institutional emails, seminars, and social media: the teachers who agreed to participate may differ from those who did not respond, which further limits the generalizability of the results. Therefore, we reiterate that the findings should be interpreted with caution, as they reflect descriptive trends within the participating sample.

Additionally, the sample is imbalanced by school type relative to the reference population, which may attenuate effects associated with this variable.

A second limitation concerns the validation of the instrument. Although reliability and convergent validity were adequate, the CFA exhibited a mixed global fit (acceptable CFI, elevated RMSEA) and limited discriminant validity (high inter-factor correlations), which means that comparisons between dimensions and profiles should be interpreted cautiously. Future studies will require improvements to the instrument (item revision, new items, and model re-specification).

Author Contributions

Conceptualization, J.A.; Methodology, J.A., B.Á. and R.A.; Formal Analysis, J.A., B.Á. and R.A.; Investigation, J.A. and B.Á.; Data Curation, J.A.; Writing—Original Draft, J.A.; Writing—Review and Editing, J.A., B.Á. and R.A.; Supervision, B.Á. and R.A.; Project Administration, B.Á. and R.A.; Funding Acquisition, J.A., B.Á. and R.A. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the Faculty of Social Sciences, University of Chile (protocol code 0.67, approved 11 November 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are openly available in Mendeley Data at https://doi.org/10.17632/m52p6kcvxj.1.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:

AI: Artificial Intelligence
AIED: Artificial Intelligence in Education
TPACK: Technological Pedagogical and Content Knowledge
TK: Technological Knowledge
TCK: Technological Content Knowledge
TPK: Technological Pedagogical Knowledge
PCK: Pedagogical Content Knowledge
STEM: Science, Technology, Engineering, and Mathematics

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Figures and Tables

Figure 1 Visual representation of the updated TPACK Framework (Mishra, 2019, p. 2).


Figure 2 Visual representation of the Intelligent-TPACK Framework (Celik, 2023, p. 8).


Figure 3 Dendrogram using Ward Linkage (rescaled distance cluster combine).


Figure 4 Scatter plots of responses in each Intelligent-TPACK dimension where (a) TK; (b) TPK; (c) TCK; (d) TPACK; (e) Ethical Knowledge. Responses are distributed according to teachers’ years of experience in gender panels.


Comparison between the population and the study sample.

Variables Reference Population Sample
N % N %
Gender
Female 46,235 66% 464 65.2%
Male 23,667 34% 237 33.3%
Educational level
Primary 29,205 42% 290 40.9%
Secondary 40,697 58% 419 59.1%
School type
Public 20,787 30% 298 42%
Private 49,115 70% 411 58%
Geography
Urban 67,840 97% 658 92.8%
Rural 2062 3% 51 7.2%
Age (years old)
20–30 6372 9% 100 14.1%
30–40 22,354 32% 241 34.0%
40–50 18,340 26% 196 27.6%
50–60 11,638 17% 102 14.4%
60–70 9449 14% 68 9.6%
70–80 1514 2% 2 0.3%
80+ 235 0.3% 0 0%
Total 69,902 100% 709 100%

Note. Non-probability sample.

Cronbach’s alpha, McDonald’s omega, and interpretation.

Dimension Items Cronbach’s α McDonald’s ω Interpretation
TK 3 0.886 0.891 Very good
TPK 3 0.793 0.805 Acceptable
TCK 3 0.849 0.855 Very good
TPACK 3 0.885 0.888 Very good
Ethics 3 0.882 0.883 Very good

Normality and distribution statistics.

Kolmogorov–Smirnov Shapiro–Wilk
Dimension Z p-Value W p-Value Skewness Kurtosis
TK 0.122 <0.001 0.934 <0.001 −0.352 −0.740
TPK 0.092 <0.001 0.961 <0.001 −0.062 −0.551
TCK 0.092 <0.001 0.948 <0.001 0.016 −0.653
TPACK 0.128 <0.001 0.934 <0.001 0.330 −0.653
Ethics 0.138 <0.001 0.921 <0.001 0.366 −0.526

Descriptive statistics of the scales (N = 709).

Scale Mdn Q1–Q3 Mean SD Skew Kurt 95% CI Mean
TK 2.67 1.33 2.67 0.91 −0.352 −0.740 [2.60–2.73]
TPK 2.33 1.00 2.52 0.81 −0.062 −0.551 [2.46–2.58]
TCK 2.33 1.00 2.45 0.86 0.016 −0.653 [2.38–2.51]
TPACK 2.00 1.33 2.23 0.88 0.330 −0.653 [2.16–2.29]
Ethics 2.00 1.33 2.10 0.84 0.366 −0.526 [2.04–2.16]

Results of Mann–Whitney U non-parametric tests and effect size.

Dimension Mann–Whitney U Z Statistic p-Value Decision r
Gender
TK 45,865.500 −3.706 <0.001 Reject H0 0.14
TPK 48,928.000 −2.492 0.013 Do not reject H0 0.09
TCK 47,597.500 −3.022 0.003 Reject H0 0.11
TPACK 47,549.000 −3.048 0.002 Reject H0 0.12
Ethics 45,786.500 −3.769 <0.001 Reject H0 0.14
School level
TK 76,729.500 6.005 <0.001 Reject H0 0.23
TPK 70,448.500 3.646 <0.001 Reject H0 0.14
TCK 74,602.500 5.210 <0.001 Reject H0 0.20
TPACK 72,717.500 4.512 <0.001 Reject H0 0.17
Ethics 75,727.500 5.677 <0.001 Reject H0 0.21

Note. Effect size r was interpreted following J. Cohen’s (2013) benchmarks: ~0.10 = small effect, ~0.30 = medium effect, ≥0.50 = large effect.
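The r values in the table are consistent with Rosenthal's formula r = |Z| / √N; a minimal sketch, assuming N = 701 valid cases for the gender comparison (464 women plus 237 men):

```python
import math

def mann_whitney_r(z: float, n: int) -> float:
    """Rosenthal's effect size for a Mann-Whitney U test: r = |Z| / sqrt(N)."""
    return abs(z) / math.sqrt(n)

# TK by gender from the table above: Z = -3.706, N = 701.
print(round(mann_whitney_r(-3.706, 701), 2))  # 0.14, as reported
```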

Results of Kruskal–Wallis H non-parametric tests and effect size.

Dimension H (K-W) df p-Value Decision ε2
School type
TK 10.553 4 0.032 Do not reject H0 0.009
TPK 10.534 4 0.032 Do not reject H0 0.008
TCK 14.614 4 0.060 Do not reject H0 0.014
TPACK 4.316 4 0.365 Do not reject H0 0.000
Ethics 5.550 4 0.235 Do not reject H0 0.001
Age
TK 88.969 5 <0.001 Reject H0 0.119
TPK 34.602 5 <0.001 Reject H0 0.042
TCK 78.255 5 <0.001 Reject H0 0.104
TPACK 27.926 5 <0.001 Reject H0 0.033
Ethics 38.705 5 <0.001 Reject H0 0.048
Subject
TK 59.841 16 <0.001 Reject H0 0.063
TPK 50.374 16 <0.001 Reject H0 0.048
TCK 35.987 16 0.003 Reject H0 0.028
TPACK 71.778 16 <0.001 Reject H0 0.079
Ethics 45.154 16 <0.001 Reject H0 0.052

Note. Effect size ε2 was interpreted following the benchmarks by Tomczak and Tomczak (2014): ~0.01 = small, ~0.06 = medium, >0.14 = large.
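The age-related ε² values above are consistent with the eta-squared-style formula (H − k + 1) / (n − k), which is often reported alongside the Tomczak and Tomczak (2014) benchmarks; this is a hedged reconstruction, and small discrepancies in other rows may reflect listwise exclusions:

```python
def kw_effect_size(h: float, k: int, n: int) -> float:
    """Eta-squared-style effect size for Kruskal-Wallis:
    (H - k + 1) / (n - k), with k groups and n observations."""
    return (h - k + 1) / (n - k)

# Age: df = 5, hence k = 6 groups; N = 709.
print(round(kw_effect_size(88.969, 6, 709), 3))  # TK row: 0.119, as reported
print(round(kw_effect_size(34.602, 6, 709), 3))  # TPK row: 0.042, as reported
```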

Results of post hoc comparisons between components.

Comparison Z p-Value r
TK–TPK −6.013 <0.001 0.23
TK–TCK −9.920 <0.001 0.37
TK–TPACK −14.748 <0.001 0.55
TK–Ethics −16.141 <0.001 0.61
TPK–TCK −3.623 <0.001 0.14
TPK–TPACK −12.157 <0.001 0.46
TPK–Ethics −15.121 <0.001 0.57
TCK–TPACK −10.436 <0.001 0.39
TCK–Ethics −13.022 <0.001 0.49
TPACK–Ethics −6.394 <0.001 0.24

Note. Effect size r was interpreted using J. Cohen’s (2013) thresholds: ~0.10 = small, ~0.30 = medium, ≥0.50 = large effect.

Spearman correlations.

Dimension Years of Teacher Experience Age (Years Old)
Spearman’s Rho p-Value Spearman’s Rho p-Value
TK −0.330 <0.001 −0.349 <0.001
TPK −0.218 <0.001 −0.202 <0.001
TCK −0.308 <0.001 −0.321 <0.001
TPACK −0.218 <0.001 −0.189 <0.001
Ethics −0.218 <0.001 −0.219 <0.001

Note. Interpretation of Spearman’s rho followed J. Cohen (2013) guidelines: values around |0.10–0.29| indicate a small effect, |0.30–0.49| a moderate effect, and ≥|0.50| a large effect.
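A minimal sketch of the rank-correlation procedure, on synthetic data rather than the study's:

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic stand-in: a score that decreases monotonically with experience.
experience = np.arange(1, 11)       # years of experience, 1..10
tk_score = 40 - experience ** 2     # strictly decreasing stand-in score

rho, p_value = spearmanr(experience, tk_score)
print(rho)  # rho is -1.0 for a perfectly monotone decreasing relation
```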

Summary of ANOVA analysis for regression models by dimension.

Dimension df Regression df Residual F p-Value
TK 7 693 21.875 <0.001
TPK 7 693 8.372 <0.001
TCK 7 693 15.489 <0.001
TPACK 7 693 8.944 <0.001
Ethics 7 693 7.066 <0.001

Interpretation of adjusted R2.

Dimension R R2 Adjusted R2 Standard Error Durbin-Watson Interp.
TK 0.425 0.181 0.173 0.82738 1.635 Medium
TPK 0.279 0.078 0.069 0.78226 1.652 Small
TCK 0.368 0.135 0.127 0.80630 1.719 Small
TPACK 0.288 0.083 0.074 0.84628 1.612 Small
Ethics 0.315 0.100 0.090 0.80362 1.732 Small

Note. According to J. Cohen (2013), adjusted R2 values can be interpreted statistically at different effect size levels (translating the thresholds of f2 = 0.02, 0.15, and 0.35 into approximate R2 ranges). Adjusted R2 values ≤ 0.02 are considered negligible, >0.02 and <0.13 indicate small effects, ≈0.13 and <0.26 medium effects, and ≥0.26 large effects.
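The R² cut-offs in the note follow from the identity R² = f² / (1 + f²); a quick check of the conversion:

```python
def r2_from_f2(f2: float) -> float:
    """Convert Cohen's f-squared threshold into the equivalent R-squared."""
    return f2 / (1 + f2)

# Cohen's f2 thresholds 0.02 / 0.15 / 0.35 map onto the approximate
# R2 cut-offs 0.02 / 0.13 / 0.26 used in the note above.
print([round(r2_from_f2(f2), 2) for f2 in (0.02, 0.15, 0.35)])  # [0.02, 0.13, 0.26]
```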

Coefficients and significance of predictor variables.

Predictor Standardized β p-Value Tolerance VIF Interp.
TK
Age −0.202 0.002 0.277 3.613 Small-mod.
Years of experience −0.168 0.009 0.288 3.475 Small
Gender −0.142 <0.001 0.974 1.026 Small
Teaching level 0.270 <0.001 0.860 1.163 Small-mod.
School type −0.016 0.656 0.866 1.155 Negligible
Subject −0.001 0.914 0.950 1.052 Negligible
TPK
Age 0.000 0.998 0.277 3.613 Negligible
Years of experience −0.224 0.001 0.288 3.475 Small-mod.
Gender −0.084 0.023 0.974 1.026 Small
Teaching level 0.104 0.008 0.860 1.163 Small
School type −0.021 0.592 0.866 1.155 Negligible
Subject 0.002 0.949 0.950 1.052 Negligible
TCK
Age −0.161 0.017 0.277 3.613 Small
Years of experience −0.141 0.032 0.288 3.475 Small
Gender −0.111 0.002 0.974 1.026 Small
Teaching level 0.116 0.002 0.860 1.163 Small
School type −0.033 0.392 0.866 1.155 Negligible
Subject −0.009 0.795 0.950 1.052 Negligible
TPACK
Age 0.106 0.127 0.277 3.613 Negligible
Years of experience −0.294 <0.001 0.288 3.475 Moderate
Gender −0.105 0.005 0.974 1.026 Small
Teaching level 0.128 0.001 0.860 1.163 Small
School type −0.020 0.602 0.866 1.155 Negligible
Subject 0.062 0.098 0.950 1.052 Negligible
Ethics
Age −0.112 0.104 0.277 3.613 Negligible
Years of experience −0.096 0.155 0.288 3.475 Negligible
Gender −0.139 <0.001 0.974 1.026 Small
Teaching level 0.160 <0.001 0.860 1.163 Small-mod.
School type −0.035 0.367 0.866 1.155 Negligible
Subject 0.056 0.128 0.950 1.052 Negligible


Means (SD) by dimension according to cluster.

Dimension Cluster 1 Cluster 2 Cluster 3 Cluster 4
TK 3.59 (0.47) 2.28 (0.40) 1.41 (0.58) 3.19 (0.39)
TPK 3.52 (0.42) 2.31 (0.40) 1.43 (0.46) 2.70 (0.46)
TCK 3.52 (0.48) 2.21 (0.35) 1.22 (0.34) 2.71 (0.45)
TPACK 3.37 (0.48) 2.18 (0.61) 1.09 (0.21) 2.21 (0.49)
Ethics 3.19 (0.55) 2.16 (0.48) 1.09 (0.24) 1.93 (0.59)
N (%) 151 (21.3%) 203 (28.6%) 139 (19.6%) 216 (30.5%)

Appendix A

Validation Procedures for the Adapted Questionnaire

Preliminary Analysis

Using the responses from all teachers who participated in the pilot study (N = 42), we conducted a preliminary analysis of the items. For each item, we calculated descriptive statistics (mean, standard deviation, median, skewness, and kurtosis), as well as the corrected item-total correlation (CITC) as an index of item discrimination. The results are presented below in Table A1.

Descriptive statistics of items (N = 42).

Item Min Max Mean SD Med Skew Kurt CITC
TK1 1 3 1.95 0.582 2.0 −0.001 0.157 0.664
TK2 1 3 1.98 0.468 2.0 −0.090 2.031 0.699
TK3 1 3 1.88 0.504 2.0 −0.243 0.945 0.616
TK4 1 3 2.00 0.494 2.0 0.000 1.514 0.741
TK5 1 3 2.02 0.468 2.0 0.090 2.031 0.824
TPK1 1 4 2.90 0.484 3.0 −1.626 6.373 0.529
TPK2 2 4 2.90 0.484 3.0 −0.274 1.389 0.561
TPK3 2 4 2.98 0.517 3.0 −0.040 1.078 0.548
TPK4 2 4 2.93 0.407 3.0 −0.582 3.317 0.254
TPK5 2 4 2.90 0.484 3.0 −0.274 1.389 0.529
TPK6 1 4 2.83 0.490 3.0 −1.720 4.655 0.485
TPK7 1 4 2.88 0.453 3.0 −2.187 7.855 0.260
TCK1 2 4 2.98 0.604 3.0 0.008 −0.068 0.673
TCK2 2 4 2.76 0.576 2.0 0.039 −0.286 0.781
TCK3 2 4 2.79 0.565 2.0 −0.026 −0.134 0.765
TCK4 2 4 2.79 0.520 2.0 −0.268 0.098 0.664
TPACK1 1 3 1.64 0.577 1.0 0.204 −0.667 0.612
TPACK2 1 3 1.79 0.645 1.0 0.228 −0.585 0.700
TPACK3 1 3 1.60 0.701 1.0 0.761 −0.578 0.816
TPACK4 1 3 1.76 0.532 1.0 −0.192 −0.127 0.563
TPACK5 1 3 1.79 0.565 1.0 −0.026 −0.134 0.779
TPACK6 1 3 1.60 0.701 1.0 0.761 −0.578 0.744
TPACK7 1 4 1.83 0.696 1.0 0.694 1.079 0.614
ETHIC1 1 4 1.45 0.670 1.0 1.714 3.803 0.420
ETHIC2 1 3 1.21 0.470 1.0 2.154 4.213 0.521
ETHIC3 1 3 1.33 0.612 1.0 1.692 1.837 0.450
ETHIC4 1 3 1.19 0.455 1.0 2.416 5.583 0.575
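The corrected item-total correlation (CITC) shown in Table A1 correlates each item with the sum of the remaining items on its scale; a minimal sketch on a synthetic matrix (not the pilot data):

```python
import numpy as np

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """CITC: correlation of each item with the total score of the
    remaining items (the item is excluded from its own total)."""
    total = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], total - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

# Synthetic 6-respondent x 3-item matrix (NOT the pilot data).
demo = np.array([[1, 1, 2], [2, 2, 2], [2, 3, 3], [3, 3, 3], [3, 4, 4], [4, 4, 3]])
print(corrected_item_total(demo))  # one CITC per item, each in [-1, 1]
```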

Additionally, Table A2 summarizes the main reliability indices for each scale, including Cronbach’s alpha coefficient, McDonald’s total omega (estimated under the assumption of unidimensionality using the closed-form solution by Hancock and An (2020), and implemented in SPSS via the OMEGA macro developed by Hayes and Coutts (2020)), along with the range of corrected item-total correlations and inter-item correlations.

Reliability of scales.

Item k (Items) Cronbach’s α McDonald’s ω CITC Inter-Item r
TK 5 0.874 0.876 0.616–0.824 0.48–0.74
TPK 7 0.740 0.683 0.254–0.561 −0.06–0.65
TCK 4 0.868 0.871 0.664–0.781 0.53–0.74
TPACK 7 0.891 0.895 0.563–0.816 0.35–0.75
Ethics 4 0.691 0.685 0.420–0.575 0.30–0.60

Overall, the results indicate adequate levels of internal consistency across most scales, with α and ω values close to or above 0.70 and corrected item-total correlations within acceptable ranges.
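As a hedged sketch on synthetic data, Cronbach's alpha for a scale can be computed directly from the item-variance identity:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Small synthetic 4 x 3 matrix of highly consistent responses (not pilot data).
demo = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 3]])
print(round(cronbach_alpha(demo), 2))  # 0.98
```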

Verification of Factorability

Before conducting the PCA, we estimated the Pearson correlation matrix using listwise deletion (Likert-type scale treated as an interval approximation in this pilot). The determinant was 4.29 × 10−10, reflecting high intercorrelations without perfect collinearity. Sampling adequacy resulted in a KMO = 0.635, and Bartlett’s test of sphericity was significant (χ2(351) = 672.244; p < 0.001), justifying the factor extraction procedure (Kaiser, 1974; Pallant, 2020).
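The reported Bartlett statistic can be recovered, up to rounding of the determinant, from the standard formula; a sketch:

```python
import math

def bartlett_sphericity(n: int, p: int, det_r: float):
    """Bartlett's test of sphericity from the correlation-matrix determinant:
    chi2 = -(n - 1 - (2p + 5) / 6) * ln|R|, with df = p(p - 1) / 2."""
    chi2 = -(n - 1 - (2 * p + 5) / 6) * math.log(det_r)
    df = p * (p - 1) // 2
    return chi2, df

# Pilot values: n = 42 teachers, p = 27 items, |R| = 4.29e-10.
chi2, df = bartlett_sphericity(42, 27, 4.29e-10)
print(df, round(chi2, 2))  # df = 351; chi2 close to the reported 672.244
```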

Principal Components Analysis (PCA)

Following the recommendations of Hair et al. (2019), we performed a Principal Component Analysis (PCA) on the 27 items, using orthogonal Varimax rotation (Kaiser normalization). The solution converged in 19 iterations. A total of seven components with eigenvalues > 1 were retained, as shown in the scree plot in Figure A1.

Figure A1 Scree plot.

View Image -

The components explained 74.14% of the variance: C1 16.44%, C2 13.90%, C3 11.33%, C4 9.85%, C5 8.51%, C6 7.12%, C7 6.99% (see Table A3).

Summary of the global PCA and Varimax.

Element Result
Input matrix Pearson correlations (listwise deletion)
Determinant 4.29 × 10−10
Sampling adequacy KMO = 0.635
Sphericity (Bartlett) χ2(351) = 672.244, p < 0.001
Extraction PCA
Rotation Varimax (orthogonal); Kaiser normalization
Retention criteria Eigenvalue > 1 + Scree
Convergence 19 iterations
# of factors retained 7
Explained variance C1 16.44%, C2 13.90%, C3 11.33%, C4 9.85%, C5 8.51%, C6 7.12%, C7 6.99%

The rotated structure was interpretable and consistent with the subscales (see Table A4): a dominant TPACK component; one TK component; a TCK block (with TCK4 loading on a second component of the same domain); TPK divided into two components; and one Ethics component. Some specific cross-loadings were observed (e.g., TPK2, TPACK7, E1), which were resolved during the item reduction process.

Main loadings by component (Varimax rotation; |λ| ≥ 0.50 shown).

Component Highest-Loading Items (Main)
C1–TPACK TPACK5 (0.888), TPACK3 (0.835), TPACK6 (0.754), TPACK2 (0.752), TPACK4 (0.686), TPACK1 (0.632), TPACK7 (0.552)
C2–TK TK5 (0.878), TK4 (0.840), TK2 (0.746), TK3 (0.741), TK1 (0.657)
C3–TCK (core) TCK3 (0.862), TCK1 (0.827), TCK2 (0.802)
C4–TCK (extension) TCK4 (0.620)
C5–TPK (group 1) TPK1 (0.827), TPK5 (0.813), TPK6 (0.764), TPK2 (0.616)
C6–Ethics E4 (0.750), E2 (0.726), E3 (0.707), E1 (0.502)
C7–TPK (group 2) TPK4 (0.813), TPK7 (0.785), TPK3 (0.711)

In the Chilean context, teachers usually work long hours and have limited time to participate in research studies, making it difficult to administer very lengthy questionnaires. Therefore, considering this limitation and aiming to increase teachers’ willingness to participate and improve response rates in the Metropolitan Region, we simplified the instrument by applying two baseline criteria.

First, we retained only those items with communalities greater than or equal to 0.65 (see Table A5), ensuring that each variable contributed meaningfully to the explained variance. Second, we prioritized items that participants in the pilot study reported understanding more easily and for which they expressed no doubts or comments, while maintaining greater conceptual coherence with the Intelligent-TPACK framework.

Communalities of all items (Extraction: PCA).

Item Extraction Selected
TK1 0.643 No
TK2 0.675 No
TK3 0.650 Yes
TK4 0.818 Yes
TK5 0.835 Yes
TPK1 0.773 Yes
TPK2 0.801 No
TPK3 0.747 No
TPK4 0.729 Yes
TPK5 0.832 No
TPK6 0.708 Yes
TPK7 0.810 No
TCK1 0.764 No
TCK2 0.812 Yes
TCK3 0.803 Yes
TCK4 0.705 Yes
TPACK1 0.550 No
TPACK2 0.736 No
TPACK3 0.768 Yes
TPACK4 0.567 No
TPACK5 0.816 Yes
TPACK6 0.743 Yes
TPACK7 0.815 No
E1 0.761 Yes
E2 0.756 No
E3 0.680 Yes
E4 0.718 Yes
Total 27 items 15 items

After this selection procedure, the questionnaire was reduced from 27 to 15 items (see Appendix C), maintaining a balanced distribution across the theoretical dimensions of the Intelligent-TPACK model. Specifically, the TK dimension retained items TK3 (0.650), TK4 (0.818), and TK5 (0.835); TPK retained TPK1 (0.773), TPK4 (0.729), and TPK6 (0.708); TCK retained TCK2 (0.812), TCK3 (0.803), and TCK4 (0.705); TPACK included TPACK3 (0.768), TPACK5 (0.816), and TPACK6 (0.743); and the Ethics dimension retained items E1 (0.761), E3 (0.680), and E4 (0.718).

It is worth noting that, during the reduction from 27 to 15 items, whenever cross-loadings ≥ 0.30 emerged, we prioritized the items with higher communalities and stronger conceptual coherence within each dimension, discarding semantically redundant items.

Confirmatory Factor Analysis

Following the PCA, a Confirmatory Factor Analysis (CFA) was conducted in AMOS 26 using our main sample (N = 709). As shown in Figure A2, we applied a five-factor model with three items per factor (TK, TPK, TCK, TPACK, and Ethics).

Figure A2 Confirmatory Factor Analysis Model for the Five I-TPACK Factors.

View Image -

The CFA revealed high and statistically significant factor loadings (all greater than 0.50; p < 0.001), supporting the convergent validity of the items.

However, the global fit indices yielded mixed results and revealed certain methodological limitations that must be taken into account when interpreting this study’s findings:

χ2(80) = 806.418, p < 0.001; χ2/df = 10.08; CFI = 0.921; TLI = 0.896; NFI = 0.913; GFI = 0.846; AGFI = 0.769; RMSEA = 0.113 (90% CI [0.106–0.120]); SRMR = 0.042.

While CFI, NFI, and SRMR fall within acceptable ranges, the χ2/df ratio and RMSEA indicate a poor model fit.
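Two of these indices can be checked directly from the reported values; a sketch assuming N = 709:

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))

print(round(806.418 / 80, 2))              # 10.08: the chi2/df ratio
print(round(rmsea(806.418, 80, 709), 3))   # 0.113: matches the reported RMSEA
```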

Lastly, Table A6 summarizes the results of convergent and discriminant validity. All factor loadings were statistically significant (λ > 0.50, p < 0.001), and the values for composite reliability (CR) exceeded the 0.70 threshold, with the average variance extracted (AVE) above 0.50.
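CR and AVE follow directly from the standardized loadings; a hedged sketch using illustrative loadings, not the study's:

```python
def composite_reliability(loadings):
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Illustrative three-item factor with uniform loadings of 0.80.
lams = [0.80, 0.80, 0.80]
print(round(composite_reliability(lams), 2))       # 0.84
print(round(average_variance_extracted(lams), 2))  # 0.64
```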

Summary of CFA results (standardized loadings, CR, AVE).

Construct Std. Loadings (Min–Max) CR AVE
TK 0.79–0.93 0.891 0.733
TPK 0.63–0.83 0.802 0.577
TCK 0.77–0.88 0.856 0.666
TPACK 0.83–0.88 0.888 0.726
Ethics 0.80–0.87 0.882 0.714

As for discriminant validity, high correlations were observed between some pairs (e.g., TCK–TPK = 0.947; TPK–TPACK = 0.922; Ethics–TPACK = 0.915), indicating limited evidence of discriminant validity.

Appendix B

Complementary Data

Median (IQR) of Intelligent-TPACK Model Dimensions According to Sociodemographic and Professional Variables.

Category TK TPK TCK TPACK Ethics
Male 2.67 (1.00) 2.67 (1.00) 2.67 (1.00) 2.33 (1.00) 2.33 (1.00)
Female 2.67 (1.00) 2.67 (1.00) 2.67 (1.00) 2.33 (1.00) 2.33 (1.00)
Primary 2.33 (1.00) 2.33 (1.00) 2.33 (1.00) 2.00 (1.00) 2.00 (1.33)
Secondary 3.00 (1.00) 2.67 (1.33) 2.67 (1.00) 2.33 (1.00) 2.33 (1.33)
Public 1 2.33 (1.00) 2.33 (1.00) 2.33 (1.00) 2.00 (1.00) 2.00 (1.00)
Public 2 2.67 (1.00) 2.67 (1.00) 2.67 (1.00) 2.33 (1.00) 2.33 (1.00)
Private 1 2.67 (1.00) 2.67 (1.00) 2.67 (1.00) 2.33 (1.00) 2.33 (1.00)
Private 2 2.67 (1.00) 2.67 (1.00) 2.67 (1.00) 2.33 (1.00) 2.33 (1.00)
Private 3 3.00 (1.00) 3.00 (1.00) 3.00 (1.00) 2.67 (1.00) 2.67 (1.00)
20–30 years old 2.33 (1.00) 2.33 (1.00) 2.33 (1.00) 2.00 (1.00) 2.00 (1.00)
30–40 years old 3.00 (1.00) 2.67 (1.33) 2.67 (1.00) 2.33 (1.00) 2.33 (1.00)
40–50 years old 3.00 (1.00) 2.67 (1.00) 2.67 (1.00) 2.33 (1.00) 2.33 (1.00)
50–60 years old 3.00 (1.00) 3.00 (1.00) 3.00 (1.00) 2.67 (1.00) 2.67 (1.00)
60–70 years old 2.33 (1.00) 2.33 (1.00) 2.33 (1.00) 2.00 (1.00) 2.00 (1.00)
70–80 years old 2.50 (1.00) 2.83 (1.00) 2.33 (1.00) 2.00 (1.00) 2.33 (1.00)
Visual arts 3.17 (0.42) 2.67 (0.75) 3.00 (0.83) 2.50 (0.92) 2.00 (0.42)
Biology 3.33 (1.34) 2.67 (1.34) 2.67 (1.50) 2.33 (1.33) 2.33 (1.00)
Natural sciences 2.33 (1.58) 2.33 (1.00) 2.50 (1.25) 2.00 (1.17) 2.00 (1.33)
Science for Citizenship 4.00 (1.67) 4.00 (1.50) 4.00 (2.00) 3.67 (2.00) 2.67 (1.83)
Physical Education 2.83 (1.00) 2.67 (1.33) 2.50 (1.25) 2.33 (1.17) 2.50 (1.25)
Philosophy 3.33 (0.58) 2.67 (0.67) 3.00 (1.33) 2.33 (1.00) 2.67 (1.00)
Physics 3.00 (1.33) 3.00 (1.33) 2.67 (1.00) 2.67 (2.00) 2.33 (1.33)
History 2.67 (1.33) 2.33 (1.25) 2.33 (1.33) 2.00 (1.33) 2.00 (1.83)
English 3.17 (1.33) 2.67 (1.00) 2.83 (1.33) 2.33 (1.00) 2.00 (1.00)
Indigenous Culture 1.33 (2.42) 1.50 (2.33) 1.50 (2.00) 1.33 (2.42) 1.50 (2.50)
Communication 2.33 (2.33) 2.33 (1.33) 2.33 (1.67) 2.00 (1.67) 2.00 (1.33)
Mathematics 2.67 (1.00) 2.67 (0.67) 2.33 (1.00) 2.00 (1.00) 2.00 (1.67)
Music 3.33 (0.50) 3.00 (0.50) 2.67 (0.33) 2.33 (0.50) 2.00 (0.67)
Counseling 2.33 (1.67) 2.00 (1.00) 2.33 (1.33) 2.00 (1.67) 2.00 (1.00)
Chemistry 3.33 (1.00) 2.33 (0.83) 2.67 (1.00) 2.00 (0.67) 2.00 (1.00)
Religion 2.67 (1.00) 2.33 (1.00) 2.33 (0.67) 2.33 (1.50) 2.67 (1.50)
Technology 2.67 (2.00) 3.00 (1.00) 2.33 (1.83) 4.00 (1.33) 2.67 (0.67)

Note. Public 1 = Municipal; Public 2 = SLEP; Private 1 = Subsidized; Private 2 = Delegated Administration; Private 3 = Fully Private.

Means (SD) of the Intelligent-TPACK Model Dimensions According to Sociodemographic and Professional Variables.

Variable TK TPK TCK TPACK ETHIC
Male 2.85 (0.84) 2.62 (0.83) 2.58 (0.88) 2.38 (0.90) 2.28 (0.91)
Female 2.57 (0.93) 2.46 (0.80) 2.37 (0.85) 2.15 (0.86) 2.01 (0.79)
Primary 2.42 (0.92) 2.37 (0.78) 2.24 (0.81) 2.06 (0.87) 1.89 (0.80)
Secondary 2.84 (0.86) 2.62 (0.81) 2.59 (0.87) 2.34 (0.86) 2.24 (0.84)
Public 1 2.70 (0.84) 2.57 (0.79) 2.50 (0.81) 2.24 (0.83) 2.10 (0.82)
Public 2 2.63 (0.91) 2.45 (0.83) 2.41 (0.93) 2.28 (0.89) 2.22 (0.90)
Private 1 2.57 (0.94) 2.41 (0.83) 2.34 (0.89) 2.17 (0.92) 2.01 (0.85)
Private 2 2.53 (1.08) 2.56 (0.67) 2.27 (0.90) 2.04 (0.79) 2.12 (0.88)
Private 3 2.90 (0.85) 2.67 (0.80) 2.69 (0.79) 2.36 (0.88) 2.21 (0.82)
20–30 3.05 (0.73) 2.72 (0.82) 2.80 (0.84) 2.39 (0.91) 2.29 (0.87)
30–40 2.95 (0.78) 2.70 (0.75) 2.68 (0.77) 2.38 (0.79) 2.27 (0.79)
40–50 2.53 (0.94) 2.36 (0.86) 2.29 (0.87) 2.12 (0.82) 2.01 (0.83)
50–60 2.32 (0.94) 2.33 (0.73) 2.23 (0.87) 2.01 (0.89) 1.95 (0.86)
60–70 2.02 (0.82) 2.26 (0.75) 1.91 (0.72) 2.10 (1.12) 1.70 (0.81)
70–80 2.50 (0.24) 2.83 (0.71) 2.33 (0.47) 2.00 (0.00) 2.33 (0.47)
Visual arts 3.22 (0.27) 2.72 (0.44) 2.89 (0.50) 2.56 (0.58) 1.94 (0.53)
Biology 3.14 (0.78) 2.86 (0.85) 2.85 (0.82) 2.55 (0.85) 2.35 (0.79)
Natural sciences 2.58 (0.98) 2.46 (0.84) 2.45 (0.86) 2.12 (0.89) 2.00 (0.88)
Science for Citizenship 3.11 (1.07) 3.00 (1.26) 2.94 (1.32) 2.83 (1.35) 2.33 (1.17)
Physical Education 2.71 (0.87) 2.69 (0.79) 2.59 (0.75) 2.51 (0.85) 2.43 (0.79)
Philosophy 3.13 (0.76) 2.67 (0.83) 2.80 (0.92) 2.30 (0.90) 2.47 (0.67)
Physics 2.97 (0.84) 2.71 (0.87) 2.64 (0.96) 2.30 (0.93) 2.28 (0.87)
History 2.57 (0.93) 2.33 (0.85) 2.27 (0.90) 2.15 (0.87) 2.08 (0.90)
English 3.04 (0.78) 2.79 (0.68) 2.70 (0.75) 2.42 (0.82) 2.15 (0.73)
Indigenous Culture 1.92 (1.42) 2.00 (1.36) 2.00 (1.41) 1.92 (1.42) 2.00 (1.41)
Communication 2.35 (1.01) 2.35 (0.86) 2.26 (0.93) 1.98 (0.83) 1.94 (0.83)
Mathematics 2.80 (0.71) 2.61 (0.67) 2.52 (0.72) 2.17 (0.75) 2.03 (0.83)
Music 2.96 (0.56) 2.73 (0.49) 2.57 (0.56) 2.51 (0.57) 2.24 (0.47)
Counseling 2.23 (0.85) 2.01 (0.71) 2.04 (0.75) 2.00 (0.83) 1.89 (0.78)
Chemistry 3.02 (0.91) 2.46 (0.75) 2.48 (0.92) 2.13 (0.79) 2.17 (0.83)
Religion 2.67 (0.58) 2.52 (0.58) 2.67 (0.60) 2.48 (0.82) 2.30 (0.84)
Technology 2.79 (0.88) 2.92 (0.77) 2.69 (0.90) 3.26 (0.92) 2.75 (0.77)

Note. Public 1 = Municipal; Public 2 = SLEP; Private 1 = Subsidized; Private 2 = Delegated Administration; Private 3 = Fully Private.

Appendix C

Adapted Intelligent-TPACK Questionnaire (Spanish Version)

AI-TK
TK3 Sé cómo iniciar una tarea con herramientas de IA mediante texto o voz. [I know how to start a task with AI tools using text or voice.]
TK4 Tengo conocimientos suficientes para usar varias herramientas de IA. [I have sufficient knowledge to use several AI tools.]
TK5 Estoy familiarizado con las herramientas de IA y sus capacidades técnicas. [I am familiar with AI tools and their technical capabilities.]
AI-TPK
TPK1 Puedo comprender la contribución pedagógica de las herramientas de IA en mi campo de enseñanza. [I can understand the pedagogical contribution of AI tools in my teaching field.]
TPK4 Sé cómo usar herramientas de IA para monitorear el aprendizaje de mis estudiantes. [I know how to use AI tools to monitor my students' learning.]
TPK6 Puedo comprender las notificaciones de herramientas de IA para apoyar el aprendizaje de mis estudiantes. [I can understand notifications from AI tools to support my students' learning.]
AI-TCK
TCK2 Conozco diversas herramientas de IA que son utilizadas por profesionales de mi asignatura. [I know various AI tools used by professionals in my subject area.]
TCK3 Puedo usar herramientas de IA para comprender mejor los contenidos de la asignatura. [I can use AI tools to better understand the subject content.]
TCK4 Sé cómo usar herramientas de IA específicas para mi asignatura. [I know how to use AI tools specific to my subject.]
AI-TPACK
TPACK3 En la enseñanza de mi disciplina, sé cómo utilizar diferentes herramientas de IA para ofrecer retroalimentación en tiempo real. [In teaching my discipline, I know how to use different AI tools to provide real-time feedback.]
TPACK5 Puedo impartir lecciones que combinen de manera adecuada el contenido de enseñanza, las herramientas de IA y las estrategias didácticas. [I can deliver lessons that appropriately combine teaching content, AI tools, and didactic strategies.]
TPACK6 Puedo asumir un rol de liderazgo entre mis colegas en la integración de herramientas de IA en alguna asignatura. [I can take a leadership role among my colleagues in integrating AI tools into a subject.]
AI-ETHIC
E1 Puedo evaluar en qué medida las herramientas de IA consideran las diferencias individuales de mis estudiantes durante el proceso de enseñanza (por ejemplo, sexo, género, nivel socioeconómico, etc.). [I can evaluate the extent to which AI tools take into account my students' individual differences during the teaching process (e.g., sex, gender, socioeconomic level, etc.).]
E3 Puedo comprender la justificación de cualquier decisión tomada por una herramienta basada en IA. [I can understand the rationale behind any decision made by an AI-based tool.]
E4 Puedo identificar quiénes son los desarrolladores responsables en el diseño y la toma de decisiones de las herramientas basadas en IA. [I can identify the developers responsible for the design and decision-making of AI-based tools.]

References

Adel, A.; Ahsan, A.; Davison, C. ChatGPT promises and challenges in education: Computational and ethical perspectives. Education Sciences; 2024; 14, 8 814. [DOI: https://dx.doi.org/10.3390/educsci14080814]

Alé, J.; Arancibia, M. L. Emerging technology-based motivational strategies: A systematic review with meta-analysis. Education Sciences; 2025; 15, 2 197. [DOI: https://dx.doi.org/10.3390/educsci15020197]

Alé, J.; Ávalos, B.; Araya, R. Scientific practices for understanding, applying and creating with artificial intelligence in K-12 education: A scoping review. Review of Education; 2025; 13, 2 e70098. [DOI: https://dx.doi.org/10.1002/rev3.70098]

Almusharraf, N.; Alotaibi, H. An error-analysis study from an EFL writing context: Human and automated essay scoring approaches. Technology, Knowledge and Learning; 2022; 28, 3 pp. 1015-1031. [DOI: https://dx.doi.org/10.1007/s10758-022-09592-z]

Anwar, A.; Rehman, I. U.; Nasralla, M. M.; Khattak, S. B. A.; Khilji, N. Emotions matter: A systematic review and meta-analysis of the detection and classification of students’ emotions in stem during online learning. Education Sciences; 2023; 13, 9 914. [DOI: https://dx.doi.org/10.3390/educsci13090914]

Berryhill, J.; Kok Heang, K.; Clogher, R.; McBride, K. Hello, world: Artificial intelligence and its use in the public sector. OECD working papers on public governance; No. 36 OECD Publishing: 2019; [DOI: https://dx.doi.org/10.1787/726fd39d-en]

Bulathwela, S.; Pérez-Ortiz, M.; Holloway, C.; Cukurova, M.; Shawe-Taylor, J. Artificial intelligence alone will not democratise education: On educational inequality, techno-solutionism and inclusive tools. Sustainability; 2024; 16, 2 781. [DOI: https://dx.doi.org/10.3390/su16020781]

Cai, Z.; Fan, X.; Du, J. Gender and attitudes toward technology use: A meta-analysis. Computers & Education; 2016; 105, pp. 1-13. [DOI: https://dx.doi.org/10.1016/j.compedu.2016.11.003]

Castro, A.; Díaz, B.; Aguilera, C.; Prat, M.; Chávez-Herting, D. Identifying rural elementary teachers’ perception challenges and opportunities in integrating artificial intelligence in teaching practices. Sustainability; 2025; 17, 6 2748. [DOI: https://dx.doi.org/10.3390/su17062748]

Cavalcanti, A. P.; Barbosa, A.; Carvalho, R.; Freitas, F.; Tsai, Y.; Gašević, D.; Mello, R. F. Automatic feedback in online learning environments: A systematic literature review. Computers and Education: Artificial Intelligence; 2021; 2, 100027. [DOI: https://dx.doi.org/10.1016/j.caeai.2021.100027]

Cazzaniga, M.; Jaumotte, F.; Li, L.; Melina, G.; Panton, A. J.; Pizzinelli, C.; Rockall, E. J.; Tavares, M. M. Gen-AI: Artificial intelligence and the future of work. IMF Staff Discussion Note; 2024; 2024, 1 1. [DOI: https://dx.doi.org/10.5089/9798400262548.006]

Celik, I. Towards Intelligent-TPACK: An empirical study on teachers’ professional knowledge to ethically integrate artificial intelligence (AI)-based tools into education. Computers in Human Behavior; 2023; 138, 107468. [DOI: https://dx.doi.org/10.1016/j.chb.2022.107468]

Celik, I.; Dogan, S. Intelligent-TPACK for AI-assisted literacy instruction. Reimagining literacy in the age of AI; Chapman and Hall/CRC: 2025; pp. 92-112. [DOI: https://dx.doi.org/10.1201/9781003510635-8]

Chiu, T.; Chai, C. Sustainable curriculum planning for artificial intelligence education: A self-determination theory perspective. Sustainability; 2020; 12, 5568. [DOI: https://dx.doi.org/10.3390/su12145568]

Cohen, J. Statistical power analysis for the behavioral sciences; Routledge: 2013; [DOI: https://dx.doi.org/10.4324/9780203771587]

Cohen, L.; Manion, L.; Morrison, K. Research methods in education; 8th ed. Routledge: 2018; [DOI: https://dx.doi.org/10.4324/9781315456539]

Cowan, P.; Farrell, R. Virtual reality as the catalyst for a novel partnership model in initial teacher education: ITE subject methods tutors’ perspectives on the island of Ireland. Education Sciences; 2023; 13, 3 228. [DOI: https://dx.doi.org/10.3390/educsci13030228]

Cukurova, M.; Kralj, L.; Hertz, B.; Saltidou, E. Professional development for teachers in the age of AI; European Schoolnet: 2024; Available online: https://discovery.ucl.ac.uk/id/eprint/10186881 (accessed on 22 September 2025).

Diao, Y.; Li, Z.; Zhou, J.; Gao, W.; Gong, X. A meta-analysis of college students’ intention to use generative artificial intelligence. arXiv; 2024; arXiv:2409.06712. [DOI: https://dx.doi.org/10.48550/arxiv.2409.06712]

Dimitriadou, E.; Lanitis, A. A critical evaluation, challenges, and future perspectives of using artificial intelligence and emerging technologies in smart classrooms. Smart Learning Environments; 2023; 10, 1 12. [DOI: https://dx.doi.org/10.1186/s40561-023-00231-3]

Dogan, M. E.; Dogan, T. G.; Bozkurt, A. The use of artificial intelligence (AI) in online learning and distance education processes: A systematic review of empirical studies. Applied Sciences; 2023; 13, 5 3056. [DOI: https://dx.doi.org/10.3390/app13053056]

Dogan, S.; Nalbantoglu, U. Y.; Celik, I.; Dogan, N. A. Artificial intelligence professional development: A systematic review of TPACK, designs, and effects for teacher learning. Professional Development in Education; 2025; 51, pp. 519-546. [DOI: https://dx.doi.org/10.1080/19415257.2025.2454457]

Edwards, C.; Edwards, A.; Spence, P. R.; Lin, X. I, teacher: Using artificial intelligence (AI) and social robots in communication and instruction. Communication Education; 2018; 67, 4 pp. 473-480. [DOI: https://dx.doi.org/10.1080/03634523.2018.1502459]

European Commission. High-level expert group on artificial intelligence. The assessment list for trustworthy artificial intelligence (ALTAI) for self-assessment; 2020; Available online: https://data.europa.eu/doi/10.2759/002360 (accessed on 22 September 2025).

Field, A. Discovering statistics using IBM SPSS statistics; 6th ed. SAGE Publications Ltd: 2024.

George, D.; Mallery, P. IBM SPSS Statistics 27 step by step: A simple guide and reference; 17th ed. Routledge: 2021; [DOI: https://dx.doi.org/10.4324/9781003205333]

Grassini, S. Shaping the future of education: Exploring the potential and consequences of AI and ChatGPT in educational settings. Education Sciences; 2023; 13, 7 692. [DOI: https://dx.doi.org/10.3390/educsci13070692]

Gregorio, T. A. D.; Alieto, E. O.; Natividad, E. R.; Tanpoco, M. R. Are preservice teachers “totally PACKaged”? A quantitative study of pre-service teachers’ knowledge and skills to ethically integrate artificial intelligence (AI)-based tools into education. Lecture notes in networks and systems; Springer: 2024; pp. 45-55. [DOI: https://dx.doi.org/10.1007/978-3-031-68660-3_5]

Hair, J. F.; Black, W. C.; Babin, B. J.; Anderson, R. E. Multivariate data analysis; 8th ed. Cengage: 2019.

Hancock, G. R.; An, J. A closed-form alternative for estimating ω reliability under unidimensionality. Measurement: Interdisciplinary Research and Perspectives; 2020; 18, 1 pp. 1-14. [DOI: https://dx.doi.org/10.1080/15366367.2019.1656049]

Hayes, A. F.; Coutts, J. J. Use omega rather than Cronbach’s alpha for estimating reliability. But…. Communication Methods and Measures; 2020; 14, 1 pp. 1-24. [DOI: https://dx.doi.org/10.1080/19312458.2020.1718629]

Holmes, W. The unintended consequences of artificial intelligence and education; Education International: 2023.

Holmes, W.; Porayska-Pomsta, K. The ethics of artificial intelligence in education: Practices, challenges, and debates; Routledge: 2023; [DOI: https://dx.doi.org/10.4324/9780429329067]

Holmes, W.; Porayska-Pomsta, K.; Holstein, K.; Sutherland, E.; Baker, T.; Shum, S. B.; Santos, O. C.; Rodrigo, M. T.; Cukurova, M.; Bittencourt, I. I.; Koedinger, K. R. Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education; 2022; 32, 3 pp. 504-526. [DOI: https://dx.doi.org/10.1007/s40593-021-00239-1]

Hsu, C.; Liang, J.; Chai, C.; Tsai, C. Exploring preschool teachers’ technological pedagogical content knowledge of educational games. Journal of Educational Computing Research; 2013; 49, 4 pp. 461-479. [DOI: https://dx.doi.org/10.2190/ec.49.4.c]

Joo, Y. J.; Park, S.; Lim, E. Factors influencing preservice teachers’ intention to use technology: TPACK, teacher self-efficacy, and technology acceptance model. Journal of Educational Technology & Society; 2018; 21, 3 pp. 48-59.

Kadluba, A.; Strohmaier, A.; Schons, C.; Obersteiner, A. How much C is in TPACK? A systematic review on the assessment of TPACK in mathematics. Educational Studies in Mathematics; 2024; 118, pp. 169-199. [DOI: https://dx.doi.org/10.1007/s10649-024-10357-x]

Kaiser, H. F. An index of factorial simplicity. Psychometrika; 1974; 39, 1 pp. 31-36. [DOI: https://dx.doi.org/10.1007/BF02291575]

Karatas, F.; Atac, B. A. When TPACK meets artificial intelligence: Analyzing TPACK and AI-TPACK components through structural equation modelling. Education and Information Technologies; 2025; 30, pp. 8979-9004. [DOI: https://dx.doi.org/10.1007/s10639-024-13164-2]

Kaufman, J. H.; Woo, A.; Eagan, J.; Lee, S.; Kassan, E. B. Uneven adoption of artificial intelligence tools among U.S. teachers and principals in the 2023–2024 school year; RAND Corporation: 2025; [DOI: https://dx.doi.org/10.7249/RRA134-25]

Kim, S.; Jang, Y.; Choi, S.; Kim, W.; Jung, H.; Kim, S.; Kim, H. Correction to: Analyzing teacher competency with TPACK for K-12 AI education. KI—Künstliche Intelligenz; 2022; 36, 2 187. [DOI: https://dx.doi.org/10.1007/s13218-022-00770-w]

Kitto, K.; Knight, S. Practical ethics for building learning analytics. British Journal of Educational Technology; 2019; 50, 6 pp. 2855-2870. [DOI: https://dx.doi.org/10.1111/bjet.12868]

Koehler, M. J.; Mishra, P. AACTE Committee on Innovation and Technology. Introduction to TPACK. Handbook of technological pedagogical content knowledge (TPCK) for educators; Routledge: 2008; Vol. 1, pp. 3-29.

Krug, M.; Thoms, L.; Huwer, J. Augmented reality in the science classroom—Implementing pre-service teacher training in the competency area of simulation and modeling according to the DiKoLAN framework. Education Sciences; 2023; 13, 10 1016. [DOI: https://dx.doi.org/10.3390/educsci13101016]

Kuo, Y.; Kuo, Y. An exploratory study of pre-service teachers’ perceptions of technological pedagogical content knowledge of digital games. Research and Practice in Technology Enhanced Learning; 2024; 19, 008. [DOI: https://dx.doi.org/10.58459/rptel.2024.19008]

Labadze, L.; Grigolia, M.; Machaidze, L. Role of AI chatbots in education: Systematic literature review. International Journal of Educational Technology in Higher Education; 2023; 20, 1 56. [DOI: https://dx.doi.org/10.1186/s41239-023-00426-1]

Lo, C. K. What is the impact of ChatGPT on education? A rapid review of the literature. Education Sciences; 2023; 13, 4 410. [DOI: https://dx.doi.org/10.3390/educsci13040410]

Lorenz, P.; Perset, K.; Berryhill, J. Initial policy considerations for generative artificial intelligence; OECD Artificial Intelligence Papers, 1 OECD Publishing: 2023; [DOI: https://dx.doi.org/10.1787/fae2d1e6-en]

Luckin, R.; George, K.; Cukurova, M. AI for school teachers; Routledge: 2022; [DOI: https://dx.doi.org/10.1201/9781003193173]

Maslej, N.; Fattorini, L.; Perrault, R.; Parli, V.; Reuel, A.; Brynjolfsson, E.; Etchemendy, J.; Ligett, K.; Lyons, T.; Manyika, J.; Niebles, J. C.; Shoham, Y.; Wald, R.; Clark, J.; Lyon, K. Artificial intelligence index report 2024; 7th ed. Stanford University, Human-Centered Artificial Intelligence: 2024; Available online: https://hai-production.s3.amazonaws.com/files/hai_ai-index-report-2024-smaller2.pdf (accessed on 22 September 2025).

Miao, F.; Cukurova, M. AI competency framework for teachers; UNESCO: 2024; [DOI: https://dx.doi.org/10.54675/zjte2084]

Miao, F.; Hinostroza, J. E.; Lee, M.; Isaacs, S.; Orr, D.; Senne, F.; Martinez, A.-L.; Song, K.-S.; Uvarov, A.; Holmes, W.; Vergel de Dios, B. Guidelines for ICT in education policies and masterplans (ED-2021/WS/34); UNESCO: 2022; [DOI: https://dx.doi.org/10.54675/UXRW9380]

Miao, F.; Holmes, W.; Ronghuai, H.; Hui, Z. AI and education: Guidance for policy-makers; UNESCO: 2021; [DOI: https://dx.doi.org/10.54675/pcsp7350]

Miao, F.; Holmes, W. Guidance for generative AI in education and research; UNESCO: 2023; [DOI: https://dx.doi.org/10.54675/EWZM9535]

Miao, F.; Shiohira, K.; Lao, N. AI competency framework for students; UNESCO: 2024; [DOI: https://dx.doi.org/10.54675/JKJB9835]

Mineduc. Cargos docentes de Chile—Directorio 2024; Datos Abiertos Mineduc: 2024; Available online: https://datosabiertos.mineduc.cl/ (accessed on 22 September 2025).

Mishra, P. Considering contextual knowledge: The TPACK diagram gets an upgrade. Journal of Digital Learning in Teacher Education; 2019; 35, 2 pp. 76-78. [DOI: https://dx.doi.org/10.1080/21532974.2019.1588611]

Mishra, P.; Koehler, M. J. Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record: The Voice of Scholarship in Education; 2006; 108, 6 pp. 1017-1054. [DOI: https://dx.doi.org/10.1111/j.1467-9620.2006.00684.x]

Mishra, P.; Warr, M.; Islam, R. TPACK in the age of ChatGPT and Generative AI. Journal of Digital Learning in Teacher Education; 2023; 39, 4 pp. 235-251. [DOI: https://dx.doi.org/10.1080/21532974.2023.2247480]

Møgelvang, A.; Bjelland, C.; Grassini, S.; Ludvigsen, K. Gender differences in the use of generative artificial intelligence chatbots in higher education: Characteristics and consequences. Education Sciences; 2024; 14, 12 1363. [DOI: https://dx.doi.org/10.3390/educsci14121363]

Ng, D. T. K.; Lee, M.; Tan, R. J. Y.; Hu, X.; Downie, J. S.; Chu, S. K. W. A review of AI teaching and learning from 2000 to 2020. Education and Information Technologies; 2023; 28, 7 pp. 8445-8501. [DOI: https://dx.doi.org/10.1007/s10639-022-11491-w]

Ning, Y.; Zhang, C.; Xu, B.; Zhou, Y.; Wijaya, T. T. Teachers’ AI-TPACK: Exploring the relationship between knowledge elements. Sustainability; 2024; 16, 3 978. [DOI: https://dx.doi.org/10.3390/su16030978]

OECD. TALIS 2018 results (Vol. I): Teachers and school leaders as lifelong learners; OECD Publishing: 2019; [DOI: https://dx.doi.org/10.1787/1d0bc92a-en]

OECD. TALIS 2018 results (Vol. II): Teachers and school leaders as valued professionals; OECD Publishing: 2020; [DOI: https://dx.doi.org/10.1787/19cf08df-en]

OECD. Opportunities, guidelines and guardrails on effective and equitable use of AI in education; OECD Publishing: 2023.

OECD. Teaching and learning international survey (TALIS) 2024: Teacher questionnaire, survey instrument. OECD TALIS 2024 Database; OECD: 2024.

OECD. Teaching and learning international survey (TALIS) 2024 conceptual framework; OECD Publishing: 2025; [DOI: https://dx.doi.org/10.1787/7b8f85d4-en]

Pallant, J. SPSS survival manual: A step by step guide to data analysis using IBM SPSS; 7th ed. Routledge: 2020; [DOI: https://dx.doi.org/10.4324/9781003117452]

Polly, D. Examining TPACK enactment in elementary mathematics with various learning technologies. Education Sciences; 2024; 14, 10 1091. [DOI: https://dx.doi.org/10.3390/educsci14101091]

Popenici, S. A. D.; Kerr, S. Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning; 2017; 12, 1 22. [DOI: https://dx.doi.org/10.1186/s41039-017-0062-8] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30595727]

Saenen, L.; Hermans, K.; Rocha, M. D. N.; Struyven, K.; Emmers, E. Co-designing inclusive excellence in higher education: Students’ and teachers’ perspectives on the ideal online learning environment using the I-TPACK model. Humanities and Social Sciences Communications; 2024; 11, 1 890. [DOI: https://dx.doi.org/10.1057/s41599-024-03417-3]

Seufert, S.; Guggemos, J.; Sailer, M. Technology-related knowledge, skills, and attitudes of pre- and in-service teachers: The current situation and emerging trends. Computers in Human Behavior; 2021; 115, 106552. [DOI: https://dx.doi.org/10.1016/j.chb.2020.106552] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32921901]

Shin, D.; Park, Y. J. Role of fairness, accountability, and transparency in algorithmic affordance. Computers in Human Behavior; 2019; 98, pp. 277-284. [DOI: https://dx.doi.org/10.1016/j.chb.2019.04.019]

Shulman, L. S. Those who understand: Knowledge growth in teaching. Educational Researcher; 1986; 15, 2 pp. 4-14. [DOI: https://dx.doi.org/10.3102/0013189X015002004]

Shulman, L. S. Knowledge and teaching: Foundations of the new reform. Harvard Educational Review; 1987; 57, 1 pp. 1-22. [DOI: https://dx.doi.org/10.17763/haer.57.1.j463w79r56455411]

Shum, S. J. B.; Luckin, R. Learning analytics and AI: Politics, pedagogy and practices. British Journal of Educational Technology; 2019; 50, 6 pp. 2785-2793. [DOI: https://dx.doi.org/10.1111/bjet.12880]

Sierra, Á. J.; Iglesias, J. O.; Palacios-Rodríguez, A. Diagnosis of TPACK in elementary school teachers: A case study in the Colombian Caribbean. Education Sciences; 2024; 14, 9 1013. [DOI: https://dx.doi.org/10.3390/educsci14091013]

Stolpe, K.; Hallström, J. Artificial intelligence literacy for technology education. Computers and Education Open; 2024; 6, 100159. [DOI: https://dx.doi.org/10.1016/j.caeo.2024.100159]

Sun, J.; Ma, H.; Zeng, Y.; Han, D.; Jin, Y. Promoting the AI teaching competency of K-12 computer science teachers: A TPACK-based professional development approach. Education and Information Technologies; 2023; 28, 2 pp. 1509-1533. [DOI: https://dx.doi.org/10.1007/s10639-022-11256-5]

Tan, X.; Cheng, G.; Ling, M. H. Artificial intelligence in teaching and teacher professional development: A systematic review. Computers and Education: Artificial Intelligence; 2024; 8, 100355. [DOI: https://dx.doi.org/10.1016/j.caeai.2024.100355]

Tillé, Y. Sampling and estimation from finite populations; Wiley: 2020; [DOI: https://dx.doi.org/10.1002/9781119071259]

Tomczak, M.; Tomczak, E. The need to report effect size estimates revisited: An overview of some recommended measures of effect size. Trends in Sport Sciences; 2014; 21, 1 pp. 19-25.

Traga, Z. A.; Rocconi, L. AI literacy: Elementary and secondary teachers’ use of AI-tools, reported confidence, and professional development needs. Education Sciences; 2025; 15, 9 1186. [DOI: https://dx.doi.org/10.3390/educsci15091186]

UNESCO. Beijing consensus on artificial intelligence and education. International Conference on Artificial Intelligence and Education, Planning Education in the AI Era: Lead the Leap; Beijing, China, May 16–18; 2019; Available online: https://unesdoc.unesco.org/ark:/48223/pf0000368303 (accessed on 22 September 2025).

Velander, J.; Taiye, M. A.; Otero, N.; Milrad, M. Artificial intelligence in K-12 education: Eliciting and reflecting on Swedish teachers’ understanding of AI and its implications for teaching & learning. Education and Information Technologies; 2023; 29, 4 pp. 4085-4105. [DOI: https://dx.doi.org/10.1007/s10639-023-11990-4]

Wang, K. Pre-service teachers’ GenAI anxiety, technology self-efficacy, and TPACK: Their structural relations with behavioral intention to design GenAI-Assisted teaching. Behavioral Sciences; 2024; 14, 5 373. [DOI: https://dx.doi.org/10.3390/bs14050373]

Wang, Y.; Nadler, E. O.; Mao, Y.; Adhikari, S.; Wechsler, R. H.; Behroozi, P. Universe machine: Predicting galaxy star formation over seven decades of halo mass with zoom-in simulations. The Astrophysical Journal; 2021; 915, 2 116. [DOI: https://dx.doi.org/10.3847/1538-4357/ac024a]

Williamson, B.; Eynon, R. Historical threads, missing links, and future directions in AI in education. Learning, Media and Technology; 2020; 45, 3 pp. 223-235. [DOI: https://dx.doi.org/10.1080/17439884.2020.1798995]

Yan, L.; Sha, L.; Zhao, L.; Li, Y.; Martinez-Maldonado, R.; Chen, G.; Li, X.; Jin, Y.; Gašević, D. Practical and ethical challenges of large language models in education: A systematic scoping review. British Journal of Educational Technology; 2024; 55, pp. 90-112. [DOI: https://dx.doi.org/10.1111/bjet.13370]

Yim, I. H. Y.; Su, J. Artificial intelligence (AI) learning tools in K-12 education: A scoping review. Journal of Computers in Education; 2025; 12, pp. 93-131. [DOI: https://dx.doi.org/10.1007/s40692-023-00304-9]

Zawacki-Richter, O.; Marín, V. I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education—Where are the educators?. International Journal of Educational Technology in Higher Education; 2019; 16, 1 pp. 1-27. [DOI: https://dx.doi.org/10.1186/s41239-019-0171-0]

Zeng, Y.; Wang, Y.; Li, S. The relationship between teachers’ information technology integration self-efficacy and TPACK: A meta-analysis. Frontiers in Psychology; 2022; 13, 1091017. [DOI: https://dx.doi.org/10.3389/fpsyg.2022.1091017]

Zhang, K.; Aslan, A. B. AI technologies for education: Recent research & future directions. Computers and Education: Artificial Intelligence; 2021; 2, 100025. [DOI: https://dx.doi.org/10.1016/j.caeai.2021.100025]

Zhao, J.; Li, S.; Zhang, J. Understanding teachers’ adoption of AI technologies: An empirical study from Chinese middle schools. Systems; 2025; 13, 4 302. [DOI: https://dx.doi.org/10.3390/systems13040302]

© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).