Abstract
In recognition of the integration of information and communications technology (ICT) and artificial intelligence (AI) into education, several educational organizations have recommended cultivating 21st-century digital skills as part of every educational curriculum. Computational thinking, which lies at the core of 21st-century digital skills, has attracted research attention in several fields, such as physics and psychology (Barkela et al., 2024). However, applied linguistics and computer-assisted language learning (CALL) have overlooked this crucial skill, which might stem from language teachers’ lack of computational thinking competency. For this reason, in the present study the researchers developed a new theoretical framework and its corresponding scale specifically designed for applied linguistics and CALL, namely the Language Teachers’ Computational Thinking Competency in Computer-Assisted Language Learning (LTCCTCALL). Using deductive and inductive methods, the researchers developed the items, followed by a validation process that used the Rasch-Andrich rating scale model (RSM) to assess item difficulty and exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) to assess validity. The LTCCTCALL was validated with a five-factor structure comprising 15 items in the Iranian EFL context, involving 273 Iranian in-service language teachers. Based on this result, the study introduces a new theoretical framework and scale to CALL and applied linguistics, and proposes them to the Common European Framework of Reference for Languages (CEFR), so that the community can continue to grow rather than fall behind other fields that incorporate computational thinking, and so that language classes can adopt problem-solving approaches with CALL, AI, and chatbots.
Introduction
We live in an era where digital technology permeates every aspect of our lives, facilitating learning, teaching, and everyday activities. At the same time, this facility requires individuals to acquire certain skills and competencies, namely 21st-century digital skills and 21st-century digital competence, which have been introduced by several educational organizations such as the Assessment and Teaching of Twenty-First Century Skills (ATC21S), the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), the Computer Science Teachers Association (CSTA), the International Society for Technology in Education (ISTE), the International Association for the Evaluation of Educational Achievement (IEA), and Common Core 21st-Century Skills (Rahimi, 2023; Rahimi & Sevilla-Pavón, 2025a; Kong & Wang, 2023; Yuan et al., 2024). These organizations aim to guide teachers and educational settings in equipping every learner with 21st-century digital skills, such as algorithmic thinking, computational thinking, design thinking, problem-solving, information literacy, intercultural skills, creativity, and analysis and reflection, to prepare them for a future workplace that will utilize Artificial Intelligence (AI) and Information and Communication Technologies (ICTs; Rahimi & Sevilla-Pavón, 2025b). Furthermore, the OECD, in its 2030 paper and the Digital Education Action Plan 2021–2027, emphasized the importance of students acquiring 21st-century digital skills to mitigate "the challenges of a volatile, uncertain, complex, and ambiguous world, harnessing digital tools and artificial intelligence" (Caena & Redecker, 2019, p. 3).
Computational thinking (CT) lies at the core of 21st-century digital skills (Rahimi, 2023; Rahimi & Sevilla-Pavón, 2025a; Yuan et al., 2024) and is defined as the skill to formulate, analyze, and solve problems through analytical and algorithmic methods (Bocconi et al., 2016). Wing’s ground-breaking 2006 article played a significant role in establishing CT in computer science education, defining it as a skill for solving problems, creating systems, and understanding human behavior through fundamental concepts of computer science (Wing, 2006). Since then, CT has been associated chiefly with Science, Technology, Engineering, and Mathematics (STEM); later scholars, however, suggested that CT can serve as a universal problem-solving toolkit applicable to a variety of educational domains, helping students become computer literate and digital problem-solvers (Falloon, 2024; Geng et al., 2024; Ulrich Hoppe & Werneburg, 2019). For this reason, schools are encouraged to incorporate CT education into their K-12 curricula in order to prepare students for citizenship in the 21st century. This has led to CT being incorporated into a number of fields, including physics (Gambrell & Brewe, 2024), psychology (Anderson, 2016), and medicine (Musaeus et al., 2017); however, applied linguistics and CALL do not seem to have taken steps to incorporate CT. This is despite leading CALL scholars’ suggestion that as technology advances, CALL should advance with it (Colpaert, 2020; Gimeno-Sanz, 2015). Similarly, Rahimi (2023) recommended that language teachers consider integrating 21st-century digital skills into their language teaching and cultivate them, particularly through CALL materials. However, Rahimi and Mosalli (2024) reported that language teachers must already possess 21st-century digital competencies to effectively cultivate 21st-century digital skills; in other words, notional knowledge (i.e., 21st-century digital competency) is a prerequisite to practical knowledge (i.e., 21st-century digital skills). The same holds true for computational thinking, as these problem-solving skills can subtly encourage structured thinking while solving problems with ICTs. By deconstructing language concepts into manageable components, students can improve their understanding and retention while solving their problems with ICTs or AI (Rahimi & Sevilla-Pavón, 2025a). This skill is particularly effective when using AI and chatbots to tackle language learning challenges systematically and algorithmically, leading to reliable results rather than blindly implementing the suggestions generated by AI algorithms in the target language context (Rahimi & Sevilla-Pavón, 2025a).
Accordingly, to facilitate the integration of computational thinking skills in our field, language teachers must enhance their CT competence so that they can apply CT in their language teaching classes. Therefore, to comply with the above recommendations, facilitate this process, and narrow this research gap, the researchers developed a specialized CT framework and scale for language teachers, the Language Teachers’ Computational Thinking Competency in Computer-Assisted Language Learning (LTCCTCALL).
Literature review
Need for a creative approach in English language teaching with CALL
The role of the teacher has evolved over time. Throughout the last few decades, scholars have noted the effects of continued standardization within the field of education, both at the K-12 level and in higher education (Ellison et al., 2018; Larsson & Sjöberg, 2021). According to them, teacher-proofed curricula, as well as pre-approved textbooks, worksheets, and curriculum scripts, have led to the deprofessionalization of educators (Larsson & Sjöberg, 2021). The implementation of prepackaged lessons may inhibit teachers from developing creative approaches to teaching (Bereczki & Kárpáti, 2021; Debarger et al., 2016), particularly through the integration of ICT and AI. In spite of this, the educational landscape continues to change, and modern teaching is becoming ever more complex, unstructured, and reflective (Larsson & Sjöberg, 2021). Specifically, the rapid development and integration of large language models, chatbots, and technology in education has significantly transformed the way we use technology to solve problems. As noted by Rahimi (2023), teaching with technology has shifted towards using it to solve problems, especially through the integration of chatbots and AI, which are accessible to learners to help them address their challenges. In response to these educational reforms, initiatives such as the Common Core, twenty-first-century skills frameworks, and the NGSS have driven efforts to engage students in critical thinking, problem-solving inquiry, and innovation (Rahimi & Sevilla-Pavón, 2025a). This requires teachers to integrate new technologies into their classroom practices and develop new skills, notably 21st-century digital skills. Scholars have argued that teachers must develop instructional lessons in an innovative manner to meet increasingly complex curriculum expectations (Rahimi, 2023; Kong & Wang, 2023; Yuan et al., 2024). Computational thinking, at the core of 21st-century digital skills, may be an appropriate approach, and recent reviews of the literature on teachers and computational thinking have shown growing interest in it (Hallström, 2023; Kafai & Proctor, 2021; Su & Yang, 2023). Despite this growing interest, English language teaching and CALL have overlooked this approach, potentially due to a lack of competency among educators. To address this issue, the researchers in the current study set out to develop the LTCCTCALL framework and its scale.
Need to assess language teachers’ computational thinking competency
Recent studies in applied linguistics and CALL have addressed the significant role of computational thinking skills. For instance, Wu et al. (2024) investigated the effectiveness of a computational thinking-oriented instruction approach on EFL writing outcomes among 58 undergraduate participants, in comparison with traditional pedagogical methods; their CT-based model improved students’ abilities to analyze, evaluate, and generate effective writing solutions, demonstrating CT’s value in language education. The experimental group was taught using core CT principles, while the comparison group received standard instruction. Both groups completed a 100-min pretest assessing writing strategies and higher-order thinking strategies (HOTS); the experimental group was additionally assessed on computational thinking strategies (CTS). Following this, the experimental group received extra instruction on computational thinking during an introductory academic writing session. During the learning phase, both groups were given pre-writing tasks and peer feedback; the comparison group followed traditional activities, while the experimental group engaged in CT-infused tasks. Each group then completed a 100-min final writing session and a posttest assessing writing strategies and HOTS, with CTS additionally assessed for the experimental group.
In another study, Rottenhofer et al. (2022) found that CT skills such as pattern recognition, decomposition, and abstraction can be helpful in the foreign language learning process, in which learners may encounter complex linguistic problems. To this end, the researchers proposed modeling techniques from computer science (e.g., UML diagrams) as innovative tools for language instruction. Through a mixed-methods study, they found that modeling enhances learning performance among teachers and students across age groups and is well received as a teaching strategy. According to Tang et al. (2020), computational thinking can support language acquisition in several ways: breaking down grammatical structures algorithmically, recognizing patterns to aid vocabulary acquisition, and applying logical reasoning to improve language comprehension. By identifying and correcting mistakes in their communication, learners engage in a form of testing. Furthermore, CT provides learners with the ability to extract general language rules from specific examples, which leads to a more effective learning process. Yacoub (2016) found that learners in English for Speakers of Other Languages (ESOL) classes benefited from multidisciplinary learning by developing Scratch projects related to the topics covered in class. In that study, Scratch was tested as a sample tool for constructionist and computational interventions, and the Scratch computational environment supported vocabulary accumulation and various ESOL learning strategies specified in the Adult ESOL Core Curriculum.
A virtual exchange project by Rahimi and Sevilla-Pavón (2025a) aimed to develop computational thinking skills in Spanish language learners through various activities. These included reading novels and watching short films featuring iconic characters like Robin Hood to enhance abstraction skills. By analyzing the political themes linked to these characters and identifying historical anachronisms in their portrayals within a virtual museum, the project promoted algorithmic thinking. Instructors guided learners in collaborative problem-solving with AI and virtual reality, facilitating consensus-building, joint decision-making, report submission, interview question preparation, and digital storytelling creation. For evaluation, students assessed different responses and chose the most accurate ones through multiple-choice questions related to their experiences in the virtual reality museum, which incorporated references to target cultures and study projects.
The increasing prevalence of CT in education raises questions about teachers’ competencies in cultivating it (Li et al., 2024; Rahimi & Sevilla-Pavón, 2025a; Şahin et al., 2024). In tandem with this, there is a need to ensure that teachers’ integration of CT into their instruction is adequately evaluated. Initially, CT assessment followed traditional approaches, including tests, attitude questionnaires, or projects and assignments (Song et al., 2021; Ukkonen et al., 2024). In later CT conceptualizations, standardized assessment approaches were adopted, focusing on cognitive, behavioral, and attitudinal assessments in the form of questionnaires, interviews, and interventions (Çimşir et al., 2023; Laime-Choque et al., 2022). Using these approaches, both the teaching process and the learning outcomes of CT were examined, and CT is increasingly incorporated into subject teaching (e.g., biology, physics, social sciences, and digital humanities), which prompted a further shift in assessment methods (Anderson, 2016; Gambrell & Brewe, 2024; Musaeus et al., 2017). Aside from classroom-based assessments of CT, assessments in teacher education and training have also been addressed (Li et al., 2024). The use of CT resources in different contexts and teachers’ CT conceptualizations (such as logical thinking, algorithmic thinking, decomposition, generalizability, and abstraction) are two main areas of assessment receiving growing attention (Fang et al., 2022; Song et al., 2021). However, the CALL and applied linguistics communities have not yet examined language teachers’ CT competencies, especially those based on CT conceptualizations.
Despite this, a number of gaps can be observed in the existing assessment methods. Firstly, CT assessment has mainly been developed by STEM researchers to measure CT approaches and implementations (Tsai et al., 2020; Voon et al., 2022); these assessment approaches fall short of meeting the demands of classroom procedures such as English language teaching, where programming and data coding are absent. Secondly, the number of assessment approaches involving in-service teachers is low (Li et al., 2024; Tang et al., 2020). Thirdly, the cultivation of CT during the educational process is rarely considered, as most of these instruments evaluate teachers’ CT prior to implementation. Moreover, CT assessment rarely takes into account the teaching dimension of a specific subject outside a programming environment, such as English language teaching. These gaps clarify the purpose of the current paper: to develop and implement a CT scale appropriate for English language teachers in the Iranian EFL context, and to support CT implementations that extend beyond the scope of STEM subjects.
Computational thinking competency in Computer Assisted Language Learning (CTCCALL)
In light of the growing use of CT skills in education, there has been an increased amount of research on teachers and CT cultivation in a variety of fields. For instance, Rich et al. (2020), in an exploratory study of mathematics teachers’ experiences integrating CT into their classroom activities, focusing on CT factors such as abstraction, algorithmic thinking, automation, debugging, decomposition, and generalization, found that teachers expressed satisfaction in designing their classroom activities based on CT. Additionally, Yuan et al. (2024) found that both primary and intermediate school teachers were highly motivated to design classroom activities based on CT, particularly after undergoing professional development, and that this heightened their CT competence. However, how English language teachers design activities in their language classes based on CT has remained unclear, and answering this question first requires evaluating their CT competency.
In spite of this, there are still many research gaps to be filled. For one, clarity is needed in the definition of CT competency for language teachers, as every field has its own definition. To clarify, STEM researchers primarily define CT competence as a problem-solving approach that requires teachers to create, modify, and evaluate code, along with knowledge of programming concepts, in order to solve students’ problems (Fang et al., 2022; Israel-Fishelson & Hershkovitz, 2022); that is, they define it chiefly in terms of the problem solver’s use of programming and data coding. In K-12 and elementary education, it is defined as solving learners’ problems through the use of computers or ICT based on CT concepts (Kafai & Proctor, 2021; Tikva & Tambouris, 2021). As highlighted by Kale et al. (2018), teaching CT should "entail the knowledge of using computational thinking tools (technology), knowing which instructional strategies to use to teach computational thinking and the subject matter (pedagogy), and understanding of computational thinking and the subject matter (content)" (p. 575). In the realm of applied linguistics and Computer-Assisted Language Learning (CALL), we define it as the professional and instructional knowledge that language teachers possess to create challenging and engaging language learning environments using CALL, in which learners can employ computational thinking skills to tackle language learning tasks effectively. This competency is characterized by five factors, as follows:
Algorithmic thinking (AL): Language teachers’ proficiency in designing CALL-based language learning activities that require students to use their cognitive skills to accomplish tasks in a step-by-step manner.
Abstraction (AB): Language teachers’ proficiency in designing CALL-based language learning activities that require students to use their critical thinking skills to focus on the key information rather than the details of a task (deductive reasoning).
Decomposition (DE): Language teachers’ proficiency in designing CALL-based language learning activities that require students to use their critical thinking skills to decompose the target task into several manageable parts.
Evaluation (EV): Language teachers’ proficiency in designing CALL-based language learning activities that require students to use their critical thinking to compare different solutions and select the most appropriate one.
Generalization (GE): Language teachers’ proficiency in designing CALL-based language learning activities that require students to use their critical thinking to recognize patterns in how specific language tasks are solved and apply them to other language tasks or transfer them to target language contexts.
Scale development
The researchers developed the LTCCTCALL scale by following and adapting the ten steps described by Slavec and Drnovšek (2012) for scale development, which comprise three main stages: (1) item generation, (2) scale development, and (3) scale evaluation.
Item generation
Prior to starting the item development phase, researchers should identify the field they intend to investigate and determine whether any computational thinking competency scales already exist in that field. To identify a domain, McCoach et al. (2013) recommend following three steps: (a) defining the domain’s purpose and identifying the constructs through literature reviews; (b) ensuring there are no existing instruments suitable for this purpose; and (c) describing the domain and providing a preliminary conceptual definition. Accordingly, the researchers conducted a search across the CALL and applied linguistics domains and discovered that no CT competency scale had previously been developed for language teachers, so they initiated the process of item generation, developing the items using a two-pronged (deductive and inductive) approach. The first step was a deductive approach, also known as "logical partitioning" or "classification from above" (Hunt, 1991). This process involved a review of the CT literature, an evaluation of the existing CT scales, and the development of items based on these evaluations, culminating in a total of 39 items. Because these deductively developed items were drawn from previous scales and CT literature extensively associated with STEM education, and reflected the first researcher’s own perspective, the researchers then implemented an inductive approach to move from abstract items to manifest items holding practical value within the study context and field, rather than basing the scale on personal preferences. Clark and Watson (1995) demonstrated that combining both approaches can enhance the face and content validity of a scale. For this purpose, the researchers initially approached ten in-service EFL teachers who had experience teaching language with CALL. To address potential biases and contextual limitations, they employed maximum variation sampling to identify key dimensions of variation among the cases (Patton, 2002); this approach aims to thoroughly explore critical features of a target phenomenon across different contexts and minimize potential bias (Patton, 2002). Consequently, they selected seven of these Iranian EFL teachers for interviews, comprising four males and three females, all with teaching experience ranging from four to nine years. Regarding their experience with CALL materials in the classroom, one had two years, three had three years, and three had four years. Additionally, each participant held a postgraduate degree in applied linguistics: six PhDs and one master’s degree. They taught English in both larger cities (such as Tabriz, Mashhad, and Ardabil) and smaller towns (like Faruj and Quchan). Table 1 displays the participants’ demographic information.
Table 1. Participants’ demographic information
Name | Gender | Age | Degree | City | Teaching experience | Experience in CALL |
|---|---|---|---|---|---|---|
Teacher 1 | Male | 33 | PhD | Quchan | 5 | 4 |
Teacher 2 | Male | 32 | PhD | Faruj | 6 | 3 |
Teacher 3 | Female | 32 | PhD | Mashhad | 7 | 4 |
Teacher 4 | Male | 26 | M.A | Mashhad | 4 | 2 |
Teacher 5 | Female | 31 | PhD | Ardabil | 4 | 4 |
Teacher 6 | Male | 31 | PhD | Tabriz | 5 | 3 |
Teacher 7 | Female | 43 | PhD | Quchan | 9 | 3 |
The researchers asked the teachers to discuss their skills in designing language task activities via CALL, or in providing challenging situations for language learners with CALL materials that lead learners to use problem-solving strategies such as computational thinking skills in their language learning activities. For clarification, T3 stated: "I used Read Theory in my language class, and I give them a text where only part of the information is accurate or useful to answer a final comprehension question (e.g., ‘What is the main cause of Covid-19?’). I encourage them to mark up the text, highlighting only key info and crossing out distractions." T5 added that she used ChatGPT and Replika to target interpreting conversation and identifying key information: she set up a role play and asked students to chat with a chatbot acting as a character (e.g., a customer in a store, a tourist needing help); the chatbot gives both relevant and irrelevant information, and the student must distill the important details to respond correctly. Based on such tasks, we found that our participants had the competency to apply abstraction. With regard to algorithmic thinking, T1 explained that he usually used Wordwall in his language class, where language learners build a sentence in sequential logic through drag-and-drop or ordering tasks in which students must arrange words or chunks into grammatically correct sentences. For decomposition, the language teachers described a variety of language tasks that they could design with CALL. For instance, T2 reported that he sometimes uses Kami in his class and asks language learners to decompose reading texts into main parts such as setting, problem, key characters, vocabulary, grammar, and discourse (e.g., the moral). T4 added that he always applies Google Docs and structures a piece of writing by parts: brainstorming ideas, writing the topic sentence, supporting details, providing examples, and a conclusion. Regarding evaluation, T7 asserted that, by integrating the Edpuzzle app, she leads her students to listen to three different possible responses to a dialogue (e.g., at a restaurant, during a phone call) and answer several questions, such as recognizing vocabulary, grammar, tone, and cultural norms. Finally, regarding generalization, T7 noted: "Edpuzzle allows me to assist my students in identifying the patterns of language for clarification. I provided my students with dialogues (e.g., giving directions, ordering food) and asked them to extract useful phrases or sentence structures. Then, they recreated a similar task using the same structure but with different content." In addition, T6 reported that he usually uses the Wordwall website and creates a drag-and-drop matching game where students categorize sentences by structure (e.g., SVO, SVOO, SVOC), then identify sentence patterns and write their own examples. The analysis of the interviews and the alignment of items from both phases led to the development of a total of 29 items.
Scale development
Following the development of the items using the two-pronged approach, the next step was to assess their content validity. As defined by Hinkin (1995), content validity refers to whether a measure adequately assesses the relevant domain; items must demonstrably measure what they claim to measure. According to Guion (1977), content validity can only be established if three conditions are met: (1) the behavioral content must be defined by general agreement; (2) the content must correspond to the measurement; and (3) the items must be consistent with the consensus of professionals and qualified judges evaluating them against the target definitions, domains, and measurements.
Following this recommendation, the researchers drew on expert judgment and members of the target population to evaluate content validity. This was achieved by inviting experts from a variety of fields, namely CALL, applied linguistics, educational technology, and STEM education; each discipline was represented by two experts, resulting in an overall panel of eight. The Delphi method was used because it provides anonymity, controlled feedback, and flexibility in gathering participants from geographically diverse areas (Barrios et al., 2021); it also enables experts to maintain their anonymity and freely express their views without pressure to conform to other experts’ opinions (Taylor, 2019). To this end, the researchers shared the questionnaire with the experts via email and collected their comments on the items and the definitional domain. Over three rounds of expert panel discussion, the researchers analyzed the comments, applied them, and re-shared the revised version with the panel. In the end, the panel agreed on the LTCCTCALL definition and its items, which numbered 15 in total and were measured on a 5-point Likert scale ranging from 1 = totally disagree to 5 = totally agree.
To further improve the content validity of the items, the researchers used an additional approach, as in the item generation process. Krippendorff’s alpha was employed to assess the level of agreement between raters or experts, as it is an effective method for this purpose, particularly in situations involving more than two raters (Marzi et al., 2024). Thus, after the panel discussion, the experts were asked to rate the final version of the questionnaire on a 5-point Likert scale, with 1 = slight agreement, 3 = moderate agreement, and 5 = perfect agreement. A Krippendorff’s alpha analysis was then conducted with 1000 bootstrap samples, and an alpha of 0.84 was obtained, exceeding the cutoff point of 0.80 (Marzi et al., 2024), which suggests a high degree of expert consensus on the final scale version, as shown in Table 2 (a computational sketch of this procedure follows the table).
Table 2. Krippendorff’s Alpha Reliability Estimate
Level | Alpha | LL 95% CI | UL 95% CI | Units | Observers | Pairs |
|---|---|---|---|---|---|---|
Nominal | 0.845 | 0.789 | 0.894 | 15.000 | 8.000 | 420.000 |
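As a rough illustration of this step, the following minimal Python sketch computes Krippendorff’s alpha with a percentile bootstrap confidence interval. The ratings matrix here is randomly generated placeholder data (the actual 8 experts × 15 items ratings are not reproduced), and the `krippendorff` package is assumed to be installed.

```python
# A minimal sketch: Krippendorff's alpha with a bootstrap confidence
# interval, assuming `ratings` is an 8 (experts) x 15 (items) matrix
# of 1-5 agreement ratings. The data below are placeholders.
import numpy as np
import krippendorff  # pip install krippendorff

rng = np.random.default_rng(0)
ratings = rng.integers(3, 6, size=(8, 15)).astype(float)  # hypothetical ratings

# Point estimate on the full expert-by-item matrix (nominal level, as in Table 2).
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="nominal")

# Nonparametric bootstrap over items (units), 1000 resamples.
boot = []
for _ in range(1000):
    cols = rng.integers(0, ratings.shape[1], size=ratings.shape[1])
    boot.append(krippendorff.alpha(reliability_data=ratings[:, cols],
                                   level_of_measurement="nominal"))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"alpha = {alpha:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```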
To evaluate the face validity of a survey, it is essential to determine whether survey respondents, end users, or lay respondents find the items meaningful and relevant to the survey’s objectives (Newton & Shaw, 2014). To determine the face validity of the questionnaire, the researchers conducted cognitive interviews, a widely recommended method for assessing face validity (Willis, 2004). This approach helps ensure that items produce the intended data without confusion or difficulty, confirms their clarity, and determines whether they are appropriate and sufficient for the survey objective (Balza et al., 2022). To accomplish this, the researchers invited eight in-service language teachers with backgrounds similar to the target population, administered the draft survey questions, and interviewed them to determine whether they understood the survey items and responded accordingly. After two rounds of interviews and upon reaching data saturation, minor adjustments were made to two items on the scale.
To further confirm the face validity of the items and ensure the scale is parsimonious, a pilot survey was conducted with 38 in-service language teachers who had experience teaching English with CALL and were similar to the target participants. To evaluate the psychometric quality of the scale, the Rasch-Andrich rating scale model (RSM) was applied via the Winsteps package. In this model, the probability of an individual endorsing a given response category of an item is modeled from the individual’s level on the latent trait and the item’s difficulty. The method is widely used to standardize item difficulty: it allows the researcher to determine the effect of adding or deleting a given item or set of items by analyzing the item information and standard error functions for the item pool (Aryadoust et al., 2020; Masters, 1982). As a first metric, the item separation statistic was 2.43, surpassing the criterion of 2, and person reliability was 0.86. In addition, infit and outfit mean square statistics were used to evaluate item fit; according to Bond et al. (2020) and Boone et al. (2013), the reasonable range of infit and outfit for a rating scale survey is 0.6–1.4. Table 3 reports the fit statistics, showing that the infit and outfit values of all items fell between 0.86 and 1.18, well within the acceptable range, indicating that all items fit the model adequately. Moreover, to investigate the construct hierarchy of the scale, the person-item map, or Wright map, was plotted; Figure 1 visually represents the relative difficulty of the items.
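For reference, the standard form of the Rasch-Andrich rating scale model underlying this analysis gives the probability that person $n$ with trait level $\theta_n$ responds in category $k$ (of $M+1$ ordered categories) of item $i$ with difficulty $\delta_i$, using category thresholds $\tau_j$ shared across items:

$$P(X_{ni}=k)=\frac{\exp\sum_{j=0}^{k}\left(\theta_n-\delta_i-\tau_j\right)}{\sum_{m=0}^{M}\exp\sum_{j=0}^{m}\left(\theta_n-\delta_i-\tau_j\right)},\qquad \tau_0\equiv 0.$$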
Table 3. Fit mean square statistics
Item | Infit MNSQ | Outfit MNSQ |
|---|---|---|
EV1 | 0.90 | 0.98 |
EV2 | 1.09 | 1.18 |
EV3 | 1.13 | 1.15 |
AB1 | 1.02 | 0.99 |
AB2 | 1.12 | 1.12 |
AB3 | 1.02 | 0.99 |
GE1 | 0.93 | 0.92 |
GE2 | 0.94 | 0.94 |
GE3 | 0.87 | 0.86 |
DE1 | 0.88 | 0.87 |
DE2 | 1.06 | 1.04 |
DE3 | 0.90 | 0.88 |
AL1 | 0.87 | 0.87 |
AL2 | 1.09 | 1.07 |
AL3 | 0.95 | 0.93 |
Fig. 1. The Wright map (image available in the published PDF)
Scale evaluation
Study participants
The study evaluated 273 Iranian English language teachers who used technology in their classrooms, a sample that met the five-to-one observation-to-variable ratio recommended by Hair et al. (2006). A total of 225 women (82.5%) and 48 men (17.5%) participated. As shown in Table 4, 76 participants (27.8%) held a Ph.D., 148 (54.2%) held a master’s degree, and 49 (18%) held a bachelor’s degree. The participants specialized primarily in Teaching English as a Foreign Language (TEFL; 213, or 78%), followed by smaller groups in Translation (24, 8.8%), Literature (23, 8.4%), and Linguistics (13, 4.8%). Regarding teaching experience, 81 teachers (29.7%) had less than five years, 69 (25.3%) had five years, 38 (13.9%) had ten years, and 85 (31.1%) had over ten years. Moreover, 155 participants (56.8%) had fewer than five years of experience using ICT in teaching, 69 (25.3%) had five years, 28 (10.2%) had ten years, and 21 (7.7%) had more than ten years. As part of this nationwide study, English language teachers were recruited from major metropolitan areas across Iran, such as Tehran, Tabriz, Mashhad, Shiraz, and Esfahan. All participants were employed in private language institutions and possessed an advanced level of English proficiency, equivalent to level C1 of the Common European Framework of Reference for Languages (CEFR). In the online questionnaire, the researchers assured participants that their responses would remain anonymous and that only the researchers would have access to them; beyond this, no personal information was collected from participants, as it was neither necessary nor relevant to the study objective.
Table 4. Participants’ demographic information
| | | N | % |
|---|---|---|---|
| Gender | Male | 48 | 17.5 |
| | Female | 225 | 82.5 |
| Highest degree | Ph.D. | 76 | 27.8 |
| | Master | 148 | 54.2 |
| | Bachelor | 49 | 18.0 |
| Major | TEFL | 213 | 78.0 |
| | Literature | 23 | 8.4 |
| | Translation | 23 | 8.4 |
| | Linguistics | 14 | 5.2 |
| Years of language teaching experience | < 5 | 81 | 29.7 |
| | 5 | 69 | 25.3 |
| | 10 | 38 | 13.9 |
| | > 10 | 85 | 31.1 |
| Years of language teaching experience with ICT and CALL | < 5 | 155 | 56.8 |
| | 5 | 69 | 25.3 |
| | 10 | 28 | 10.2 |
| | > 10 | 21 | 7.7 |
In the following step, the researchers conducted an exploratory factor analysis (EFA) using the IBM Statistical Package for the Social Sciences (SPSS). The Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy (MSA) was 0.68, indicating an acceptable degree of common variance (Hair et al., 2006), and Bartlett’s Test of Sphericity (BTS) was 2240.76 (p < 0.001), indicating significant correlations among variables in the correlation matrix. A principal component analysis was then conducted using varimax rotation; as recommended by Hair et al. (2006), items with factor loadings below 0.50 or communalities below 0.40 were to be removed from the scale. The EFA yielded a five-factor solution with factor loadings greater than 0.5 and eigenvalues greater than 1, as shown in Table 5 (a computational sketch of this step follows the table). Together, the factors account for 79.21% of the variance in language teachers’ CT competence, which is sufficient for the human sciences.
Table 5. The EFA result of the LTCCTCALL
Item | AB | AL | DE | GE | EV | Extraction |
|---|---|---|---|---|---|---|
AB1 | 0.897 | 0.102 | − 0.012 | − 0.085 | − 0.023 | 0.823 |
AB2 | 0.921 | 0.021 | − 0.002 | − 0.039 | − 0.029 | 0.850 |
AB3 | 0.905 | − 0.047 | − 0.018 | 0.022 | − 0.052 | 0.825 |
AL1 | 0.019 | 0.892 | 0.015 | 0.026 | 0.034 | 0.798 |
AL2 | − 0.005 | 0.902 | 0.055 | 0.026 | − 0.019 | 0.818 |
AL3 | 0.057 | 0.889 | 0.090 | 0.026 | 0.021 | 0.803 |
GE1 | − 0.054 | 0.041 | 0.049 | 0.859 | 0.110 | 0.757 |
GE2 | − 0.026 | 0.038 | 0.137 | 0.877 | 0.090 | 0.799 |
GE3 | − 0.019 | 0.000 | 0.115 | 0.869 | 0.084 | 0.777 |
EV1 | − 0.032 | − 0.013 | 0.071 | 0.131 | 0.858 | 0.760 |
EV2 | − 0.012 | 0.008 | 0.136 | 0.129 | 0.868 | 0.788 |
EV3 | − 0.056 | 0.040 | 0.015 | 0.028 | 0.850 | 0.729 |
DE1 | − 0.027 | 0.084 | 0.863 | 0.082 | 0.136 | 0.778 |
DE2 | − 0.038 | 0.072 | 0.892 | 0.081 | 0.113 | 0.821 |
DE3 | 0.030 | 0.009 | 0.858 | 0.137 | − 0.022 | 0.757 |
Initial eigenvalues | 3.46 | 2.62 | 2.19 | 1.85 | 1.75 | |
Percentage variance explained | 23.82 | 17.48 | 14.64 | 12.33 | 11.66 | |
Cumulative percentage variance | 23.82 | 40.56 | 55.21 | 67.54 | 79.21 | |
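For readers who wish to reproduce this kind of analysis outside SPSS, the following minimal Python sketch runs the same sequence (Bartlett’s test, KMO, principal-axis extraction with varimax rotation and five factors) using the `factor_analyzer` package. The item names mirror the scale, but the responses are randomly generated placeholders for illustration only.

```python
# A minimal EFA sketch, assuming `df` is a 273 x 15 DataFrame of item
# responses (hypothetical data); uses factor_analyzer rather than SPSS.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_kmo,
                                             calculate_bartlett_sphericity)

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.integers(1, 6, size=(273, 15)),
                  columns=[f"{f}{i}" for f in ("AB", "AL", "GE", "EV", "DE")
                           for i in (1, 2, 3)])

chi2, p = calculate_bartlett_sphericity(df)   # Bartlett's test of sphericity
_, kmo_model = calculate_kmo(df)              # overall KMO / MSA
print(f"BTS chi2 = {chi2:.2f} (p = {p:.3f}), KMO = {kmo_model:.2f}")

# Principal-component extraction with varimax rotation, five factors.
fa = FactorAnalyzer(n_factors=5, rotation="varimax", method="principal")
fa.fit(df)
loadings = pd.DataFrame(fa.loadings_, index=df.columns)
communalities = pd.Series(fa.get_communalities(), index=df.columns)

# Retention rules from the text: loadings >= .50 and communalities >= .40.
keep = (loadings.abs().max(axis=1) >= .50) & (communalities >= .40)
print(loadings.round(2), communalities.round(2), keep, sep="\n")
```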
In the next phase, the researchers used confirmatory factor analysis (CFA), another type of psychometric testing, which made it possible to compare alternative a priori factor structures through systematic fit assessment by estimating the relationships between latent constructs corrected for measurement error (Morin et al., 2020). The CFA was run using the partial least squares structural equation modeling (PLS-SEM) approach. Composite reliability and Cronbach’s alpha were calculated for each construct and were all above 0.7, and convergent validity was supported by average variance extracted (AVE) values greater than 0.5, as shown in Table 6. Furthermore, all factor loadings were above 0.5, as shown in Table 7.
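As a rough covariance-based analogue of this step (the study itself used PLS-SEM), the sketch below specifies the five-factor measurement model and extracts fit indices and standardized loadings with the Python `semopy` package; `df` is the same placeholder response DataFrame assumed in the EFA sketch.

```python
# A minimal CFA sketch with semopy (pip install semopy); note this is a
# covariance-based approximation, whereas the study used PLS-SEM.
import semopy

MEASUREMENT_MODEL = """
AB =~ AB1 + AB2 + AB3
AL =~ AL1 + AL2 + AL3
DE =~ DE1 + DE2 + DE3
EV =~ EV1 + EV2 + EV3
GE =~ GE1 + GE2 + GE3
"""

model = semopy.Model(MEASUREMENT_MODEL)
model.fit(df)  # df: 273 x 15 item-response DataFrame from the EFA sketch

stats = semopy.calc_stats(model)   # chi2, CFI, TLI, NFI, RMSEA, ...
print(stats.T)

est = model.inspect(std_est=True)  # standardized loading per indicator
print(est.head())

# From the standardized loadings lambda_i of a construct one can then
# compute AVE = mean(lambda_i**2) and composite reliability
# CR = (sum lambda_i)**2 / ((sum lambda_i)**2 + sum(1 - lambda_i**2)).
```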
Table 6. Constructs’ reliability and validity
Construct | Cronbach’s alpha (standardized) | Cronbach’s alpha (unstandardized) | Composite reliability (rho_c) | Average variance extracted (AVE) |
|---|---|---|---|---|
AB | 0.895 | 0.895 | 0.896 | 0.742 |
AL | 0.879 | 0.878 | 0.880 | 0.708 |
DE | 0.857 | 0.856 | 0.857 | 0.671 |
GE | 0.856 | 0.856 | 0.857 | 0.666 |
EV | 0.837 | 0.836 | 0.837 | 0.635 |
Table 7. Result of the constructs’ factor loadings
Latent variables and constructs | Outer loadings (standardized) |
|---|---|
AB1 <- AB | 0.847 |
AB2 <- AB | 0.905 |
AB3 <- AB | 0.831 |
AL1 <- AL | 0.827 |
AL2 <- AL | 0.854 |
AL3 <- AL | 0.843 |
DE1 <- DE | 0.816 |
DE2 <- DE | 0.883 |
DE3 <- DE | 0.754 |
EV1 <- EV | 0.795 |
EV2 <- EV | 0.867 |
EV3 <- EV | 0.723 |
GE1 <- GE | 0.780 |
GE2 <- GE | 0.855 |
GE3 <- GE | 0.812 |
All the factor loadings reported in the table were statistically significant at α < .05
Discriminant validity was evaluated using both Fornell and Larcker’s (1981) criterion and Henseler et al.’s (2016) heterotrait-monotrait ratio (HTMT). As Table 8 shows, the square root of each construct’s average variance extracted was greater than 0.5 and exceeded that construct’s correlations with all other constructs, satisfying the Fornell-Larcker criterion.
Table 8. Discriminant validity based on Fornell and Larcker’s (1981) criterion
Construct | AB | AL | DE | EV | GE |
|---|---|---|---|---|---|
AB | 0.861 | ||||
AL | 0.058 | 0.841 | |||
DE | − 0.037 | 0.149 | 0.819 | ||
EV | − 0.089 | 0.037 | 0.240 | 0.797 | |
GE | − 0.087 | 0.076 | 0.271 | 0.273 | 0.816 |
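As a small worked check of this criterion, the square roots of the AVEs on Table 8’s diagonal can be recomputed from the standardized loadings in Table 7; the snippet below does so for two constructs (the same arithmetic extends to the rest).

```python
# Recomputing the Fornell-Larcker diagonal from Table 7's loadings.
import numpy as np

loadings = {
    "AB": [0.847, 0.905, 0.831],  # AB1-AB3, from Table 7
    "AL": [0.827, 0.854, 0.843],  # AL1-AL3, from Table 7
}
for name, lam in loadings.items():
    lam = np.asarray(lam)
    ave = np.mean(lam ** 2)        # average variance extracted
    print(name, round(ave, 3), round(np.sqrt(ave), 3))
# AB -> AVE ~ 0.742, sqrt(AVE) ~ 0.862; AL -> 0.708, 0.841, in line with
# Tables 6 and 8 (small rounding differences aside). The criterion holds
# because, e.g., AB's diagonal (0.861) exceeds its correlation with AL (0.058).
```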
Table 9 also shows discriminant validity based on the HTMT criterion: every HTMT value is below 0.90, indicating that the constructs differ empirically from one another (a computational sketch of the ratio follows the table).
Table 9. Discriminant validity based on HTMT
Construct | AB | AL | DE | EV | GE |
|---|---|---|---|---|---|
AB | |||||
AL | 0.078 | ||||
DE | 0.055 | 0.144 | |||
EV | 0.096 | 0.048 | 0.215 | ||
GE | 0.090 | 0.075 | 0.275 | 0.267 |
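The HTMT ratio itself is straightforward to compute from the item correlation matrix: it divides the mean between-construct (heterotrait) correlation by the geometric mean of the within-construct (monotrait) correlations. A minimal sketch, assuming the placeholder `df` from the earlier sketches:

```python
# A minimal HTMT sketch (Henseler et al., 2016), assuming `df` holds
# the item responses used in the earlier sketches.
import numpy as np
import pandas as pd

def htmt(df: pd.DataFrame, items_a: list[str], items_b: list[str]) -> float:
    corr = df[items_a + items_b].corr().abs()
    hetero = corr.loc[items_a, items_b].to_numpy().mean()  # between-construct
    mono_a = corr.loc[items_a, items_a].to_numpy()[
        np.triu_indices(len(items_a), k=1)].mean()         # within construct A
    mono_b = corr.loc[items_b, items_b].to_numpy()[
        np.triu_indices(len(items_b), k=1)].mean()         # within construct B
    return hetero / np.sqrt(mono_a * mono_b)

# e.g. htmt(df, ["AB1", "AB2", "AB3"], ["AL1", "AL2", "AL3"])
# should fall below the 0.90 cutoff for discriminant validity.
```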
As the last step in the CFA phase, the model’s fit was checked using a number of indices. The results showed that the model fit the data well (χ2/df = 1.98; RMSEA = 0.06; NFI = 0.92; CFI = 0.93; TLI = 0.95; SRMR = 0.03; GFI = 0.93). Figure 2 shows the structure coefficients of the variables and their items.
Fig. 2. The confirmatory factor analysis structure coefficients (image available in the published PDF)
Discussion and implications of the study
This study aimed to develop an assessment scale for in-service language teachers’ computational thinking competencies in computer-assisted language learning. Based on the findings, both the Persian and English versions of the LTCCTCALL have been validated in the Iranian EFL context. The scale comprises five factors: algorithmic thinking, abstraction, generalization, evaluation, and decomposition. This validation addresses recent challenges related to teachers’ competencies in fostering computational thinking skills (Barkela et al., 2024; Li et al., 2024; Palop et al., 2025; Şahin et al., 2024). Additionally, it covers how teachers understand computational thinking concepts (such as logical thinking, algorithmic thinking, decomposition, generalizability, and abstraction), key areas of computational thinking that have been developed and validated in STEM education (Fang et al., 2022; Song et al., 2021). We have now created a version of computational thinking competency tailored to computer-assisted language learning and applied linguistics which, developed with its experts through the Delphi methodology, has been piloted and validated with in-service English language teachers, allowing for its application in linguistics and CALL.
Furthermore, by validating the LTCCTCALL, the researchers took a step forward in this realm and responded to Rahimi’s (2023) point that CALL teachers should teach beyond language teaching skills and cultivate 21st-century digital skills through CALL materials. However, language teachers cannot cultivate 21st-century digital skills without possessing 21st-century digital competence themselves (Rahimi & Mosalli, 2024). Therefore, to support the cultivation of this fundamental 21st-century digital skill, the researchers developed and designed the LTCCTCALL scale to measure the CT competency of in-service language teachers, thereby assisting our community in cultivating CT alongside language skills and subskills.
Moreover, the researchers heeded the recommendations of pedagogical organizations such as the Assessment and Teaching of Twenty-First Century Skills, the Organisation for Economic Co-operation and Development, the European Union, the Computer Science Teachers Association, the International Society for Technology in Education, the International Association for the Evaluation of Educational Achievement, and Common Core 21st-Century Skills, which strongly recommend integrating 21st-century digital skills into educational curricula (Rahimi, 2023; Kong & Wang, 2023; Yuan et al., 2024). As teaching language with technology and artificial intelligence has changed, language teachers have had to take on the role of problem solvers (Rahimi, 2023); prepackaged lessons did little to develop teachers’ creativity (Bereczki & Kárpáti, 2021; Debarger et al., 2016) or to motivate students to meet their needs in this digital age (Wild et al., 2023). Studies have shown that CT is a viable problem-solving approach with technology in this transitional role of teachers (Hallström, 2023; Kafai & Proctor, 2021; Su & Yang, 2023), and it is being incorporated into physics (Gambrell & Brewe, 2024), psychology (Anderson, 2016), and medicine (Musaeus et al., 2017). Applied linguistics and CALL now need to move forward with these fields to show that CT is not exclusive to STEM but can also be used in language education. This could be done by using the current framework and scale, developed for language teachers as the first gatekeepers of language teaching classes, and the framework might be proposed to the Council of Europe’s Common European Framework of Reference for Languages (CEFR) so that language pedagogy experts and teachers are encouraged to integrate CT, like other language skills, into their language classes. In the current era of artificial intelligence language learning and chatbot-assisted language learning, it is crucial for language teachers to adopt a problem-solving approach with their students, and the LTCCTCALL can assist in this regard.
Limitations of the study
This study’s findings must be viewed in light of several limitations. First, the scale should be validated in other EFL and ESL contexts. Second, it would be helpful for researchers to conduct an "adaptation study" applying the current study’s findings to, for example, pre-service language teachers. Third, future studies could incorporate additional variables to investigate potential variations in LTCCTCALL applications, taking into account language teachers’ psychological factors (such as technology acceptance and intention to teach language with CALL) and demographic factors (such as sex and school type). Furthermore, qualitative research data are needed to evaluate in-service language teachers’ computational thinking skills in CALL.
Author contribution
Author 1: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data curation, Writing—original draft, Writing—review & editing, Visualization, Supervision, Project administration. Author 2: Investigation.
Funding
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Declarations
Ethics approval and consent to participate
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Informed consent
Informed consent was obtained from all individual participants included in the study.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
Anderson, ND. A call for computational thinking in undergraduate psychology. Psychology Learning & Teaching; 2016; 15,
Aryadoust, V; Ng, LY; Sayama, H. A comprehensive review of Rasch measurement in language assessment: Recommendations and guidelines for research. Language Testing; 2020; 38,
Balza, JS; Cusatis, R; McDonnell, SM; Basir, MA; Flynn, KE. Effective questionnaire design: How to use cognitive interviews to refine questionnaire items. Journal of Neonatal-Perinatal Medicine; 2022; 15,
Barkela, V; Han, A; Weber, AM. Do student teachers experience self-worth threats in computational thinking?. Computers in Human Behavior Reports; 2024; 15, [DOI: https://dx.doi.org/10.1016/j.chbr.2024.100463] 100463.
Barrios, M; Guilera, G; Nuño, L; Gómez-Benito, J. Consensus in the delphi method: What makes a decision change?. Technological Forecasting and Social Change; 2021; 163, [DOI: https://dx.doi.org/10.1016/j.techfore.2020.120484] 120484.
Bereczki, EO; Kárpáti, A. Technology-enhanced creativity: A multiple case study of digital technology-integration expert teachers’ beliefs and practices. Thinking Skills and Creativity; 2021; 39, [DOI: https://dx.doi.org/10.1016/j.tsc.2021.100791] 100791.
Bocconi, S., Chioccariello, A., Dettori, G., Ferrari, A., & Engelhardt, K. (2016). Developing computational thinking in compulsory education - implications for policy and practice. https://doi.org/10.2791/792158
Boone, W. J., Staver, J. R., & Yale, M. S. (2013). Rasch analysis in the human sciences. Springer Science & Business Media.
Bond, T., Yan, Z., & Heene, M. (2020). Applying the Rasch model: Fundamental measurement in the human sciences. Routledge.
Caena, F; Redecker, C. Aligning teacher competence frameworks to 21st century challenges: The case for the European Digital Competence Framework for Educators. European Journal of Education; 2019; 54,
Çimşir, S; Kalelioğlu, F; Gülbahar, Y. Perceptions of primary school teachers on interdisciplinary computational thinking skills training. Informatics in Education; 2023; [DOI: https://dx.doi.org/10.15388/infedu.2024.16]
Clark, LA; Watson, D. Constructing validity: Basic issues in objective scale development. Psychological Assessment; 1995; 7,
Colpaert, J. Editorial position paper: How virtual is your research?. Computer Assisted Language Learning; 2020; 33,
Debarger, AH; Penuel, WR; Moorthy, S; Beauvineau, Y; Kennedy, CA; Boscardin, CK. Investigating purposeful science curriculum adaptation as a strategy to improve teaching and learning. Science Education; 2016; 101,
Ellison, S; Anderson, AB; Aronson, B; Clausen, C. From objects to subjects: Repositioning teachers as policy actors doing policy work. Teaching and Teacher Education; 2018; 74, pp. 157-169. [DOI: https://dx.doi.org/10.1016/j.tate.2018.05.001]
Falloon, G. Advancing young students’ computational thinking: An investigation of structured curriculum in early years primary schooling. Computers & Education; 2024; 216, [DOI: https://dx.doi.org/10.1016/j.compedu.2024.105045] 105045.
Fang, J-W; Shao, D; Hwang, G-J; Chang, S-C. From critique to computational thinking: A peer-assessment-supported problem identification, flow definition, coding, and testing approach for computer programming instruction. Journal of Educational Computing Research; 2022; 60,
Fornell, C; Larcker, D. Erratum: Structural equation models with unobservable variables and measurement error: Algebra and statistics. Journal of Marketing Research; 1981; 18,
Gambrell, J; Brewe, E. Analyzing interviews on computational thinking for introductory physics students: Toward a generalized assessment. Physical Review Physics Education Research; 2024; [DOI: https://dx.doi.org/10.1103/physrevphyseducres.20.010128]
Geng, Z; Zeng, B; Islam, AYMA; Zhang, X; Huang, J. Validating a measure of computational thinking skills in Chinese kindergartners. Education and Information Technologies; 2024; [DOI: https://dx.doi.org/10.1007/s10639-024-13100-4]
Gimeno-Sanz, A. Moving a step further from “integrative CALL”. What’s to come?. Computer Assisted Language Learning; 2015; 29,
Guion, RM. Content validity—the source of my discontent. Applied Psychological Measurement; 1977; 1,
Hallström, J. (2023). Introduction. In Programming and Computational Thinking in Technology Education (pp. 1–9). BRILL. https://doi.org/10.1163/9789004687912_001
Hair, J. F., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (2006). Multivariate data analysis (6th edn.). Pearson.
Henseler, J; Hubona, G; Ray, PA. Using PLS path modeling in new technology research: Updated guidelines. Industrial Management & Data Systems; 2016; 116,
Israel-Fishelson, R; Hershkovitz, A. Studying interrelations of computational thinking and creativity: A scoping review (2011–2020). Computers & Education; 2022; 176, [DOI: https://dx.doi.org/10.1016/j.compedu.2021.104353] 104353.
Kafai, YB; Proctor, C. A revaluation of computational thinking in K–12 education: Moving toward computational literacies. Educational Researcher; 2021; 51,
Kong, S-C; Wang, Y-Q. Monitoring cognitive development through the assessment of computational thinking practices: A longitudinal intervention on primary school students. Computers in Human Behavior; 2023; 145, [DOI: https://dx.doi.org/10.1016/j.chb.2023.107749] 107749.
Kong, S-C; Lai, M; Sun, D. Teacher development in computational thinking: Design and learning outcomes of programming concepts, practices and pedagogy. Computers & Education; 2020; 151, [DOI: https://dx.doi.org/10.1016/j.compedu.2020.103872] 103872.
Laime-Choque, A. M., Mamani-Calcina, J. G., Cardona-Reyes, H., Ponce-Aranibar, M. D. P., Vera-Vasquez, C. G., & Espinoza-Suarez, S. (2022). Attitude towards Computational Thinking of in-service teachers. In: 2022 XII International Conference on Virtual Campus (JICV), 42, 1–5. https://doi.org/10.1109/jicv56113.2022.9934274
Larsson, C; Sjöberg, L. Academized or deprofessionalized?– Policy discourses of teacher professionalism in relation to research-based education. Nordic Journal of Studies in Educational Policy; 2021; 7,
Li, X; Sang, G; Valcke, M; van Braak, J. The development of an assessment scale for computational thinking competence of in-service primary school teachers. Journal of Educational Computing Research; 2024; [DOI: https://dx.doi.org/10.1177/07356331241254575]
Marzi, G; Balzano, M; Marchiori, D. K-alpha calculator–Krippendorff’s alpha calculator: A user-friendly tool for computing Krippendorff’s alpha inter-rater reliability coefficient. MethodsX; 2024; 12, [DOI: https://dx.doi.org/10.1016/j.mex.2023.102545] 102545.
Masters, GN. A Rasch model for partial credit scoring. Psychometrika; 1982; 47,
McCoach, D. B., Gable, R. K., & Madura, J. P. (2013). Instrument development in the affective domain: School and corporate applications. Springer Science & Business Media.
Morin, A. J. S., Myers, N. D., & Lee, S. (2020). Modern factor analytic techniques. Handbook of Sport Psychology, 1044–1073. https://doi.org/10.1002/9781119568124.ch51
Musaeus, P., Tatar, D., & Rosen, M. (2017). Medical computational thinking: Computer scientific reasoning in the medical curriculum. In Emerging Research, Practice, and Policy on Computational Thinking (pp. 85–98). Springer International Publishing. https://doi.org/10.1007/978-3-319-52691-1_6
Newton, P., & Shaw, S. (2014). Validity in educational and psychological assessment. SAGE.
Palop, B; Díaz, I; Rodríguez-Muñiz, LJ; Santaengracia, JJ. Redefining computational thinking: A holistic framework and its implications for K-12 education. Education and Information Technologies; 2025; [DOI: https://dx.doi.org/10.1007/s10639-024-13297-4]
Patton, M. Q. (2002). Two decades of developments in qualitative inquiry. Qualitative Social Work, 1(3), 261–283. https://doi.org/10.1177/1473325002001003636
Rahimi, AR. Beyond digital competence and language teaching skills: The bi-level factors associated with EFL teachers’ 21st-century digital competence to cultivate 21st-century digital skills. Education and Information Technologies; 2023; 29,
Rahimi, AR; Mosalli, Z. The role of 21-century digital competence in shaping pre-service language teachers’ 21-century digital skills: The partial least square modeling Approach (PLS-SEM). Journal of Computers in Education; 2024; [DOI: https://dx.doi.org/10.1007/s40692-023-00307-6]
Rahimi, AR; Sevilla-Pavón, A. Scaling up computational thinking skills in computer-assisted language learning (CTsCALL) and its fitness with language learners’ intentions to use virtual exchange: A bi-symmetric approach. Computers in Human Behavior Reports; 2025; 17, [DOI: https://dx.doi.org/10.1016/j.chbr.2025.100607] 100607.
Rahimi, A. R., Sevilla-Pavón, A. (2025b). The role of design thinking skills in artificial-intelligence language learning (DEAILL) in shaping language learners’ L2 grit: The mediator and moderator role of artificial intelligence L2 motivational self-system. Computer Assisted Language Learning, 1–49. https://doi.org/10.1080/09588221.2025.2477710
Rich, KM; Yadav, A; Larimore, RA. Teacher implementation profiles for integrating computational thinking into elementary mathematics and science instruction. Education and Information Technologies; 2020; 25,
Rottenhofer, M., Kuka, L., Leitner, S., & Sabitzer, S. (2022). Using computational thinking to facilitate language learning: A survey of students’ strategy use in Austrian secondary schools. IAFOR Journal of Education, 10(2), 51–70. https://doi.org/10.22492/ije.10.2.03
Şahin, E; Sarı, U; Şen, ÖF. STEM professional development program for gifted education teachers: STEM lesson plan design competence, self-efficacy, computational thinking and entrepreneurial skills. Thinking Skills and Creativity; 2024; 51, [DOI: https://dx.doi.org/10.1016/j.tsc.2023.101439] 101439.
Slavec, A; Drnovšek, M. A perspective on scale development in entrepreneurship research. Economic and Business Review; 2012; [DOI: https://dx.doi.org/10.15458/2335-4216.1203]
Song, D; Hong, H; Oh, EY. Applying computational analysis of novice learners’ computer programming patterns to reveal self-regulated learning, computational thinking, and learning performance. Computers in Human Behavior; 2021; 120, [DOI: https://dx.doi.org/10.1016/j.chb.2021.106746] 106746.
Su, J; Yang, W. A systematic review of integrating computational thinking in early childhood education. Computers and Education Open; 2023; 4, [DOI: https://dx.doi.org/10.1016/j.caeo.2023.100122] 100122.
Tang, X; Yin, Y; Lin, Q; Hadad, R; Zhai, X. Assessing computational thinking: A systematic review of empirical studies. Computers & Education; 2020; 148, [DOI: https://dx.doi.org/10.1016/j.compedu.2019.103798] 103798.
Taylor, E. We agree, don’t we? The delphi method for health environments research. HERD: Health Environments Research Design Journal; 2019; 13,
Tikva, C; Tambouris, E. Mapping computational thinking through programming in K-12 education: A conceptual model based on a systematic literature review. Computers & Education; 2021; 162, [DOI: https://dx.doi.org/10.1016/j.compedu.2020.104083] 104083.
Tsai, M-J; Liang, J-C; Hsu, C-Y. The computational thinking scale for computer literacy education. Journal of Educational Computing Research; 2020; 59,
Ukkonen, A; Pajchel, K; Mifsud, L. Teachers’ understanding of assessing computational thinking. Computer Science Education; 2024; [DOI: https://dx.doi.org/10.1080/08993408.2024.2365566]
Ulrich Hoppe, H., & Werneburg, S. (2019). Computational thinking—more than a variant of scientific inquiry! In Computational Thinking Education (pp. 13–30). Springer Singapore. https://doi.org/10.1007/978-981-13-6528-7_2
Voon, XP; Wong, SL; Wong, L-H; Khambari, MNMd; Syed-Abdullah, SIS. Developing computational thinking competencies through constructivist argumentation learning: A problem-solving perspective. International Journal of Information and Education Technology; 2022; [DOI: https://dx.doi.org/10.18178/ijiet.2022.12.6.1650]
Wild, S; Rahn, S; Meyer, T. Factors mitigating the decline of motivation during the first academic year: A latent change score analysis. Motivation and Emotion; 2023; 48,
Willis, G. B. (2004). Cognitive interviewing: A tool for improving questionnaire design. SAGE Publications.
Wing, JM. Computational thinking. Communications of the ACM; 2006; 49,
Wu, T-T; Silitonga, LM; Murti, AT. Enhancing English writing and higher-order thinking skills through computational thinking. Computers & Education; 2024; 213, [DOI: https://dx.doi.org/10.1016/j.compedu.2024.105012] 105012.
Yu, X; Soto-Varela, R; Gutiérrez-García, MÁ. How to learn and teach a foreign language through computational thinking: Suggestions based on a systematic review. Thinking Skills and Creativity; 2024; 52, [DOI: https://dx.doi.org/10.1016/j.tsc.2024.101517] 101517.
Yuan, J; Brigandi, C; Rambo-Hernandez, K; Manley, C. Innovative ongoing support within a multifaceted computational thinking professional learning program improves teachers’ self-efficacy and classroom practices. Computers & Education; 2024; [DOI: https://dx.doi.org/10.1016/j.compedu.2024.105174]
© The Author(s) 2025. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/.