Content area
Computational thinking is an important skill that individuals should acquire to meet the requirements of the digital age. The aim of this study is to predict the computational thinking skills of middle school students through the ANFIS approach, an adaptive neural network-based fuzzy inference method. Students’ computational thinking skill scores were predicted by creating a model based on grade level and academic achievement: these two variables served as the model’s inputs, and computational thinking skill scores served as the model’s output. Data were collected using a personal information form and a computational thinking scale. Students’ real and artificial computational thinking skill scores were compared using statistical methods. A strong positive association was found between the artificial scores produced by the ANFIS technique and the actual scores. Furthermore, there was no statistically significant difference between the real and artificial computational thinking skill scores. These results indicate that the ANFIS approach is a suitable alternative analysis method for predicting students’ computational thinking skills. The study provides a good example of how artificial intelligence can be used in education to predict students’ educational characteristics.
Introduction
With rapidly developing technology, computational thinking (CT) has emerged as a critical skill for meeting the requirements of the digital age. CT encompasses skills such as problem solving, algorithmic thinking and abstraction, and enables individuals to design solutions with technological tools1,2. This globally recognized skill has been integrated into K-12 curricula in many countries and has been included in assessment frameworks such as PISA since 20213,4. As a new field, CT poses some challenges in terms of curriculum integration. However, international organizations such as the European Commission, UNESCO and OECD have emphasized the importance of CT and stated that digital literacy should be developed. These organizations have positioned CT as an essential component of compulsory education and a driving force for curricular innovations3–5. Due to its critical role in preparing individuals for the demands of the digital age, the limited presence of CT in the literature reveals the need for further research in this area. Moreover, the complex and multidimensional nature of CT skills makes them difficult to assess and predict with traditional statistical methods6,7.
This study proposes the Adaptive Neuro-Fuzzy Inference System (ANFIS) as an innovative AI-based method to predict the CT skills of middle school students. ANFIS combines the interpretive power of fuzzy logic with the data-driven learning capabilities of artificial neural networks to outperform traditional methods in modeling nonlinear relationships8,9. Comparisons in the literature that evaluate the effectiveness of the ANFIS method in estimating computational thinking skills indicate that ANFIS offers advantages over traditional statistical methods in empirical studies. For example, although Korkmaz et al.10 and Yıldız Durak and Sarıtepeci11 obtained significant results using ANOVA and structural equation modeling, respectively, in predicting computational thinking skills, these methods rest on linear assumptions and are limited in modeling complex relationships. In contrast, the structure of ANFIS, which combines fuzzy logic and neural networks, handles uncertainties and nonlinear relationships in training data more effectively. Taylan and Karagözoğlu12 stated that the ANFIS method produces clear numerical results in predicting academic performance, offers alternative solutions for imprecise data, and provides a more natural way to interpret students’ results. The correlation analysis findings in Roman-Gonzalez et al.’s13 study on computational thinking skills also reveal that the prediction accuracy of ANFIS is comparable to the literature, while its automatic rule generation feature yields a more adaptive model. These findings demonstrate the superiority of ANFIS in modeling complex datasets in education, its effectiveness in predicting CT skills, and its unique contribution.
In this study, grade level and academic achievement were selected as input variables; these variables are prominent in the literature due to their relationship with CT skills11,13,14. They are linked to cognitive components of CT such as problem solving and algorithmic thinking, developmental stages and mathematical skills15–17. Using ANFIS, this study aims to demonstrate the potential of this approach to model complex patterns in educational data and to contribute to pedagogical strategies for developing 21st century skills.
Computational thinking skills
Computational thinking (CT) is a multifaceted way of thinking that encompasses skills such as problem solving, algorithmic thinking, creative thinking, critical thinking, communication and collaborative learning18. These skills enable individuals to solve complex problems systematically and use technology effectively19. Below, the main components of CT are briefly explained:
Algorithmic thinking
Algorithmic thinking involves understanding problems, determining solution steps and breaking down complex problems into sub-problems20. In education, it strengthens individuals’ ability to plan and strategize21.
Creative thinking
Creativity is the ability to generate new ideas and develop different approaches to problems22. In CT, creative thinking enriches problem solving processes23.
Critical thinking
Critical thinking enables generating solutions to problems with a questioning approach and drawing reliable conclusions24. It has a complementary relationship with CT25.
Problem solving
Problem solving is the process of overcoming obstacles to achieve goals and is a fundamental component of CT26,27.
Communication skills
Effective communication enables individuals to clearly express their ideas and establish meaningful connections with others28.
Cooperative learning
Cooperative learning encourages group work towards common goals and plays an important role in developing CT skills29.
CT skills are among the 21st century skills and their development from an early age in education increases students’ analytical and innovative thinking capacities30. This study aims to contribute to the more effective development of these skills in educational systems by predicting CT skills using the ANFIS model.
CT skills have been examined in the literature in terms of different variables. Czerkawski and Lyman31 emphasized the potential of CT as an interdisciplinary skill in higher education, but revealed that more research and interdisciplinary cooperation are needed for the dissemination of this skill; Levi Weese32 and Swaid33 emphasized that applied STEM activities can increase students’ CT self-efficacy. Korkmaz et al.10 reported that CT skills of university students vary depending on variables such as grade level, school type, age and gender. Atmatzidou and Demetriadis15 reported that robotics and coding activities increased CT skills; Sarıtepeci34 reported that daily technology use was effective on the level of CT skills and that most of the female students performed better than males in the CT sub-dimension. Yıldız Durak and Sarıtepeci11 stated that students’ success in mathematics positively affects their computational thinking skill levels. These studies reveal the multifaceted nature of CT and its relationship with different variables.
The effect of grade level on computational thinking skills
The effect of grade level on CT skills is supported by various studies in the literature. Grover and Pea6 reported that students in higher grades K-12 were more successful in complex CT skills such as abstraction and algorithmic thinking. Kalelioglu and Gülbahar35 found that 7th and 8th grade students showed superior performance in problem solving and algorithm generation skills using Scratch compared to lower grades. Bers et al.36 emphasized that grade level affects the development of CT components such as sequencing and modularity from early childhood. Weintrop et al.17 stated that students at advanced grade levels are more competent in CT skills such as data analysis and modeling. Atmatzidou and Demetriadis15 emphasized that grade level has an impact on learning outcomes; Yıldız Durak and Sarıtepeci11 emphasized that CT skills generally increase with grade level. Tran37 showed that 3rd grade students’ CT skills were at a basic level, but their algorithmic thinking and debugging skills accelerated as the grade level progressed. Angeli and Giannakos38 emphasized that 8th grade students were more successful in complex CT skills such as decomposition and generalization and emphasized the importance of learning activities appropriate to the grade level. These studies show that grade level plays a critical role in the development of CT skills.
The effect of academic achievement on computational thinking skills
CT shares common components such as mathematical problem solving and algorithmic thinking16,17. Fidelis Costa et al.39 found that the integration of CT into mathematics instruction strengthens problem solving skills. Bih et al.40 showed that structured learning activities simultaneously enhance CT and mathematical thinking. Yıldız Durak and Sarıtepeci11 and Roman-Gonzalez et al.13 reported a positive correlation between math achievement and CT skills (R2 = 0.45, p < 0.05; r = 0.40, p < 0.01, respectively). These findings support that academic achievement, especially in mathematics, is a strong predictor of CT skills. In addition, Yıldız Durak and Sarıtepeci11 emphasized that CT skills have a significant relationship with thinking styles. Roman-Gonzalez et al.13 stated that CT skills show a positive relationship with cognitive maturity, especially with grade level. Atmatzidou and Demetriadis15 stated that grade level plays a critical role in the development of CT skills because these skills are compatible with the stages of cognitive development. Academic achievement exhibits a conceptual overlap with CT components such as problem solving and algorithmic thinking and stands out as an important determinant in the development of these skills17.
These variables were chosen because of their compatibility with the structural requirements of the ANFIS model due to their measurable and categorical nature and their well-documented relationships in educational performance models. Other potential variables (e.g., digital literacy, technology usage time) were considered but excluded in this study due to limitations of the dataset and compatibility with the fuzzy logic framework of the model.
Neuro-fuzzy systems
Artificial intelligence (AI) technologies generally encompass a variety of methods, including expert systems, fuzzy logic, artificial neural networks, machine learning, and genetic algorithms41. Since the current study employs the ANFIS, which integrates fuzzy logic and artificial neural networks, explanations of both components are provided below.
Unlike classical logic, fuzzy logic produces approximate rather than exact results41. It is designed to address complex and ambiguous situations commonly encountered in everyday life by offering simplified solutions. Fuzzy logic enables the analysis of systems that operate under uncertain conditions and are not clearly defined. It achieves this by using a limited number of membership functions to represent a broad problem space, resulting in a more compact rule base and faster computation. Moreover, when dealing with high-complexity problems, classical logic methods can be both challenging and costly. In contrast, fuzzy logic provides more cost-effective solutions and enables more efficient analysis of such problems.
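To make the idea of membership functions concrete, the sketch below shows how a Gaussian membership function assigns partial degrees of membership. The fuzzy sets and their parameters here are illustrative assumptions, not values from the study.

```python
import math

def gaussian_mf(x, mean, sigma):
    """Degree of membership of x in a Gaussian fuzzy set (between 0 and 1)."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

# Hypothetical fuzzy sets "low" and "high" on a 1-5 achievement scale.
# A GPA of 3.5 belongs partially to both sets rather than to exactly one.
membership_low = gaussian_mf(3.5, mean=1.0, sigma=1.5)
membership_high = gaussian_mf(3.5, mean=5.0, sigma=1.5)
```

Unlike classical (crisp) sets, both memberships are nonzero, which is what allows a fuzzy rule base to cover a broad problem space with only a few functions.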
Artificial neural networks (ANNs), on the other hand, possess the capability to learn and generalize from data without relying on traditional rule-based programming. These networks can not only learn autonomously but also store information and identify relationships between data points. They are particularly effective in handling problems characterized by uncertainty and incomplete information. ANNs offer modelers an automated means of identifying relevant variables, thereby simplifying the modeling process. This relieves decision-makers from having to manually select variables, determine the optimal model structure, or adhere to the parametric assumptions required by traditional statistical approaches. Furthermore, due to their capacity to process large datasets and detect complex patterns, artificial neural networks tend to outperform traditional methods in educational contexts.
ANFIS is a hybrid computational model that integrates two flexible approaches: ANNs and fuzzy logic8. Developed by Jyh-Shing Jang42, ANFIS is a learning algorithm designed to map inputs to desired outputs by linking information through interconnected neural network processing units guided by fuzzy logic rules43. It is widely favored for its distinctive features, including advanced learning capabilities, parallel processing, symbolic representation of structured information, and rapid integration with various control design methods. In the ANFIS approach, a suitable fuzzy rule set is derived from the input and output datasets, and the application parameters are systematically adjusted through an adaptive network structure44.
ANFIS combines the qualitative reasoning capacity of fuzzy logic—capable of representing human-like decision-making—and the data-driven learning strength of artificial neural networks. One key limitation of fuzzy logic is the subjectivity and increased error rate stemming from the complexity of defining membership functions and translating human reasoning into rule-based systems9. Artificial neural networks address this limitation by enabling the automatic optimization of membership functions, thereby reducing the error rate and enhancing the robustness of fuzzy logic systems8. By merging the strengths of both methods, ANFIS minimizes subjectivity in rule creation and improves the model’s adaptability and interpretability.
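As a hedged illustration of how such a neuro-fuzzy system evaluates its rules, the sketch below implements the forward pass of a tiny first-order Sugeno system with two rules. All membership parameters and consequent coefficients are invented for illustration; in ANFIS these would be tuned automatically from training data.

```python
import math

def gauss(x, m, s):
    """Gaussian membership degree of x in a set centered at m with width s."""
    return math.exp(-((x - m) ** 2) / (2 * s ** 2))

def sugeno_forward(grade, gpa):
    """Forward pass of a two-rule first-order Sugeno fuzzy system.

    All parameters below are hypothetical; ANFIS would learn them.
    """
    # Layers 1-2: rule firing strengths (product t-norm over input sets)
    w1 = gauss(grade, 5, 1.5) * gauss(gpa, 2, 1.0)  # "low grade, low GPA"
    w2 = gauss(grade, 8, 1.5) * gauss(gpa, 5, 1.0)  # "high grade, high GPA"
    # Layer 4: first-order consequents f_i = p*grade + q*gpa + r
    f1 = 2.0 * grade + 3.0 * gpa + 50.0
    f2 = 1.0 * grade + 4.0 * gpa + 45.0
    # Layers 3 and 5: normalized weighted average of rule outputs
    return (w1 * f1 + w2 * f2) / (w1 + w2)
```

The output is always a convex combination of the rule consequents, so adjusting the membership parameters shifts how strongly each rule contributes for a given student profile.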
Recent literature has increasingly applied ANFIS and its derivatives to complex system identification tasks, offering alternative approaches to characterize dynamic system behavior. However, the current study differs in that it applies ANFIS to educational data, specifically using the pedagogical construct of computational thinking as the outcome variable. Thus, it bridges artificial intelligence-based analysis with an educational focus.
Traditional statistical methods often rely on assumptions such as linear relationships and normally distributed data. In contrast, ANFIS is capable of modeling complex, nonlinear relationships by combining fuzzy logic’s interpretive strengths with neural networks’ learning capabilities. This hybrid structure allows for flexible and accurate modeling even with incomplete or irregular data, conditions commonly encountered in educational research, where student performance is shaped by the interplay of numerous factors. Consequently, ANFIS presents an innovative and adaptive alternative to conventional prediction techniques.
The present study
This study contributes to multivariate modeling approaches in educational research by demonstrating the applicability of ANFIS—an alternative artificial intelligence-based analysis method—in predicting students’ computational thinking skills. In this context, the findings align with both national and international research that explores the effects of multiple variables on language acquisition and cognitive skill development. For example, Zhang and Lu’s45 study titled “What can multi-factors contribute to Chinese EFL learners’ implicit L2 knowledge?” exemplifies the use of a multivariate approach to examine how individual differences influence learning outcomes. Similarly, the present study employs a multifactorial analysis framework to model a complex skill—computational thinking—based on variables such as grade level and academic achievement.
Göktepe Körpeoğlu and Göktepe Yıldız41 implemented an AI-based approach using ANFIS to evaluate students’ STEM attitudes, finding no significant difference between actual and ANFIS-predicted STEM scores. However, grade level and academic achievement were shown to have statistically significant effects on students’ attitudes. Stojanovic et al.46 utilized the ANFIS method to assess students’ mathematics knowledge following distance education, identifying key factors that influenced academic performance. Taylan and Karagözoğlu12 developed a systematic approach for designing a neural network-based fuzzy inference system to evaluate academic achievement, concluding that ANFIS results are comparable in accuracy to those of traditional statistical methods, while offering more intuitive interpretations. In another application, Daneshvar et al.47 used an intelligent ANFIS model to evaluate teacher performance in academic e-learning systems. Mehdi and Nachouki48 built a predictive and explanatory model using ANFIS to estimate the grade point average (GPA) of information technology students at Ajman University.
Despite the growing use of ANFIS in educational research, no studies were found that specifically estimated computational thinking skills using this method. Therefore, this study aims to fill that gap by addressing the following research questions, thereby evaluating the applicability of the ANFIS method in educational contexts and offering an alternative approach for assessing students’ computational thinking skills:
Can the ANFIS technique predict students’ computational thinking abilities?
Do the actual scores of students’ computational thinking skills differ according to grade level and academic achievement variables?
Does the artificial intelligence score produced by the ANFIS approach differ from the real scores of students’ computational thinking abilities?
Methods
Research design
The descriptive survey design, one of the quantitative research methods, was used to perform the study. According to Büyüköztürk49, the descriptive survey design is a research method used to gather data from a large population or sample in order to make a general judgment.
Sample of the study
The sample of the study was selected using a combination of convenience and stratified sampling methods. While convenience sampling was initially employed to identify accessible participants, stratified sampling was used to enhance the representativeness of the sample. Stratified sampling involves dividing the population into subgroups—or strata—based on specific characteristics (e.g., grade level, academic achievement, gender), and selecting proportional or disproportionate samples from each stratum50. This approach is effective for capturing nuanced trends and ensuring subgroup representation in heterogeneous populations11. In this context, stratified sampling was utilized to mitigate the limitations of convenience sampling, thereby increasing the model’s generalizability and capturing differences across subgroups such as gender and geographic region.
The study was conducted with a total of 330 middle school students (178 females and 152 males) enrolled in grades 5 through 8 at various schools located in Istanbul and the Lüleburgaz district of Kırklareli province, Turkey. The sample consisted of 61 fifth-grade, 107 sixth-grade, 88 seventh-grade, and 74 eighth-grade students. Data were collected during the spring term of the 2022–2023 academic year.
Prior to the commencement of the study, ethical approval was obtained from the Biruni University Ethics Committee (Protocol Code: 2024-BİAEK/03–22), and necessary permissions were secured from each participating school. Participation in the study was voluntary, and informed consent was obtained from all participants. Personal information was kept strictly confidential and used solely for research purposes. All procedures and methods were conducted in accordance with relevant ethical guidelines and regulations.
Data collection tools
The study employed the demographic information form and the computational thinking scale to collect data. The data collection tools are explained below.
Computational thinking scale
The Computational Thinking Scale (CTS) developed by Korkmaz et al.51 was used in this study to assess the computational thinking skill levels of middle school students. The scale consists of 22 items rated on a 5-point Likert scale and encompasses five dimensions: creativity, algorithmic thinking, collaboration, critical thinking, and problem solving. The problem-solving dimension includes six items. The original study reported Cronbach’s alpha reliability coefficients for the scale’s dimensions as follows: 0.843 (creativity), 0.869 (algorithmic thinking), 0.865 (collaboration), 0.784 (critical thinking), 0.727 (problem solving), and 0.809 for the entire scale.
In the current study, the internal consistency reliability coefficients were recalculated and found to be 0.560 for creativity, 0.692 for algorithmic thinking, 0.776 for collaboration, 0.731 for critical thinking, 0.718 for problem solving, and 0.846 for the overall scale.
The minimum and maximum scores attainable on the scale are 22 and 110, respectively. For the purpose of this study, score ranges were interpreted as follows: 22–50 indicates low computational thinking skills, 51–80 indicates moderate, and 81–110 indicates high skill levels. The table presenting students’ actual computational thinking scores alongside the artificial scores predicted by the ANFIS model is provided in the appendix.
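The score banding described above can be expressed as a small helper function; the cut-offs are exactly those used in the study, while the function name is our own.

```python
def ct_level(total_score):
    """Map a Computational Thinking Scale total (22-110) to the study's bands."""
    if not 22 <= total_score <= 110:
        raise ValueError("total score must lie between 22 and 110")
    if total_score <= 50:
        return "low"       # 22-50: low computational thinking skills
    if total_score <= 80:
        return "moderate"  # 51-80: moderate skills
    return "high"          # 81-110: high skills
```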
Demographic information form
The demographic information form included questions about students’ background characteristics such as grade level and academic achievement. In this study, only the data related to grade level and academic achievement were utilized, as these variables are compatible with fuzzy logic-based modeling. Other demographic variables were excluded from the analysis due to their incompatibility with the fuzzy logic framework.
Academic achievement was operationalized using the students’ grade point averages (GPA) from their mathematics course in the previous semester. As the study was conducted in 2023, the GPA data reflect student performance from the 2022 academic year. Students reported their GPAs on a 5-point grading scale, with scores ranging from 1 (lowest) to 5 (highest). The distribution of students’ GPAs was as follows: 1 student with a GPA of 1, 20 students with a GPA of 2, 76 students with a GPA of 3, 65 students with a GPA of 4, and 168 students with a GPA of 5.
Data analysis
The fuzzy system of data analysis was performed in MATLAB software via Fuzzy Logic Toolbox R2021b. Input and output values are required for ANFIS modelling, which is a combination of fuzzy logic and ANN52. In this study, students’ grade level and academic achievement were used as input variables and computational thinking skill scores were used as output variables. As a result, a set of rules modelling the data behavior was created with the ANFIS method. In the ANFIS model, unlike the fuzzy logic approach, the rules are created by interpreting the training data through artificial intelligence. These rules are automatically generated by the Toolboxes in MATLAB.
Membership functions were chosen as Gaussian type in accordance with the structure of the dataset (grade level: grades 5–8; academic achievement: 1–5 grade system). Four functions were used for grade level and five for academic achievement. This choice is based on the education system in Turkey and previous ANFIS studies9,41.
In this study, the number of training epochs was set to 100. This value was taken as the point at which the model’s error rate decreased steadily and beyond which overfitting was not observed. In the pre-tests, 50, 100 and 150 epochs were tried, and at 100 epochs both the learning and generalization performance of the model stabilized. At fewer epochs the accuracy of the model decreased, while at more epochs the risk of overfitting to the training data increased.
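Under a grid-partitioned configuration like the one described above (four Gaussian functions for grade level, five for academic achievement), an ANFIS model generates one rule per combination of input sets. The sketch below simply counts that rule base; the listed centers are hypothetical, since ANFIS tunes the actual means and widths during training.

```python
# Hypothetical Gaussian membership-function centers mirroring the setup
# described in the text; ANFIS would adjust these during training.
grade_centers = [5, 6, 7, 8]           # four functions for grades 5-8
achievement_centers = [1, 2, 3, 4, 5]  # five functions for the 1-5 GPA scale

# Grid partitioning: one fuzzy rule for every pair of input sets.
n_rules = len(grade_centers) * len(achievement_centers)
```

This full grid (20 rules here) is what MATLAB’s toolbox generates automatically from the training data, as noted in the text.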
Statistical analyses for the second and third research questions were performed using SPSS 20.0 software. Differences in the actual scores of students’ computational thinking skills according to the grade level and academic achievement variables were tested with one-way ANOVA.
The students’ computational thinking abilities were represented by both real scores and artificial scores generated by the ANFIS model, and a paired samples t-test and Pearson correlation were used to compare the two sets of scores.
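The paired comparison of real and artificial scores rests on standard formulas, which the sketch below implements with only the Python standard library and toy data; it is an illustration of the two statistics, not the study’s SPSS procedure.

```python
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired score lists."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def paired_t(xs, ys):
    """t statistic of a paired samples t-test (df = n - 1)."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / n ** 0.5)

# Toy real and ANFIS-predicted scores (illustrative values only).
real = [70, 75, 80]
predicted = [71, 74, 79]
```

A small t statistic together with a large positive r is the pattern the study reports: the two score sets track each other closely without differing systematically.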
Results
Results for the first research question
In the first research question, students’ computational thinking skills were estimated with the ANFIS approach through the MATLAB R2021b Fuzzy Logic Toolbox. Of the dataset of 330 middle school students’ responses, 230 (70%) were used to train the model and the remaining 100 (30%) were used to test it. Randomization was used to determine which records were assigned to the test set53,54. Randomization ensures that the dataset is randomly divided into training and test sets, so that both sets represent the characteristics of the main dataset (distribution, variance, class proportions, etc.). Good randomization reduces bias in the dataset and ensures that both the training and test sets accurately reflect the population. For example, if a particular class (e.g., positive/negative comments) is imbalanced in the dataset, randomization ensures that this imbalance is similarly distributed across both sets55. Randomization also helps prevent the model from learning patterns specific only to the training data (overfitting). When the training and test sets are randomized, the situations the model encounters in the test data reflect general characteristics of the population, independent of the training data, which increases the model’s ability to generalize to real-world data56.
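The random 70/30 partition described above can be sketched as follows; the seed and helper name are our own choices, and the study’s exact randomization procedure may differ.

```python
import random

def split_dataset(records, test_ratio=0.3, seed=42):
    """Shuffle records, then hold out a test_ratio share as the test set.

    Shuffling before splitting helps both subsets reflect the
    distribution of the full dataset, as discussed in the text.
    """
    shuffled = list(records)
    random.Random(seed).shuffle(shuffled)
    n_test = round(len(shuffled) * test_ratio)
    return shuffled[n_test:], shuffled[:n_test]

# With 330 records and a 0.3 test ratio this yields 99 test and 231
# training records; the study itself used a 100/230 split.
train, test = split_dataset(range(330))
```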
The model comprises four membership functions for students’ grade level and five membership functions for academic achievement as inputs, and three membership functions for the computational thinking skill scores as output. Table 1 shows the fuzzy set parameters of the input and output variables. The data were obtained from the students’ responses to the computational thinking scale.
Table 1. Fuzzy sets of variables.
Fuzzy sets of input parameters | | Fuzzy sets of output parameters |
|---|---|---|
Grade level | Academic achievement | Computational thinking skills |
5th grade | Too low | Low |
6th grade | Low | Medium |
7th grade | Medium | High |
8th grade | Good | |
 | Excellent | |
As seen in Table 1, the input parameters are grade level and academic achievement, and the output parameter is computational thinking skill.
Figure 2 visualizes the rule base of the ANFIS model and shows the effect of the input variables on the output.
The adaptive neuro-fuzzy rule-based model developed in the study is given in Fig. 1. According to the firing strength of the rules, the membership function’s area between the input and output axes is colored. The right side of Fig. 2 shows one of the results obtained by a student from the neuro-fuzzy model, while the left side shows some technical features of the model built by loading the training and test data. The definitions of the membership functions for the model’s input and output variables can be found in Table 1. A total of 1000 cycles were run and the final prediction was obtained. Figure 3 shows the fuzzy sets of the input variables, and Fig. 4 shows the ANFIS model developed in this study.
Fig. 1 [Images not available. See PDF.]
The structure of ANFIS model.
Fig. 2 [Images not available. See PDF.]
Rule viewer of the ANFIS.
Fig. 3 [Images not available. See PDF.]
Fuzzy sets of input variables grade level and achievement level.
Fig. 4 [Images not available. See PDF.]
The developed ANFIS model.
Results for the second research question
In the second research question, it was determined whether the actual scores of students’ computational thinking skills differed according to the grade level and academic achievement variables. These variables were analyzed using descriptive statistics and one-way ANOVA tests. Through these analyses, it was determined whether the grade level and academic achievement variables are appropriate for predicting students’ computational thinking abilities and for making predictions with the ANFIS model.
Table 2 shows the descriptive statistics results of the students’ computational thinking skill real scores according to their grade levels.
Table 2. Descriptive statistics results of computational thinking skill scores according to grade level.
5th grade | 6th grade | 7th grade | 8th grade | |||||
|---|---|---|---|---|---|---|---|---|
N | Mean | N | Mean | N | Mean | N | Mean | |
Computational thinking skill scores | 61 | 73.9 | 107 | 74.1 | 87 | 72.2 | 75 | 71.7 |
The findings in Table 2 show that the mean computational thinking skill scores were 73.9 for fifth-grade, 74.1 for sixth-grade, 72.2 for seventh-grade, and 71.7 for eighth-grade students. The highest mean belonged to 6th grade students and the lowest to 8th grade students. In Table 3, whether these means differ significantly according to grade level is examined through a one-way ANOVA test.
Table 4. Descriptive statistics results of computational thinking skill scores according to academic achievement level.
Too low | Low | Medium | Good | Excellent | ||||||
|---|---|---|---|---|---|---|---|---|---|---|
N | Mean | N | Mean | N | Mean | N | Mean | N | Mean | |
Computational thinking skill scores | 1 | 82.0 | 20 | 67.5 | 76 | 70.7 | 65 | 73.7 | 168 | 74.4 |
The one-way ANOVA results in Table 3 show that the computational thinking skill scores of middle school students do not differ significantly according to grade level (p > 0.05).
Table 3. One-way ANOVA results of computational thinking skill scores according to grade level.
Sum of squares | df | Mean square | F | P | |
|---|---|---|---|---|---|
Between Groups | 353.303 | 3 | 117.768 | 2.230 | 0.085 |
Within Groups | 17215.715 | 326 | 52.809 | ||
Total | 17569.018 | 329 |
Table 4 shows the descriptive statistics for students’ real computational thinking skill scores according to their academic achievement levels. Grade point averages between 0 and 20 were classified as too low, 21–40 as low, 41–60 as medium, 61–80 as good, and 81–100 as excellent.
The mean computational thinking skill scores were 82.0 for students with too low grade point averages, 67.5 for low, 70.7 for medium, 73.7 for good, and 74.4 for excellent averages. Students with a too low grade point average had the highest computational thinking skill scores, although this group contained only one student, and students with a low grade point average had the lowest scores. In Table 5, a one-way ANOVA test is used to determine whether these mean scores differ significantly according to academic achievement level.
Table 5. Computational thinking skill scores by academic attainment level as determined by one-way ANOVA.
| | Sum of squares | df | Mean square | F | p |
|---|---|---|---|---|---|
| Between groups | 1444.650 | 4 | 361.163 | 7.280 | 0.000 |
| Within groups | 16124.368 | 325 | 49.613 | | |
| Total | 17569.018 | 329 | | | |
The one-way ANOVA results in Table 5 show that middle school students' computational thinking skill scores differ significantly according to academic achievement level (p < 0.05).
Results for the third research question
The third research question aims to determine whether there is a difference between the actual scores of the students’ computational thinking skills and the artificial intelligence scores created with the ANFIS approach. The descriptive statistics results for the real scores and artificial intelligence scores are given in Table 6.
Table 6. Descriptive statistics results for real scores and artificial intelligence scores.
| Computational thinking skill scores | N | Min | Max | X | SD |
|---|---|---|---|---|---|
| Real scores | 330 | 36.0 | 97.0 | 73.05 | 7.30 |
| Artificial scores | 330 | 60.0 | 82.0 | 72.69 | 2.65 |
Students' scores on the computational thinking skills scale ranged from 36 to 97, with a mean of 73.05, which corresponds to a medium level on the scale (22–50 low, 51–80 medium, 81–110 high). The artificial intelligence scores generated with the ANFIS approach ranged from 60 to 82, with a mean of 72.69; thus, the computational thinking skills estimated with the ANFIS approach are also at a medium level. As Table 6 shows, the actual mean score of students' computational thinking skills and the mean score predicted by the ANFIS-based artificial intelligence model are remarkably similar.
Table 7. Paired samples t-test results of computational thinking skill scores.
| Computational thinking skill scores | N | Mean | SD | df | t | p |
|---|---|---|---|---|---|---|
| Real scores | 330 | 73.05 | 7.30 | 329 | 0.948 | 0.344 |
| Artificial scores | 330 | 72.69 | 2.65 | | | |
Table 7 presents the paired samples t-test comparing students' actual computational thinking scores with the artificial intelligence scores produced by the ANFIS approach. No significant difference was found between the two sets of scores (p > 0.05). In other words, the ANFIS approach yields predictions consistent with students' actual computational thinking skill scores.
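The comparison in Table 7 rests on the paired-samples t statistic: the mean of the pairwise differences divided by the standard error of that mean. As a minimal illustration with made-up numbers (not the study's data), it can be computed directly:

```python
import numpy as np

def paired_t(real, pred):
    """Paired-samples t statistic: mean of the pairwise differences
    divided by the standard error of that mean."""
    d = np.asarray(real, dtype=float) - np.asarray(pred, dtype=float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1  # t statistic and its degrees of freedom

# Tiny invented example: the differences cancel out, so t = 0
# (no mean difference between the two sets of scores).
t, df = paired_t([70.0, 75.0, 80.0, 72.0], [69.0, 76.0, 79.0, 73.0])
print(t, df)  # -> 0.0 3
```

With the study's 330 pairs, the same computation yields t = 0.948 with df = 329, which is why the difference in Table 7 is non-significant.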
Chi-square and correlation analyses were performed to examine the relationship between students' actual computational thinking skill scores and the artificial intelligence scores generated by the ANFIS approach (Tables 8 and 9).
Table 8. Chi-square tests.
| | Value | df | Asymp. Sig. (2-sided) |
|---|---|---|---|
| Pearson Chi-Square | 850.205 | 646 | 0.000 |
| Likelihood Ratio | 490.964 | 646 | 1.000 |
| Linear-by-Linear Association | 41.857 | 1 | 0.000 |
| N of Valid Cases | 330 | | |
Table 9. Correlation analysis between students’ real scores and artificial intelligence scores.
| | | Real scores | Artificial scores |
|---|---|---|---|
| Real scores | Pearson Correlation | 1 | 0.357 |
| | Sig. (2-tailed) | | 0.000 |
| | N | 330 | 330 |
| Artificial scores | Pearson Correlation | 0.357 | 1 |
| | Sig. (2-tailed) | 0.000 | |
| | N | 330 | 330 |
Table 8 demonstrates a statistically significant association between the actual computational thinking scores of the students and the artificial intelligence scores produced using the ANFIS approach (p < 0.05).
Analysis of Table 9 reveals a positive, statistically significant correlation between the actual computational thinking scores of the students and the artificial intelligence scores produced by the ANFIS approach (r = 0.357, p < 0.001). This indicates a moderate relationship between the real scores and the ANFIS-generated scores.
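The Pearson coefficient reported in Table 9 is the standard sample correlation between the two score vectors. A small sketch with invented numbers (not the study's data) shows the computation:

```python
import numpy as np

# Pearson correlation between real and model-predicted scores, as in
# Table 9, computed on a small made-up example (not the study's data).
real = np.array([65.0, 70.0, 73.0, 78.0, 85.0])
pred = np.array([68.0, 71.0, 72.0, 74.0, 79.0])
r = np.corrcoef(real, pred)[0, 1]  # off-diagonal entry of the 2x2 correlation matrix
print(round(r, 3))  # -> 0.992
```

Applied to the study's 330 score pairs, the same computation gives r = 0.357, the moderate correlation interpreted above.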
Discussion and conclusion
In this study, an ANFIS model was developed to predict the CT skills of middle school students. Real data were obtained from students’ responses to a demographic information form and the Computational Thinking Scale. Once data collection was completed, the input and output variables were entered into the system. Grade level and academic achievement were used as the model’s inputs, while computational thinking skill scores served as the output variable. Using these two inputs, the model was trained to generate predicted CT scores. For the first research question, the ANFIS approach was employed to estimate CT scores by utilizing grade level and academic achievement. A fuzzy rule set was created using the input-output dataset, and the centroid method was selected for defuzzification, which identifies the center of gravity of the area under the curve. To ensure accurate predictions, the model was trained over 1,000 iterations, after which students’ CT scores were estimated. A review of the relevant literature indicated that no previous study had applied ANFIS to estimate students’ computational thinking skills, which underscores the originality of this research.
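The centroid defuzzification step named above can be sketched in a few lines of numpy; the triangular output fuzzy set below is invented purely for illustration, not taken from the study's model:

```python
import numpy as np

def centroid_defuzzify(x, mu):
    """Centroid (center-of-gravity) defuzzification: the weighted average
    of the output domain, weighted by the membership degrees."""
    return float(np.sum(x * mu) / np.sum(mu))

# Illustrative triangular output fuzzy set centered at a CT score of 73
# (an invented example, not the study's actual output sets).
x = np.linspace(60, 86, 261)                     # discretized output domain
mu = np.maximum(0.0, 1.0 - np.abs(x - 73) / 13)  # triangular membership degrees
crisp = centroid_defuzzify(x, mu)
print(round(crisp, 1))  # symmetric set, so the centroid sits at its peak: 73.0
```

In the trained model, the aggregated output of all fired rules (rather than a single triangle) is defuzzified this way to produce each student's crisp predicted score.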
The second research question involved conducting statistical analyses to determine whether students' actual CT scores varied by grade level and academic achievement. To ensure consistency, the same variables used in the ANFIS model were analyzed statistically. Although developmental theories such as those of Piaget and Vygotsky suggest that cognitive skills improve with grade level, the present study found no statistically significant difference in CT scores across grades. This outcome may be explained by the limited geographical scope of the sample (Istanbul and Lüleburgaz), the absence of differentiated CT instruction across grade levels, or the strong influence of individual variables that may have overshadowed the effect of grade. Additionally, the grade level variable may have exhibited less fuzziness compared to academic achievement, which could have limited its influence in the model. Findings from the literature support both the presence and absence of grade-level effects on CT skills. For example, studies by Çoban57 and Atmatzidou and Demetriadis15 found no significant differences in CT scores based on grade level, which is consistent with the current study. Conversely, other studies reported divergent findings. Korkmaz et al.10 found a decline in university students' CT skills as their grade levels increased, whereas Gonzalez et al.58, Bilge Kunduz59, and others observed an increase in CT skills with higher grade levels. Oluk14 likewise found a negative relationship, with CT skills decreasing as grade level increased.
On the other hand, academic achievement showed a significant relationship with students' CT scores. This aligns with previous research suggesting a positive correlation between students' performance in mathematics and their computational thinking abilities. Oluk14 found a moderate, positive relationship between CT skills and mathematics achievement, and similar associations were reported by Liu and Wang16, Barcelos and Silveira60, and Weintrop et al.17, who emphasized the conceptual overlap between computational thinking and mathematical problem solving.
For the third research question, descriptive statistics indicated that students' CT skill levels were at a medium level based on their actual scores from the Computational Thinking Scale. This finding is broadly consistent with previous studies: Sarıtepeci34 found that 72.95% of tenth-grade students had moderate CT perception levels and 27.05% had high levels, Korkmaz et al.10 reported high levels of CT among students, and Çakır61 noted that seventh-grade students exhibited above-average CT skills. These results may be attributed to students' increased familiarity with 21st-century skills and their strong orientation toward technology use62,63.
The comparison between actual CT scores and those predicted by the ANFIS model revealed a high degree of similarity. Statistical tests showed no significant difference between the real and predicted scores, and a significant, moderate positive correlation was observed. These findings demonstrate that ANFIS is a reliable and valid method for estimating students' computational thinking skills. When methods such as Random Forest and Long Short-Term Memory (LSTM) networks, which have come to the fore in modelling educational data in recent years, are also taken into consideration, the strengths of ANFIS become more clearly understood. For example, while Random Forest models offer high prediction accuracy and robust feature selection, they have limitations in terms of interpretability64–67. LSTM models, on the other hand, can perform better on dynamic datasets with strong temporal dependencies, but may not always be feasible in educational contexts due to high computational costs and large data requirements68. In contrast, ANFIS integrates expert knowledge into rules through its fuzzy rule-based structure while also utilising the learning capabilities of artificial neural networks, thereby enabling teachers and educators to interpret results more easily. Furthermore, ANFIS demonstrates balanced performance between accuracy and flexibility in small and heterogeneous data sets, which are frequently encountered in educational research. Numerous studies have also demonstrated that ANFIS outperforms traditional methods such as artificial neural networks (ANNs), support vector machines (SVMs), multiple linear regression, and independent fuzzy logic systems in terms of prediction accuracy and robustness9,69–71. Therefore, the ANFIS model used in our study offers a significant advantage in terms of practicality and interpretability in an educational context, not only compared to traditional statistical methods but also compared to the most recent machine learning approaches.
However, the applicability of the ANFIS model across various educational contexts depends on several factors, including data diversity and technological infrastructure. In countries with high digital literacy and early ICT integration in curricula—such as Finland and Singapore—model accuracy may be enhanced72. In contrast, limited access to data in developing countries may constrain generalizability73. Effective adaptation of the ANFIS model requires calibration with local datasets and the inclusion of additional variables, such as frequency of technology use. Cultural and pedagogical differences also affect the rule sets and model performance. For instance, although Turkey's 2018 curriculum emphasizes ICT skills, regional disparities hinder consistent implementation. In Asia, an emphasis on algorithmic thinking supports structured data integration74, while European systems promote interdisciplinary approaches that diversify input variables72.
To implement the ANFIS model for real-time student performance monitoring, several mechanisms must be established, including technological infrastructure, efficient data collection systems, intuitive user interfaces, and pedagogical integration. The model could be embedded in a web-based platform, allowing educators to input student data—such as grade level and academic achievement—and instantly receive predicted CT scores9. A cloud-based system using MATLAB’s Fuzzy Logic Toolbox could be employed, or open-source tools like scikit-fuzzy could be utilized to reduce implementation costs75. Data collection may be automated through integration with school management systems such as Turkey’s e-Okul or through digital tasks like Scratch projects76. This framework would enable scalable and adaptive use of the ANFIS model within diverse educational environments.
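A lightweight deployment need not carry the full training machinery: once the rule parameters are learned, predicting a CT score is a single forward pass of a first-order Takagi–Sugeno system, the structure ANFIS trains. The sketch below uses hand-set, purely illustrative parameters (a real model would learn them from data, e.g. via MATLAB's Fuzzy Logic Toolbox or scikit-fuzzy):

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership degree of x in a fuzzy set centered at c with width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def sugeno_forward(grade, achievement, rules):
    """Forward pass of a first-order Takagi-Sugeno fuzzy system:
    rule firing strengths (product of input memberships) weight
    linear consequents p*grade + q*achievement + r."""
    w = np.array([gauss(grade, cg, sg) * gauss(achievement, ca, sa)
                  for (cg, sg), (ca, sa), _ in rules])
    f = np.array([p * grade + q * achievement + r for _, _, (p, q, r) in rules])
    return float(np.sum(w * f) / np.sum(w))

# Hand-set rule parameters purely for illustration; a trained ANFIS model
# learns these membership and consequent parameters from the data.
rules = [
    ((5.0, 1.5), (40.0, 20.0), (0.0, 0.15, 62.0)),  # lower achievement -> lower CT
    ((7.0, 1.5), (80.0, 20.0), (0.0, 0.10, 66.0)),  # higher achievement -> higher CT
]
score = sugeno_forward(6, 75, rules)  # grade 6, achievement 75 -> CT score in the 70s
print(score)
```

Embedded behind a simple web form, such a function would let an educator enter grade level and academic achievement and receive a predicted CT score instantly.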
Limitations and recommendations
This study has several limitations that should be acknowledged. The sample consists solely of secondary school students in Turkey, which limits the generalizability of the findings. Educational systems, pedagogical practices, and technological infrastructures vary significantly across countries; therefore, the model’s predictive accuracy in other contexts may depend on the structural characteristics of those educational environments. It is recommended that the model be tested in diverse settings to assess its robustness. More accurate predictions are likely to be achieved in systems with higher levels of digital literacy. Additionally, as the study employed statistical analysis and the ANFIS model—a data-driven, non-causal method—causal inferences could not be drawn. Future research may consider longitudinal designs to better understand developmental changes in CT skills over time. Data collection in this study relied on a single quantitative tool, the Computational Thinking Scale. Future studies may enrich the data and broaden the scope by incorporating qualitative methods such as interviews, open-ended questions, observations, or student self-assessment tools.
In the model of the study, only grade level and academic achievement were used as input variables. Although these variables were chosen due to their strong association in the literature and the structural compatibility of the dataset, other potential variables (e.g. digital literacy, duration of technology use, socio-economic status) were excluded from the model. Data on duration of technology use were not included as they are based on students' self-reports and lack a standardized measurement format. In future studies, the inclusion of contextual factors such as digital literacy, socio-emotional learning indicators and socio-economic status may increase the predictive power and reliability of the model. Variables such as gender are also difficult to define as fuzzy membership functions because they are categorical and cannot be graded along a continuum. Moreover, since the grade level variable lacks the fuzziness offered by academic achievement, future research with larger and more diverse samples may yield different or more nuanced results. Given the limited number of studies using fuzzy logic or AI-based methods in education, more research is needed to explore the potential of these approaches.
Although the flexibility of the ANFIS methodology and its ability to model non-linear relationships were effectively utilized to predict students’ CT skills, the technique also has inherent limitations. First, the performance of ANFIS is highly dependent on high-quality, well-structured data; noisy or incomplete datasets can degrade the model’s accuracy9. While the dataset in this study was carefully curated, broader applications may face challenges with heterogeneity or data sparsity. Second, the rule-based structure of ANFIS can be computationally demanding, which may hinder real-time use, particularly with large-scale datasets. Third, the selection and optimization of membership functions—crucial for model accuracy—often rely on subjective decisions or assumptions based on the specific dataset77. Finally, limiting the model to only two input variables, grade level and academic achievement, may omit other critical factors such as socio-economic status or digital access that could further refine predictions. These limitations may be addressed in future studies through the use of more diverse and representative datasets, the inclusion of additional relevant variables, and comparative evaluation with alternative AI methodologies.
Enhancing the interpretability of ANFIS predictions for non-technical stakeholders can significantly increase its practical value in educational settings. For example, if a student’s CT skill is predicted at an “intermediate” level, this may indicate that while the student possesses basic problem-solving abilities, they may require additional support in areas such as abstraction6. Such insights can help teachers tailor instructional strategies and enable policymakers to allocate resources or redesign curricula to better support student needs72. Inspired by Huang et al.78, efforts can be made to present ANFIS rule sets in simplified visual formats or natural language summaries. For instance, a rule such as “If the grade level is low and academic achievement is average, then computational thinking skills are moderate” can serve as actionable guidance for educators. However, due to the inherent technical complexity of ANFIS, it is recommended that models be adapted to local educational contexts and supported with brief, accessible training resources for practitioners to effectively interpret and use the outputs.
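Rendering a learned rule as a natural-language summary, as suggested above, can be as simple as templating its linguistic labels. The helper below is a hypothetical sketch, not part of the study's implementation:

```python
# Hypothetical sketch: rendering a learned fuzzy rule as a natural-language
# summary for teachers, following the example rule quoted above.
def rule_to_text(grade_label: str, achievement_label: str, ct_label: str) -> str:
    """Turn the linguistic labels of one fuzzy rule into a readable sentence."""
    return (f"If the grade level is {grade_label} and academic achievement is "
            f"{achievement_label}, then computational thinking skills are {ct_label}.")

print(rule_to_text("low", "average", "moderate"))
```

A dashboard could apply this to every rule in the trained rule base, giving non-technical stakeholders a plain-language view of the model's reasoning.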
Author contributions
The entire article was written by a single author.
Data availability
The datasets generated during the current study are available from the corresponding author upon reasonable request.
Declarations
Competing interests
The authors declare no competing interests.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
1. Modeste, S., Broley, L., Buteau, C., Rafalsaka, M. & Stephens, M. Computational thinking and mathematics. In Handbook of Digital (Curriculum) Resources in Mathematics Education (eds Pepin, B., Gueudet, G. & Choppin, J.) (Springer, in press).
2. Wing, JM. Computational thinking. Commun. ACM; 2006; 49,
3. Dagienė, V., Jevsikova, T. & Stupurienė, G. Introducing informatics in primary education: Curriculum and teachers’ perspectives. In S. Pozdnyakov & V. Dagiene (Eds.), ISSEP 2019: Informatics in Schools. New Ideas in School Informatics. Lecture Notes in Computer Science(Vol. 11913, pp. 83–94). Springer. (2019). https://doi.org/10.1007/978-3-030-33759-9_7
4. OECD. PISA 2021 mathematics framework (draft). (2018). https://www.oecd.org/pisa/sitedocument/PISA-2021-mathematics-framework.pdf
5. United Nations Educational, Scientific and Cultural Organisation. Education for sustainable development for 2030 Toolbox. UNESCO. (2021). Retrieved from https://en.unesco.org/themes/education-sustainable-development/toolbox
6. Grover, S; Pea, R. Computational thinking in K–12: A review of the state of the field. Educational Researcher; 2013; 42,
7. Labusch, A., Eickelmann, B. & Vennemann, M. Computational thinking processes and their congruence with problem-solving and information processing. Comput. Think. Educ. 65–78 (2019).
8. Suparta, W. & Alhasa, K. M. Modeling of Tropospheric Delays Using ANFIS (Springer Briefs in Meteorology, Springer International Publishing AG Switzerland, 2016).
9. Jang, JS. ANFIS: adaptive-network-based fuzzy inference system. IEEE Trans. Syst. Man. Cybernetics; 1993; 23,
10. Korkmaz, Ö; Çakır, R; Özden, MY; Oluk, A; Sarıoğlu, S. Bireylerin Bilgisayarca Düşünme Becerilerinin Farklı değişkenler Açısından incelenmesi. Ondokuz Mayıs Üniversitesi Eğitim Fakültesi Dergisi; 2015; 34,
11. Yıldız Durak, H; Sarıtepeci, M. Analysis of the relation between computational thinking skills and various variables with the structural equation model. Comput. Educ.; 2018; 116, pp. 191-202. [DOI: https://dx.doi.org/10.1016/j.compedu.2017.09.004]
12. Taylan, O; Karagözoğlu, B. An adaptive neuro-fuzzy model for prediction of student’s academic performance. Comput. Ind. Eng.; 2009; 57, pp. 732-741. [DOI: https://dx.doi.org/10.1016/j.cie.2009.01.019]
13. Roman-Gonzalez, M; Perez-Gonzalez, JC; Moreno-León, J; Robles, G. Extending the Nomological network of computational thinking with non-cognitive factors. Comput. Hum. Behav.; 2018; 80, 441459. [DOI: https://dx.doi.org/10.1016/j.chb.2017.09.030]
14. Oluk, A. Öğrencilerin Bilgisayarca Düşünme Becerilerinin Mantıksal Matematiksel Zekâ Ve Matematik Akademik Başarıları Açısından Incelenmesi [Yüksek Lisans Tezi] (Amasya Üniversitesi, 2017).
15. Atmatzidou, S; Demetriadis, S. Advancing students’ computational thinking skills through educational robotics: A study on age and gender relevant differences. Robot. Auton. Syst.; 2016; 75, pp. 661-670. [DOI: https://dx.doi.org/10.1016/j.robot.2015.10.008]
16. Liu, J. & Wang, L. Computational thinking in discrete mathematics. 2010 Second International Workshop on Education Technology and Computer Science 413–416 (2010).
17. Weintrop, D. et al. Defining computational thinking for mathematics and science classrooms. J. Sci. Educ. Technol. 127–147. https://doi.org/10.1007/s10956-015-9581-5 (2016).
18. ISTE. CT leadership toolkit. (2015). https://www.iste.org/docs/ct-documents/ctleadershipt-toolkit.pdf?sfvrsn=4.
19. Yadav, A; Hong, H; Stephenson, C. Computational thinking for all: pedagogical approaches to embedding 21st century problem solving in K-12 classrooms. TechTrends; 2016; 60,
20. Futschek, G. Algorithmic thinking: the key for Understanding computer science. Lecture Notes Comput. Sci. (including Subser. Lecture Notes Artif. Intell. Lecture Notes Bioinformatics); 2006; 4226 LNCS, pp. 159-168. [DOI: https://dx.doi.org/10.1007/11915355_15]
21. Ziatdinov, R; Musa, S. Rapid mental computation system as a tool for algorithmic thinking of elementary school students development. Eur. Researcher; 2013; 25,
22. Sternberg, RJ; Lubart, TI. The concept of creativity: prospects and paradigms. Handb. Creativity; 1999; 1, pp. 3-15.
23. Grover, S. & Pea, R. Computational Thinking: A competency whose time has come. Computer science education: Perspectives on teaching and learning in school 19 (2018).
24. Bensley, DA. Horvath, CP; Forte, JM. Rules for reasoning revisited: toward a scientific conception of critical thinking. Critical Thinking: Education in a Competitive and Globalizing World; 2011; New York, NY, Nova Science: pp. 1-45.
25. Kules, B. Computational thinking is critical thinking: connecting to university discourse, goals, and learning outcomes. Proc. Association Inform. Sci. Technol.; 2016; 53,
26. Mayer, R. E. Thinking, Problem Solving (WH Freeman/Times Books/Henry Holt & Co, 1992).
27. Wing, JM. Computational thinking and thinking about computing. Philosophical transactions of the Royal society A: mathematical. Phys. Eng. Sci.; 2008; 366,
28. Çimentepe, E. STEM Etkinliklerinin Akademik Başarı, Bilimsel Süreç Becerileri Ve Bilgisayarca Düşünme Becerilerine Etkisi [Yüksek Lisans Tezi] (Niğde Ömer Halisdemir Üniversitesi, 2019).
29. Slavin, R. E. Research on Cooperative Learning and Achievement: What We Know, What We Need To Know, Contemporary Educational Psychology 2143–69 (Allyn & Bacon, 1995).
30. Trilling, B. & Fadel, C. 21st Century Skills: Learning for Life in our Times (Wiley, 2009).
31. Czerkawski, B; Lyman, E. Exploring issues about computational thinking in higher education. Tech. Trends; 2015; 59,
32. Levi Weese, J. The impact of STEM experiences on student Self-Efficacy in computational thinking. Am. Soc. Eng. Educ. 26–35 (2016).
33. Swaid, SI. Bringing computational thinking to STEM education. Procedia Manuf.; 2015; 3, pp. 3657-3662. [DOI: https://dx.doi.org/10.1016/j.promfg.2015.07.761]
34. Sarıtepeci, M. Ortaöğretim düzeyinde bilgi-işlemsel düşünme becerisinin çeşitli değişkenler açısından incelenmesi. 5. Uluslararası Öğretim Teknolojileri ve Öğretmen Eğitimi Sempozyumu Bildiri Kitabı (s. 218–226). Dokuz Eylül Üniversitesi, İzmir (2017).
35. Kalelioglu, F; Gülbahar, Y. The effects of teaching programming via scratch on problem solving skills: A discussion from learners’ perspective. Inf. Educ.; 2014; 13,
36. Bers, MU; Flannery, L; Kazakoff, ER; Sullivan, A. Computational thinking and tinkering: exploration of an early childhood robotics curriculum. Comput. Educ.; 2014; 72, pp. 145-157. [DOI: https://dx.doi.org/10.1016/j.compedu.2013.10.020]
37. Tran, Y. Computational thinking equity in elementary classrooms: what Third-Grade students know and can do. J. Educational Comput. Res.; 2018; 57,
38. Angeli, C; Giannakos, M. Computational thinking education: issues and challenges. Comput. Hum. Behav.; 2020; 105, 106185. [DOI: https://dx.doi.org/10.1016/j.chb.2019.106185]
39. Fidelis Costa, E. J., Sampaio Campos, L. M. R. & Serey Guerrero, D. D. Computational thinking in mathematics education: A joint approach to encourage problem-solving ability. In Proceedings of 2017 IEEE Frontiers in Education Conference (FIE) (IEEE, 2017).
40. Bih, J. S., Weintrop, D., Walton, M., Elby, A. & Walkoe, J. Mutually supportive mathematics and computational thinking in a fourth-grade classroom. In The Interdisciplinarity of the Learning Sciences, 14th International Conference of the Learning Sciences (ICLS) (eds Gresalfi, M. & Horn, I. S.) Vol. 3, 1389–1396 (International Society of the Learning Sciences, 2020).
41. Göktepe Körpeoğlu, S; Göktepe Yıldız, S. Using artificial intelligence to predict students’ STEM attitudes: an adaptive neural-network-based fuzzy logic model. Int. J. Sci. Educ.; 2023; 46,
42. Jyh-Shing, RJ. ANFIS: Adaptive-Network-Based fuzzy inference system. IEEE Trans. Syst. Man Cybernetics; 1993; 23,
43. Al-Hmouz, Shen, H; Member, S. Modeling and simulation of an adaptive Neuro-Fuzzy inference system (ANFIS) for mobile learning. IEEE Trans. Learn. Technol.; 2012; 5,
44. Morova, N., Terzi, S. & Saltan, M. Adaptif Sinirsel Bulanık Tahmin Yöntemi Ile Esnek Üstyapı Performans Tahmin Modeli Geliştirilmesi (Yenilikler, 2014). & Sempozyumu, U. Bildiriler Kitabı.
45. Zhang, R; Lu, X. What can multi-factors contribute to Chinese EFL learners’ implicit L2 knowledge?. Int. Rev. Appl. Linguist. Lang. Teach.; 2023; 61,
46. Stojanovic, J., Petkovic, D., Alarifi, I. M. & Milickovic, M. Application of distance learning in mathematics through adaptive neuro-fuzzy learning method. Computers Electr. Eng.93 (2021).
47. Daneshvar, A; Homayounfar, M; Eshkiki, MF; Doshman-Ziari, E. Developing a model for performance evaluation of teachers in electronic education system using adaptive neuro fuzzy inference system (ANFIS). J. New. Approaches Educational Administration; 2021; 12,
48. Mehdi, R; Nachouki, M. A neuro-fuzzy model for predicting and analyzing student graduation performance in computing programs. Educ. Inform. Technol.; 2022; 28, pp. 2455-2484. [DOI: https://dx.doi.org/10.1007/s10639-022-11205-2]
49. Büyüköztürk, Ş. Sosyal Bilimler Için Veri Analizi El Kitabı (Pegem Akademi, 2012).
50. Cochran, W. G. Sampling Techniques 3rd edn (Wiley, 1977).
51. Korkmaz, Ö; Çakır, R; Özden, MY. Bilgisayarca Düşünme beceri düzeyleri Ölçeğinin (BDBD) Ortaokul Düzeyine Uyarlanması. Gazi Eğitim Bilimleri Dergisi; 2015; 1,
52. Kalaycı, D. PISA’da Başarılı Ülkelerin Ve Türkiye’nin Ana Dili Öğretim Programlarının Incelenmesi Ve Programların Anfıs Ile Analizi [Doktora Tezi] (Gazi Üniversitesi, Ankara, 2022).
53. Hossain, I; Choudhury, IA; Mamat, AB; Hossain, A. Predicting the colour properties of viscose knitted fabrics using soft computing approaches. J. Text. Inst.; 2017; 108,
54. Göktepe Körpeoğlu, S; Göktepe Yıldız, S. Prediction of metacognition awareness of middle school students: comparison of ANN, ANFIS and statistical techniques. Avrupa Bilim Ve Teknoloji Dergisi; 2022; 38, pp. 450-446.
55. Hastie, T., Tibshirani, R. & Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction 2nd edn (Springer, 2009).
56. Goodfellow, I., Bengio, Y. & Courville, A. Deep Learning (MIT Press, 2016).
57. Çoban, E. Bilgisayarca Düşünme Becerilerinin Ölçülmesinde Alternatif Bir Yaklaşım: Performans Tabanlı Ölçüm [Yüksek Lisans Tezi] (Amasya Üniversitesi, 2021).
58. Gonzalez, M., Gonzalez, J. & Fernandez, C. Which cognitive abilities underlie computational thinking? Criterion validity of the computational thinking test. Comput. Hum. Behav. 1–14 (2017).
59. Bilge Kunduz. Bilge Kunduz 2015 Raporları. (2015). http://www.bilgekunduz.org/wp-content/uploads/2016/01/bilgekunduz-rapor-2015.pdf
60. Barcelos, T. & Silveira, I. Teaching computational thinking in ınitial series an analysis of the confluence among mathematics and computer sciences in elementary education and its implications for higher education. 2012 XXXVIII Conferencia Latinoamericana En Infermatica (CLEI) içinde (s. 1–8). Medellin, Colombia. (2012). https://doi.org/10.1109/CLEI.2012.6427135
61. Çakır, E. Ters Yüz Sınıf Uygulamalarının Fen Bilimleri 7. Sınıf Öğrencilerinin Akademik Başarı, Zihinsel Risk Alma Ve Bilgisayarca Düşünme Becerileri Üzerine Etkisi [Yüksek Lisans Tezi] (Ondokuz Mayıs Üniversitesi, 2017).
62. Dede, C. Comparing frameworks for 21st century skills. 21st Century Skills: Rethinking How Students Learn.; 2010; 20, pp. 51-76.
63. Günüç, S; Odabaşı, HF; Kuzu, A. Yüzyıl öğrenci Özelliklerinin öğretmen Adayları Tarafından tanımlanması: Bir Twitter Uygulaması. Eğitimde Kuram Ve Uygulama; 2013; 9,
64. Robinson, R; Palczewska, A; Palczewski, J; Kidley, N. Comparison of the predictive performance and interpretability of random forest and linear models on benchmark data sets. J. Chem. Inf. Model.; 2017; 57, pp. 1773-1792.1:CAS:528:DC%2BC2sXhtFOkt7bE [DOI: https://dx.doi.org/10.1021/acs.jcim.6b00753]
65. Zhao, X; Wu, Y; Cui, W. iForest: interpreting random forests via visual analytics. IEEE Transactions Visualization Comput. Graphics; 2019; 25, pp. 407-416. [DOI: https://dx.doi.org/10.1109/TVCG.2018.2864475]
66. Aria, M; Cuccurullo, C; Gnasso, A. A comparison among interpretative proposals for random forests. Mach. Learn. Appl.; 2021; 6, 100094. [DOI: https://dx.doi.org/10.1016/J.MLWA.2021.100094]
67. Haddouchi, M; Berrado, A. Forest-ORE: mining an optimal rule ensemble to interpret random forest models. Eng. Appl. Artif. Intell.; 2025; 143, 109997. [DOI: https://dx.doi.org/10.1016/j.engappai.2024.109997]
68. Fernández-Delgado, M; Cernadas, E; Barro, S. Do we need hundreds of classifiers to solve real world classification problems? J. Mach. Learn. Res.; 2014; 15, pp. 3133-3181.
69. Wei, L. A hybrid ANFIS model based on empirical mode decomposition for stock time series forecasting. Appl. Soft Comput.; 2016; 42, pp. 368-376. [DOI: https://dx.doi.org/10.1016/j.asoc.2016.01.027]
70. Bui, D et al. New hybrids of ANFIS with several optimization algorithms for flood susceptibility modeling. Water; 2018; [DOI: https://dx.doi.org/10.3390/W10091210]
71. Kanwal, S; Jiriwibhakorn, S. Advanced fault detection, classification, and localization in transmission lines: A comparative study of ANFIS, neural networks, and hybrid methods. IEEE Access.; 2024; 12, pp. 49017-49033. [DOI: https://dx.doi.org/10.1109/ACCESS.2024.3384761]
72. Bocconi, S et al. Reviewing computational thinking in compulsory education: state of play and practices from computing education. EU; 2022; [DOI: https://dx.doi.org/10.2760/126955]
73. Heintz, F., Mannila, L. & Farnqvist, T. A review of models for introducing computational thinking, computer science and computing in K-12 Education. Proceedings of the 2016 IEEE Frontiers in Education Conference 1–9 (2016).
74. Tang, X; Yin, Y; Lin, Q; Hadad, R; Zhai, X. Assessing computational thinking: A systematic review of empirical studies. Comput. Educ.; 2020; 148, 103798. [DOI: https://dx.doi.org/10.1016/j.compedu.2019.103798]
75. Pedrycz, W. & Gomide, F. Fuzzy Systems Engineering: Toward human-centric Computing (Wiley, 2007).
76. Román-González, M; Pérez-González, JC; Jiménez-Fernández, C. Which cognitive abilities underlie computational thinking? Criterion validity of the computational thinking test. Comput. Hum. Behav.; 2018; 72, pp. 678-691. [DOI: https://dx.doi.org/10.1016/j.chb.2016.08.047]
77. Zadeh, LA. Fuzzy sets. Inf. Control; 1965; 8,
78. Huang, CQ et al. XKT: toward explainable knowledge tracing model with cognitive learning theories for questions of multiple knowledge concepts. IEEE Trans. Knowl. Data Eng.; 2024; 36,
© The Author(s) 2025. This work is published under the Creative Commons BY-NC-ND 4.0 License (http://creativecommons.org/licenses/by-nc-nd/4.0/).