Artificial Intelligence (AI) is increasingly permeating education, enhancing teaching efficiency and enriching personalized learning experiences. However, integrating AI without compromising students' independent thinking remains a challenge. Using ChatGPT-4o as an example, this study employed a within-subject experimental design to examine AI's impact on university students' higher-order thinking skills (HOTS) and task outcomes in complex problem-solving. Based on a design problem context, the research constructed diverse task modules encompassing complex problem-solving processes, collected multidimensional data from 40 students, and analyzed the data using epistemic network analysis and the consensus assessment technique. The results indicate that AI slightly enhances the originality and usefulness of design sketches and significantly increases both the number of HOTS used and the diversity of their interconnections. The study further categorizes students by their degree of AI influence, revealing that those more influenced by AI incorporate more personal thinking, use more HOTS, and exhibit richer interconnections among these skills. This research provides empirical evidence on AI's role in fostering HOTS and offers guidance on integrating AI to strengthen students' independent thinking in education.
Introduction
With the rapid rise of Artificial Intelligence (AI), education is experiencing unprecedented transformations. AI redefines traditional educational frameworks by analyzing vast data, including student behavior and learning styles, to provide personalized recommendations and autonomously adjust teaching modules, fostering an adaptive learning environment (Pratama et al., 2023). AI stimulates student self-reflection through real-time feedback mechanisms and enhances learning outcomes (Robert et al., 2024). This increases student engagement and enriches the educational experience. By identifying and bridging knowledge across multiple domains, AI synthesizes and delivers comprehensive insights that transcend the limitations of single-domain expertise, thereby enriching knowledge discovery and decision-making processes (Aryal et al., 2024). AI's limitless potential in education makes its practical application a vital issue in current educational research.
AI's data processing, pattern recognition, and predictive capabilities create a supportive educational environment, aiding students in tackling complex problems. Addressing complex problems has long been a crucial goal in modern education (Greiff & Fischer, 2013). In higher education, real-world learning environments often involve complex, ill-defined problems to develop students' professional practice capabilities, particularly higher-order thinking skills (HOTS) (Herrington, 2005; Raiyn & Tilchin, 2016). Complex problems involve uncertainty, multiple variables, and diverse solution paths, requiring critical, creative, and problem-solving thinking to integrate information and reason innovatively (McCormick et al., 2015). AI introduces new methods for exploring complex problems by swiftly retrieving and extracting information, enhancing students’ understanding, and offering diverse analytical perspectives (Kshirsagar et al., 2022). This accelerates information processing and encourages students to undertake a more comprehensive exploration. Additionally, AI can generate new ideas or predict potential solutions, providing references for students' problem-solving processes and further promoting multidimensional thinking.
Although AI is widely adopted, its dynamics and role in higher education are still poorly understood, with limited empirical studies on the factors driving its use among university students (Strzelecki, 2023). Particularly in the cognitive dimension, there is limited knowledge about the key drivers behind students' AI usage and its impact on their thinking, knowledge, and outcomes. Despite AI’s potential benefits, it also presents significant challenges, including privacy and security concerns, plagiarism, dissemination of inaccurate information, and over-reliance on technology. AI can diminish creativity and raise moral issues in higher education (Liang, 2023), potentially leading to personal disconnection and lack of customization in learning experiences. Therefore, understanding the motivations behind AI usage and its impact is crucial for developing strategies and regulations that maximize its advantages while mitigating misuse in higher education.
This study uses ChatGPT-4o to explore how students use AI to solve complex problems, focusing on AI's impact on HOTS and outcomes. The study highlights differences in students' core HOTS before and after AI involvement within an epistemic network framework. In a within-subject comparison experiment, students completed multiple design-based tasks. The design discipline encompasses knowledge and skills from multiple disciplines (Åman et al., 2017). Design problems are inherently complex (Maxwell et al., 2002), characterized by ambiguity, ill-defined boundaries, and nonlinear relationships (Baty, 2010). Design education is not just about creating designs but rather about teaching students how to think (Cross, 2023). While AI facilitates an efficient design process, it also poses challenges to students' independent thinking and decision-making. Using design problems as a complex context, the study observes students' application of HOTS and their design sketches, exploring the balance between AI involvement and HOTS development.
Literature review
Application of AI in education
The emergence of AI has transformed fundamental paradigms across various fields. It enables artificial agents to perform cognitive functions like decision-making and problem-solving, tasks once exclusive to humans (Krakowski et al., 2023). Free from human cognitive limitations, these agents often outperform humans in information retrieval, decision-making, and prediction (Murray et al., 2021).
The widespread use of AI holds great potential for educational technology, transforming traditional teaching methods and learning experiences (Grace et al., 2023). Ahmad et al. (2020) identified nine critical areas of AI application in education, including students' grading and evaluation, students' retention and dropout, personalized learning, students' performance prediction, sentiment analysis in education, recommendation systems in education, classroom monitoring and visual analysis, and intelligent tutoring systems.
Altaleb et al. (2023) developed an AI-based adaptive learning platform that analyzes students' learning behaviors and historical performance to tailor personalized learning paths and content, enhancing learning efficiency, motivation, and engagement. Yilmaz and Yilmaz (2023) highlighted AI’s support for personalized learning, including explanations, programming examples, query support, and access to advanced resources. Rajkumar and Ramalingam (2015) created the Conversational Intelligent Tutoring System, simulating human teacher behavior to provide personalized guidance and adjust tutoring content based on student progress. Generative AI, such as ChatGPT, can generate problem representations, offer problem-solving information, and create prototype solutions, effectively promoting adaptive learning experiences (Wu et al., 2024).
The implementation of AI in education has grown rapidly in recent years. While AI enhances efficiency, it also presents challenges, including biases, plagiarism, factual inaccuracies, lack of diversity, privacy concerns, and the risk of overreliance (Al-Zahrani, 2024). Existing research mainly focuses on short-term effects, where students acquire knowledge quickly with AI assistance. However, this support often leads to surface-level learning, hindering deeper understanding and internalization. Without proper management, the negative impact of AI could undermine higher education’s mission to create and disseminate knowledge (Ivanov, 2023).
The role of HOTS in education
HOTS refers to the ability to apply knowledge, skills, and values through reasoning and reflection to solve problems, make decisions, innovate, and create (Parimaladevi & Ahmad, 2019). It encompasses cognitive abilities such as analyzing, deconstructing, reasoning, and resolving issues in complex situations. These skills encompass various thinking forms, including critical thinking, reflective thinking, argumentation, problem-solving, creativity, and metacognitive thinking (Miri et al., 2007).
HOTS is essential in higher education, promoting students’ deeper understanding of information and practical cognitive engagement, which enhances knowledge application and comprehension. Developing HOTS prepares students to tackle complex problems in academic and real-world contexts. HOTS has gained significant attention and cultivation across disciplines. In the science discipline, HOTS enhances critical and creative thinking through induction, deduction, and causal reasoning (Widyaningrum & Utaminingsih, 2021). In the mathematics discipline, HOTS is evident in abstract reasoning, problem-solving strategies, and rationality analysis, relying on logical and deductive thinking (Mafada et al., 2020). In the literature discipline, HOTS is shown through deep analysis, critical reading, and text interpretation, focusing on evaluating different perspectives (Aryani & Wahyuni, 2020). As a multidisciplinary field (Åman et al., 2017), design requires students' HOTS to be more comprehensive, as it involves not just creating designs but understanding how designers think and work (Cross, 2023). Design is centered on solving complex problems that involve multiple variables and uncertainties, with no clear solution paths (Chandrasekaran, 1990). This requires students to process extensive knowledge, create meaning, and transfer it, thereby demanding higher cognitive abilities. The dynamic process of knowledge construction in design education, which includes integrating multidisciplinary knowledge, iterative prototyping, and decision-making in ambiguous contexts (Leifer & Steinert, 2011), offers a transferable framework for other fields dealing with complex issues, such as problem-driven teaching in STEM (Morrison et al., 2015) and cognitive development in the humanities (Darvin, 2006). These methods foster essential HOTS by engaging students in problem analysis and resolution.
Educators should analyze students' performance on specific tasks to cultivate HOTS effectively and integrate these insights into teaching strategies. Current research often independently analyzes various thinking skills within HOTS (Li et al., 2023) or treats HOTS as a unified whole (Sun, 2022). Few studies explore interactions between different HOTS components and their impact on cognitive outcomes. In practice, the components of thinking interact, significantly affecting learning activities. Focusing solely on individual thinking skills without considering the connections between multidimensional thinking limits understanding of HOTS's full potential in complex problem-solving.
The impact of AI on students' HOTS
The deepening application of AI in education has significantly altered students' thinking patterns and cognitive processes (Kamalov et al., 2023). In complex problem-solving, AI provides multi-perspective information and real-time feedback, facilitating comprehensive evaluation and judgment, thereby expanding conceptual space and promoting divergent thinking (Di Ieva, 2019). AI also enhances students' adaptability, fosters reflection, and inspires more effective actions (Song et al., 2022). Therefore, AI serves as a catalyst for stimulating students' HOTS, encouraging deeper, more flexible thinking and innovation in tackling complex problems. However, AI can also hinder students’ thinking patterns. If the balance between AI's efficiency and students' independent thinking is disrupted, it can adversely affect their development. A significant risk is overreliance on AI, which may replace cognitive functions and stifle logical thinking, critical thinking, creativity, and independent problem-solving (Yulianti et al., 2024). This dependence can diminish the depth and richness of educational experiences, undermining mastery of essential problem-solving skills (Montenegro-Rueda et al., 2023). Therefore, detrimental AI interventions could lead to an alienated state of "high efficiency, low autonomy" in students' HOTS development.
In design education, the structural contradiction between AI's efficiency and HOTS development is equally evident. Design emphasizes critical and creative thinking to find solutions to design problems (Allen, 2019), and this thinking mechanism can mutually reinforce the cultivation of HOTS. However, the immediacy of AI-generated concepts may compress the design concept generation phase, which is crucial for breakthrough innovation in design processes (Wadinambiarachchi et al., 2024). Moreover, because AI relies on big data and algorithmic models, its output may exhibit patterned characteristics, potentially leading students into the symbolic collage trap (Zhou & Lee, 2023). Additionally, bias in the AI's training dataset and the need to balance predictability and randomness in its interpretations could result in erroneous AI outputs (Van der Burg et al., 2022). If students accept these outputs uncritically during their learning process, they are at risk of negative influences.
Given AI's profound impact on students' thinking patterns, comprehensive, multi-layered research on AI's role in education is urgently needed to clarify its effectiveness in promoting HOTS. Such research would offer valuable insights and recommendations for educational practice. However, existing studies have not sufficiently explored students' active engagement and reflection on their thinking processes when using AI. Specifically, there is a lack of systematic empirical research on cultivating deep thinking and metacognition through AI. A significant gap remains between theoretical research and practical application in this field.
Current study
This study aims to explore how students use AI to solve complex problems, with a focus on the characteristics and differences in students’ core HOTS and task outcomes before and after AI intervention. The research seeks to provide feasible pathways for educational practice, exploring how AI technology can effectively promote the development and enhancement of students’ cognitive abilities. This study addressed four research questions:
RQ1: What differences exist between the sketches before and after using AI?
RQ2: How do students respond to and process AI-generated ideas?
RQ3: What differences in HOTS occur when solving complex problems before and after using AI?
RQ4: What differences exist in sketches and HOTS between students highly influenced by AI and those with low influence?
Methodology
Participants
For the number of participants, an a priori sample size calculation was conducted in G*Power with α = 0.05, power (1 − β) = 0.80, and an expected effect size of Cohen's d = 0.49, yielding a minimum required sample size of 35 participants.
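As a point of reference, the sketch below reproduces this a priori calculation with the statsmodels library instead of G*Power. The effect size (d = 0.49), alpha (0.05), and power (0.80) come from the text; modelling the design as a paired (one-sample) t-test is an assumption.

```python
# Minimal sketch of the a priori sample size calculation (assumed paired t-test design).
from statsmodels.stats.power import TTestPower

n_required = TTestPower().solve_power(
    effect_size=0.49,        # expected Cohen's d
    alpha=0.05,              # Type I error rate
    power=0.80,              # 1 - beta
    alternative="two-sided",
)
print(f"Minimum required sample size: {n_required:.1f}")  # roughly 35 participants
```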
A total of 40 participants (28 females, 12 males; mean age = 19.28 years, SD = 0.67) were recruited for this study. All were freshmen and sophomores majoring in design fields: visual communication design (17), digital media art design (14), and architectural design (9). As early-stage university students, they were particularly impressionable and receptive to teaching interventions, which significantly influence their thinking patterns and HOTS development (Al-Othman, 2014). At this stage, they were just beginning to engage with AI tools, without fixed dependencies or established workflows, making their thinking characteristics more genuine and natural. Visual communication design students had studied courses such as design composition, design psychology, and graphic modeling, developing basic skills in graphic design, color coordination, and user psychology. Digital media design students studied courses like design aesthetics, art design, and computer-aided design (e.g., Adobe Photoshop), gaining knowledge in digital design tools and aesthetics. Architectural design students focused on spatial layout, architectural styles, and structural design through courses such as design sketching and color, architectural design history, and architectural design principles. These courses provided them with foundational methods and theoretical frameworks for simple design tasks. All participants had prior experience using ChatGPT but no design practice experience, classifying them as novices.
All participants volunteered for the study. Before the experimental tasks, they were informed about the content and process of the study and that data generated from the experimental tasks would be collected and analyzed. Personal information of all participants was kept strictly confidential, and all data were anonymized during collection, ensuring confidentiality and anonymity. All participants were fully informed and gave their consent. After the experiment, each participant received a souvenir.
Experimental task
Based on the design problem context set in this study, the experiment constructed task modules involving multiple design problems. Participants were required to understand, evaluate, and redesign a given design sketch (Pr-sketch). The primary challenge for participants was to comprehend the Pr-sketch, identify existing issues, and devise new solutions, essentially addressing a complex problem-solving task. The hairdryer was chosen as the design object for its accessibility as an everyday item, offering simplicity and creative potential, making it suitable for novices. The Pr-sketch was created by a freshman product design student who was not involved in this study. It was intentionally left at a low level of completion and lacked textual descriptions to give participants ample room for understanding, evaluation, and optimization. The AI used was ChatGPT-4o, developed by OpenAI. It is a language-model-based AI tool capable of providing students with multi-perspective explanations of design problems, heuristic questions, and guidance for critical thinking through natural language interaction, allowing students to demonstrate their HOTS (Lee et al., 2024). It can also understand and generate images, effectively supporting designers with information, inspiration, and creativity. The experimental task consisted of five phases (Fig. 1):
Phase1: Introduce the complex problem context. Provide the Pr-sketch and ask participants to present their comprehension, evaluation, and ideas about the sketch.
Phase2: Instruct participants to create an optimized design sketch (Re-sketch) based on the Pr-sketch, using their understanding, experience, knowledge, and inspiration.
Phase3: Require participants to interact with ChatGPT, discussing their understanding and evaluation of the Pr-sketch and related task knowledge.
Phase4: Repeat phase3, replacing the Pr-sketch with the Re-sketch.
Phase5: Instruct participants to further optimize the Pr-sketch, resulting in a final design sketch (Fi-sketch).
Fig. 1 Flowchart of the experimental task
Phase1 and phase3, as well as phase2 and phase5, served as comparison groups. Phase4 acted as a transitional phase, extending phase3 to capture participants' thinking states when integrating AI, thereby leading into phase5. Participants used the "think-aloud" method throughout all tasks. During phase3 to phase5, they were allowed to use ChatGPT freely, with no restrictions on interactions, content, or format, aiming to create a natural AI usage environment to capture their thinking attributes and AI application patterns. During this process, screen recording was used to document the interaction data between the students and AI. No time limits were imposed, and participants could conclude each phase at their discretion. The average time to complete the task was 33.80 min (SD = 6.05).
Data analysis
Through the above experimental tasks, this study obtained two categories of data to analyze the impact of AI on students' HOTS: think-aloud data and sketch data (Fig. 2).
Fig. 2 Research questions and data analysis
Sketch data analysis
Three independent teachers, blind to the study's objectives and conditions, used the Consensus Assessment Technique (CAT) (Bergstrom & Karahalios, 2007) to rate the Pr-sketch, Re-sketches, and Fi-sketches. All three were experienced design educators with over six years of teaching experience, possessed extensive expertise in both teaching and research, and had evaluated students' design concepts many times. The sketches were evaluated on two dimensions, originality and usefulness, using a 7-point Likert scale. Originality refers to novel ideas deviating from existing knowledge, and usefulness pertains to feasible solutions that generate economic and social returns. These dimensions are commonly used in evaluating design concepts (Ou et al., 2023).
The teachers were unaware of the relationships between the sketches, and the sketches' order was randomized to prevent consecutive appearances of sketches from the same participant, avoiding recognition of similarities. The scoring criteria were explained to the teachers before scoring, and they scored independently. They then participated in multiple rounds of comparison and discussion of the scoring results, with an approximately five-day interval between each round, to reduce memory bias and provide sufficient time for reflection; each round also focused on the differences between the rounds' scoring results. The teachers' consistency and agreement were assessed using the intraclass correlation coefficient (ICC), which indicated high reliability for both originality (0.75) and usefulness (0.76) (Brink & Wood, 1998). This grading method ensured that the final data remained objective and independent (Gero & Neill, 1998).
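For transparency, a small sketch of how such an ICC could be computed is shown below, using the pingouin library; the long-format data layout, column names, and example ratings are illustrative assumptions rather than the study's actual data or ICC model.

```python
# Hedged illustration: intraclass correlation for three raters scoring the same sketches.
import pandas as pd
import pingouin as pg

# Made-up placeholder ratings in long format: one row per (sketch, teacher) pair.
ratings = pd.DataFrame({
    "sketch": ["S1", "S1", "S1", "S2", "S2", "S2", "S3", "S3", "S3"],
    "teacher": ["T1", "T2", "T3"] * 3,
    "originality": [3, 4, 3, 2, 2, 3, 5, 4, 5],
})

icc = pg.intraclass_corr(
    data=ratings, targets="sketch", raters="teacher", ratings="originality"
)
print(icc[["Type", "ICC", "CI95%"]])  # pick the ICC variant that matches the rating design
```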
To analyze changes between the Re-sketch and Fi-sketch, a component list of the hairdryer was established based on the sketches, divided into three components: Airflow Barrel (Ab), Handle (Ha), and Power (Po). Each component was further subdivided for coding (Table 1). For the Re-sketch, coding involved marking the presence or absence of each component. For the Fi-sketch, changes were detailed into component idea relationships, categorized into five types (Table 2). An analysis of selected sketches is shown in Fig. 3.
Table 1. Hairdryer component codes
| Component | Sub-component | Code |
|---|---|---|
| Airflow Barrel | Wind direction/speed adjustment component | Ab1 |
| | Air outlet mesh cover | Ab2 |
| | Telescopic/folding structure | Ab3 |
| | Decorative elements | Ab4 |
| | Detection component | Ab5 |
| | Noise reduction component | Ab6 |
| | Material | Ab7 |
| | Counterweight | Ab8 |
| Handle | Switch | Ha1 |
| | Anti-slip | Ha2 |
| | Display component | Ha3 |
| | Ergonomic component | Ha4 |
| Power | Safety component | Po1 |
| | Power port | Po2 |
Table 2. Relationship of component ideas between Re-sketch and Fi-sketch
| Relationship | Definition |
|---|---|
| Direct Adoption | Directly applying the AI-generated content to the Fi-sketch without modifications |
| Adaptation | Making simple adjustments to the AI-generated content to fit the Fi-sketch |
| Iteration | Transforming the AI-generated content in a complex manner to break through its scope and adjust the Fi-sketch |
| Recreation | Spontaneously creating new ideas unrelated to the AI-generated content |
| Reduction | Removing design points from the Re-sketch (observed only once and not discussed in this study) |
Fig. 3 Components and relationships between Fi-sketch and Re-sketch for selected participants
Coding scheme
Higher-order thinking (HOT) involves complex cognitive processes such as analysis, synthesis, comparison, reasoning, interpretation, evaluation, induction, and deduction (Zohar & Dori, 2003). Brookhart (2010) categorizes HOT into transfer, critical thinking, and problem-solving. Transfer is defined as applying acquired knowledge and skills to new situations. Sun et al. (2022) developed the S-HOT framework, which includes scientific reasoning, critical thinking, creative thinking, science self-efficacy, and metacognition, highlighting correlations between sub-skills. Researchers emphasize the importance of critical and creative thinking within HOT, particularly the central role of critical thinking (Miri et al., 2007). Most scholars identify three core skills within HOTS: critical thinking, creative thinking, and problem-solving thinking (Di et al., 2019; Hwang et al., 2018). Critical thinking involves objectively analyzing information and making sound judgments. Creative thinking entails creating new objects and innovative ideas through elaboration, refinement, and evaluation. Problem-solving thinking involves identifying problems, gathering relevant information, and proposing potential solutions (Hwang & Lai, 2017).
Cultivating these three thinking skills is crucial as they enable students to use complex cognitive abilities for problem-solving, decision-making, prediction, and judgment. Building on Hwang et al.'s (2018) categorization of HOTS and the thinking skills demonstrated by participants in this study, a HOTS coding scheme was constructed (Table 3).
Table 3. HOTS coding scheme
| HOTS | Sub-HOTS | Definition | Code |
|---|---|---|---|
| Critical Thinking | Interpretation | Understanding and expressing various experiences, situations, and data, including accurately categorizing data sources, conveying the meaning of information, and clarifying its implications | A1 |
| | Explanation | Justifying reasoning validity based on the context of evidence, concepts, and results | A2 |
| | Evaluation | Assessing the logical strength of relationships between statements or expressions | A3 |
| | Analysis | Breaking down data into parts, determining relationships between them, and their connection to the overall structure or purpose through differentiation, organization, and attribution | A4 |
| | Inference | Identifying necessary elements for drawing conclusions, forming conjectures and hypotheses, considering relevant information, and deriving outcomes from data, statements, principles, and evidence | A5 |
| | Reflection | Systematically reviewing and evaluating ideas, decisions, processes, and outcomes | A6 |
| Creative Thinking | Analogy | Establishing connections between parallel or similar concepts to help build relationships between ideas | B1 |
| | Association | Identifying and analyzing the interrelationships between different elements and linking them together | B2 |
| | Substitution | Replacing one element with another to improve the concept or adapt to a specific situation | B3 |
| | Synthesis | Combining different elements to create a new concept | B4 |
| | Mutation | Generating a completely new concept that is detached from all reference points | B5 |
| | Opportunity Scanning | Identifying, analyzing, and evaluating potential opportunities for innovation | B6 |
| Problem-solving Thinking | Problem Identification | Discovering, defining, and understanding the problem | C1 |
| | Information Gathering | Systematically acquiring relevant information through various methods and channels | C2 |
| | Information Analysis | Systematically organizing, interpreting, and evaluating the collected information to extract useful insights and conclusions | C3 |
| | Concept Development | Constructing potential concepts based on the identified and analyzed problems | C4 |
| | Concept Selection | Making decisions and selections among multiple concepts | C5 |
A total of 22.53 h of audio were recorded and transcribed in this study. Each participant's think-aloud data were coded using the HOTS coding scheme by two trained, experienced coders. Inter-coder reliability was checked with kappa across multiple rounds of coding, in which the coders discussed differences and reassessed discrepancies, ultimately achieving a substantial level of agreement (kappa = 0.71).
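As an illustration, the agreement check could be reproduced along the following lines; the code labels are hypothetical, and the use of scikit-learn's cohen_kappa_score is an assumption about the specific kappa variant used.

```python
# Illustrative inter-coder agreement check with Cohen's kappa on a short,
# hypothetical sequence of HOTS codes assigned by two coders to the same segments.
from sklearn.metrics import cohen_kappa_score

coder1 = ["A3", "A5", "C2", "B4", "A6", "A3", "C4", "A4"]
coder2 = ["A3", "A5", "C2", "B2", "A6", "A3", "C4", "A2"]

kappa = cohen_kappa_score(coder1, coder2)
print(f"Cohen's kappa = {kappa:.2f}")
```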
To comprehensively measure the connections between students' HOTS, we used Epistemic Network Analysis (ENA). ENA is a computational method comprising codes, units of analysis, and sections (Zhang et al., 2021). It analyzes complex cognitive processes by identifying and quantifying connections within coded data and capturing temporal changes in cognition. Arastoopour et al. (2015) used ENA to analyze the relationships between different elements of students' design thinking as they develop over time. Chang and Kuo (2024) applied ENA to explore the evolution of students' historical cognitive beliefs and their impact on the cultivation of historical HOTS.
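To make the method concrete, the sketch below reduces ENA to its core steps: building code co-occurrence vectors per unit of analysis and projecting them with SVD. Established ENA implementations (e.g., the rENA package or the ENA Web Tool) add moving stanza windows, sphere normalization, and means rotation, so this is only a simplified illustration with hypothetical data, not the analysis pipeline used in this study.

```python
# Simplified ENA-style analysis: binary code co-occurrence per unit, centred,
# then projected onto the first two singular vectors (SVD1, SVD2).
import numpy as np
from itertools import combinations

codes = ["A3", "A5", "A6", "C2"]                      # subset of HOTS codes
pairs = list(combinations(range(len(codes)), 2))      # all code pairs

def cooccurrence_vector(unit_codes):
    """Binary indicator for each code pair co-occurring within one unit."""
    present = [c in unit_codes for c in codes]
    return np.array([float(present[i] and present[j]) for i, j in pairs])

# Hypothetical units of analysis (e.g., one participant-phase each)
units = [{"A3", "A5"}, {"A3", "A6", "C2"}, {"A6", "C2"}, {"A3", "A5", "A6"}]
X = np.vstack([cooccurrence_vector(u) for u in units])

X_centred = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centred, full_matrices=False)
points = X_centred @ Vt[:2].T                         # unit positions in the projected space
centroid = points.mean(axis=0)                        # group centroid
print(points, centroid, sep="\n")
```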
Results
The score differences and component relationships between Re-sketches and Fi-sketches
For RQ1, the study analyzed participants' Re-sketch and Fi-sketch scores (Fig. 4) using a paired t-test to compare the differences. The results showed significant differences in both originality (Cohen's d = 0.793, t = −5.018, p < 0.001) and usefulness (Cohen's d = 1.119, t = −7.079, p < 0.001) between the Re-sketches and Fi-sketches. The Re-sketches (originality: M = 2.80, SD = 0.91; usefulness: M = 3.05, SD = 0.88) scored lower than the Fi-sketches (originality: M = 3.38, SD = 0.75; usefulness: M = 3.78, SD = 0.79).
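A minimal sketch of this paired comparison is shown below using SciPy; the score arrays are hypothetical placeholders, and Cohen's d is computed here from the paired differences, which may differ from the exact formula used in the study.

```python
# Hedged sketch: paired t-test and an effect size for Re- vs. Fi-sketch scores.
import numpy as np
from scipy import stats

re_originality = np.array([2.7, 3.0, 2.3, 3.3, 2.0, 3.7, 1.3, 3.0])  # placeholder scores
fi_originality = np.array([3.3, 3.7, 3.0, 3.7, 2.7, 4.3, 2.3, 4.7])

t_stat, p_value = stats.ttest_rel(re_originality, fi_originality)
diff = fi_originality - re_originality
cohens_d = diff.mean() / diff.std(ddof=1)  # d based on the paired differences
print(f"t = {t_stat:.3f}, p = {p_value:.4f}, d = {cohens_d:.3f}")
```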
Fig. 4 Scores of Re-sketches and Fi-sketches
For RQ2, the study examined the differences between Re-sketches and Fi-sketches by coding and counting their components and component relationships. All participants generated a total of 310 components. The number of components in the Fi-sketches increased by 110.89% compared to the Re-sketches. Among the newly added components in the Fi-sketches, the majority were directly adopted from AI-generated ideas (61.35%), followed by adaptation (22.09%), recreation (9.82%), and iteration (6.75%).
Characteristics and differences in students' HOTS before and after using AI
Participants exhibited 3329 HOTS during the experimental tasks, with the highest frequency in "evaluation" under critical thinking and the lowest in "analogy" under creative thinking. Among the three primary categories of HOTS, critical thinking was used most frequently, while creative thinking was the least utilized (Fig. 5).
Fig. 5 HOTS usage frequency
For RQ3, the study counted the frequency of each type of HOTS in the comparison groups and analyzed them using paired t-tests. Across the three primary HOTS and 17 sub-HOTS, 40 paired comparisons were examined, revealing significant differences (p < 0.05) in 18 of them (Appendix Table A1). For the three primary HOTS, significant differences were observed in five of the six paired comparisons; the only exception was problem-solving thinking between phase2 and phase5. In all instances, the usage frequency of HOTS was higher after using AI (phase3 and phase5) than before (phase1 and phase2).
Based on the HOTS coding results, we generated ENA graphs for each experimental task phase (Fig. 6). The colors red, blue, purple, green, and pink represent the centers of the epistemic networks from phase1 to phase5, with the corresponding squares indicating average centroids. The centroids reflect the average positions of the projection points. Tracing the centroid positions sequentially, every transition between consecutive phases except that from phase3 to phase4 crosses quadrants, indicating substantial shifts in HOTS. ENA accounted for 31.6% of the co-occurrence variance of HOTS on SVD1 (X-axis) and 11.6% on SVD2 (Y-axis). The Mann–Whitney U test was used to assess differences in the ENA space and showed no significant differences in HOTS between phase1 and phase3 (p = 0.09, r = −0.22) or between phase2 and phase5 (p = 0.08, r = 0.23) on the X-axis. However, significant differences were found on the Y-axis between phase1 and phase3 (p < 0.01, r = 1) and between phase2 and phase5 (p < 0.01, r = 0.44).
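The between-phase comparison on a projection axis can be sketched as follows with SciPy; the coordinate arrays are hypothetical, and the rank-biserial formula used for r is an assumption about how the reported effect size was derived.

```python
# Hedged sketch: Mann-Whitney U test on ENA projection coordinates for two phases,
# with a rank-biserial correlation as the effect size.
import numpy as np
from scipy.stats import mannwhitneyu

phase1_x = np.array([-0.12, -0.05, 0.02, -0.20, -0.08, -0.15])  # placeholder X coordinates
phase3_x = np.array([0.04, 0.10, -0.02, 0.15, 0.07, 0.01])

u_stat, p_value = mannwhitneyu(phase1_x, phase3_x, alternative="two-sided")
n1, n2 = len(phase1_x), len(phase3_x)
r = 1 - 2 * u_stat / (n1 * n2)  # rank-biserial correlation
print(f"U = {u_stat}, p = {p_value:.3f}, r = {r:.2f}")
```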
Fig. 6 HOTS paths of each phase
We used ENA to compare differences in HOTS connections between the comparison groups. In Fig. 7, black nodes represent the locations of HOTS codes, with node size indicating HOTS frequency and line thickness reflecting the strength of connections. In phase1, the strongest connections were between A3-A5 (0.18), A3-A4 (0.13), and A3-A2 (0.13). In phase3, strong connections were observed between A3-A6 (0.24), A3-C2 (0.16), and A6-C2 (0.13). The subtraction network revealed that the A3-A5 connection in phase1 was stronger than in phase3, whereas the connections of A3-C2, A3-A6, and A6-C2 were stronger in phase3 than in phase1.
Fig. 7 ENA of phase1 and phase3
Comparing the ENA of phase2 and phase5 (Fig. 8) showed strong connections in phase2 between C1-C4 (0.10) and C2-C4 (0.07). In phase5, strong connections were observed between A6-C4 (0.08), B4-C4 (0.08), and B4-B5 (0.08). Both phases had lower connection coefficients compared to the other phases. The subtraction network indicates that the difference in connection coefficients between these phases is relatively small.
Fig. 8 ENA of phase2 and phase5
Characteristics and differences in component relationships between high-impact and low-impact participants
For RQ4, to better understand the impact of AI on students' HOTS and sketches in extreme cases, the study summed the originality and usefulness scores for each sketch. The change in each participant's Fi-sketch total score relative to their Re-sketch total score was standardized using a z-score. The 10 participants with the highest changes and the 10 with the lowest changes were categorized as high-impact and low-impact participants, respectively (Table 4). This approach targets the most pronounced differences, providing an opportunity to identify potential characteristics that may be associated with the outcomes (Fox et al., 2009).
Table 4. Scores of Re-sketches and Fi-sketches for high-impact and low-impact participants
| Level | Participant | Re-sketch originality | Re-sketch usefulness | Re-sketch total score | Fi-sketch originality | Fi-sketch usefulness | Fi-sketch total score |
|---|---|---|---|---|---|---|---|
| High-impact | P4 | 1.67 | 2.67 | 4.33 | 3.67 | 4.67 | 8.33 |
| | P11 | 3.00 | 3.00 | 6.00 | 4.33 | 4.00 | 8.33 |
| | P16 | 2.67 | 3.33 | 6.00 | 3.67 | 4.33 | 8.00 |
| | P17 | 2.00 | 3.00 | 5.00 | 4.00 | 4.33 | 8.33 |
| | P19 | 2.00 | 1.67 | 3.67 | 3.00 | 3.00 | 6.00 |
| | P25 | 1.33 | 2.00 | 3.33 | 2.33 | 3.33 | 5.67 |
| | P28 | 1.33 | 1.33 | 2.67 | 3.67 | 3.67 | 7.33 |
| | P31 | 3.00 | 3.00 | 6.00 | 4.67 | 4.00 | 8.67 |
| | P33 | 1.00 | 2.67 | 3.67 | 3.00 | 4.00 | 7.00 |
| | P36 | 3.00 | 3.67 | 6.67 | 4.67 | 5.67 | 10.33 |
| Low-impact | P3 | 2.67 | 2.67 | 5.33 | 2.67 | 3.00 | 5.67 |
| | P5 | 3.00 | 3.67 | 6.67 | 3.67 | 3.00 | 6.67 |
| | P7 | 4.33 | 4.33 | 8.67 | 4.00 | 4.00 | 8.00 |
| | P8 | 3.67 | 4.00 | 7.67 | 3.33 | 4.00 | 7.33 |
| | P9 | 3.67 | 4.67 | 8.33 | 4.00 | 4.67 | 8.67 |
| | P13 | 5.00 | 3.67 | 8.67 | 4.67 | 4.00 | 8.67 |
| | P21 | 3.33 | 3.33 | 6.67 | 3.33 | 3.67 | 7.00 |
| | P24 | 3.00 | 3.00 | 6.00 | 2.67 | 3.00 | 5.67 |
| | P27 | 3.33 | 3.67 | 7.00 | 3.00 | 4.00 | 7.00 |
| | P39 | 2.67 | 3.00 | 5.67 | 3.00 | 3.00 | 6.00 |
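The z-score-based grouping described before Table 4 can be sketched as follows; the participant totals here are randomly generated placeholders rather than the study's data.

```python
# Hedged sketch: standardise each participant's total-score change (Fi - Re)
# and take the 10 highest and 10 lowest as high- and low-impact groups.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "participant": [f"P{i}" for i in range(1, 41)],
    "re_total": rng.uniform(2.5, 9.0, 40).round(2),   # hypothetical Re-sketch totals
    "fi_total": rng.uniform(5.0, 10.5, 40).round(2),  # hypothetical Fi-sketch totals
})

change = df["fi_total"] - df["re_total"]
df["z_change"] = (change - change.mean()) / change.std(ddof=1)

high_impact = df.nlargest(10, "z_change")["participant"].tolist()
low_impact = df.nsmallest(10, "z_change")["participant"].tolist()
print("High-impact:", high_impact)
print("Low-impact:", low_impact)
```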
We calculated the number of component relationships between Re-sketches and Fi-sketches for both high-impact and low-impact participants (Fig. 9) and performed a t-test to analyze differences. The results showed a significant difference in the adaptation relationship (Cohen's d = 1.200, t = −2.683, p = 0.020), with high-impact participants (M = 1.80, SD = 1.32) using this relationship more frequently than low-impact participants (M = 0.60, SD = 0.52). No significant differences were found for other relationships.
Fig. 9 Component relationships between Re-sketches and Fi-sketches for high-impact and low-impact participants
HOTS characteristics and differences in AI use among high-impact and low-impact participants
For RQ4, the study examined the characteristics and differences in HOTS between high-impact and low-impact participants after using AI (phase3 to phase5). The t-test results (Appendix Table A2) revealed significant differences in the sub-HOTS: A3 (Cohen’s d = 0.955, t = 2.136, p = 0.047), A6 (Cohen's d = 1.543, t = 3.450, p = 0.003), B2 (Cohen’s d = 1.500, t = 3.354, p = 0.008), B4 (Cohen's d = 1.034, t = 2.312, p = 0.033), and C2 (Cohen’s d = 1.892, t = 4.231, p = 0.001). Significant differences were also found in the primary HOTS categories: critical thinking (Cohen's d = 1.131, t = 2.529, p = 0.021) and creative thinking (Cohen's d = 1.198, t = 2.678, p = 0.015). Except for A2, A5, and C1, high-impact participants exhibited a greater number of HOTS than low-impact participants.
On this basis, we analyzed the ENA characteristics (Fig. 10), with red and blue representing the centers of the epistemic networks for high-impact and low-impact participants, respectively. The corresponding squares indicate the average centroids for each group. ENA accounted for 23.9% of the co-occurrence variance of HOTS on MR1 (X-axis) and 21.1% on SVD2 (Y-axis). The Mann–Whitney U test revealed a significant difference in HOTS between high-impact and low-impact participants on the X-axis (p < 0.01, r = -0.84), but no significant difference on the Y-axis (p = 0.97, r = 0.02).
Fig. 10 ENA of high-impact participants and low-impact participants
High-impact participants showed the strongest connection between A3-A6 (0.25), followed by A3-C2 (0.19) and A6-C2 (0.13). Low-impact participants also exhibited strong connections between A3-A6 (0.17) and A3-C2 (0.13), followed by A2-A6 (0.11). The subtraction network indicated that the A3-A6 and A3-C2 connections were markedly stronger for high-impact participants, suggesting these linking patterns were more frequent among them.
Discussion
Analyzing how students solve complex problems with AI support enhances our understanding of AI's impact on their HOTS and task outcomes, providing a foundation for improving education and maximizing AI’s utility. This study explores the characteristics and differences in students' HOTS performance and sketches before and after using AI. The findings show that, while students’ sketches improved in originality and usefulness after AI use, the extent of improvement was minimal. Additionally, the number of HOTS used increased significantly, stimulating connections between different categories.
The impact of AI on students' sketches
The study found that students' sketches received higher scores after using AI; however, the differences were small, making it difficult to demonstrate that AI provided substantial support at the task outcome level. Analyzing component relationships showed that although the total number of components in the Fi-sketches exceeded those in the Re-sketches, students often directly adopted AI-generated ideas. This suggests a tendency to accept AI-generated content passively without deep thinking or further innovation, which may stem from prioritizing efficiency when faced with complex problems, leading students to readily adopt AI suggestions. Additionally, students who lack confidence or sufficient understanding of the task may be more inclined to view AI outputs as authoritative answers, which reduces their motivation for active thinking and exploration (Zhang et al., 2024). Since AI-generated content is based on existing data, it may limit students' personalized expression and non-traditional thinking, producing sketches that are somewhat more novel and useful but lack a qualitative leap. Furthermore, because AI cannot always ensure accuracy and appropriateness, students may struggle to discern relevant information, particularly in cases of direct adoption, potentially hindering outcome optimization.
Comparing the sketch components and component relationships of high-impact and low-impact participants further clarified this phenomenon. High-impact participants primarily used direct adoption but utilized adaptation relationships more frequently than low-impact participants. By processing the original information and beginning to move beyond AI's boundaries, they achieved a more significant increase in their scores (Engel & Reich, 2015).
This indicates that while AI can help design students produce sketches with greater originality and usefulness, its support may be limited to optimizing existing ideas or providing incremental improvements. Students might use AI to refine sketch details, enhancing efficiency and surface quality without significantly advancing their core design abilities. Although the research focus differs, this result is similar to the findings of Lawasi et al. (2024), who concluded that AI can significantly aid in developing critical thinking skills but that its effectiveness depends on the students' thinking abilities. This echoes the phenomenon we observed, where the impact of AI assistance is limited by the students' own traits. This underscores the need for educational practices to evolve alongside technological tools, ensuring students transition from superficial imitation to mastering and applying their skills effectively.
The impact of AI on students' HOTS
This study analyzed students' HOTS before and after using AI, revealing that AI subtly influences cognitive processes in complex problem-solving. While these effects may not always be reflected in outcomes, they provide valuable insights into AI's future role in enhancing thinking skills. After using AI, the number of participants' HOTS increased significantly, indicating that AI can stimulate deeper thinking by offering multiple design options or real-time feedback. With diverse AI-generated choices, students engage in deeper analysis and continuous evaluation, supporting HOTS development. AI helps students adopt a global perspective, clarifying the relationships between factors and enabling better understanding of the problem's structure, thereby facilitating HOTS application in analysis and synthesis.
The types of connections between HOTS also changed. Phase1 and phase3 focused on "understanding, evaluating, and proposing ideas", with evaluative thinking as the central HOTS. In phase1, evaluation was closely linked to inference, analysis, and explanation, with critical thinking as the core skill. This suggests that when students worked independently, their limited knowledge and methods activated prior knowledge, constraining their cognitive scope (Treffinger et al., 2008). The thinking connections were mostly linear within a single category. Think-aloud data analysis showed that in this phase, connections between evaluation, inference, analysis, and explanation primarily involved assessing the Pr-sketch's strengths and weaknesses. In phase3, evaluation remained frequent, followed by reflection, both connecting with information gathering to form a triadic connection, which became the core HOTS connection in this phase. From phase1 to phase3, evaluation shifted from the focal point to a node within a more complex cognitive network. For example, in phase1, P25 primarily focused on evaluating the feasibility and practicality of the Pr-sketch. However, in phase3, with the introduction of AI, P25 not only reconsidered the Pr-sketch itself but also sought additional information from the AI-generated content. Moreover, P25 actively reflected on the issues in their initial evaluation of the Pr-sketch in phase1 and subsequently re-evaluated it based on this new insight. This triangular connection suggests that, with AI support, students engaged more in systematic thinking, combining evaluation with reflection and using information gathering to validate and adjust ideas. This is exemplified by the study of Essel et al. (2024), which demonstrated that incorporating ChatGPT influenced students' critical and reflective thinking skills. AI integration in phase3 provided more resources, prompting students to rely on AI for information rather than solely on inference and analysis skills. They assessed whether AI-generated information aligned with their needs, shifting from autonomous inference and analysis to reflecting on and evaluating both AI-provided information and their own thinking processes. Thus, AI altered students' thinking processes.
Phase2 and phase5 focused on "redesign". Although the HOTS connection coefficients between these phases were similar, they displayed distinct characteristics in their strong connection types. In phase2, problem-solving thinking was central, linking problem identification, information gathering, and concept development. Think-aloud data showed that students mainly relied on personal judgment or analyzed existing features to identify issues and propose solutions. Although direct and efficient, this approach limited exploration of alternative ideas. This linear thinking pattern may restrict creativity, leading students to favor familiar solutions. In phase5, key connections spanned all three primary HOTS categories, forming a multi-threaded thinking pattern. Building on phase3 and phase4, AI-expanded information prompted deeper reflection on the Pr-sketch and Re-sketch, leading students to identify omissions, consider more variables, and integrate new elements. For example, in phase2, P11 primarily relied on the evaluation results of the Pr-sketch from phase1 and supplemented this with features of related products available in the market to address the defects in the Pr-sketch and complete the redesign. In phase5, P11 reflected on the Pr-sketch, the Re-sketch completed in phase2, and their own thought process from phase2, based on the AI-generated content. They extracted valuable elements from the AI output to further redesign the Pr-sketch, ultimately producing the Fi-sketch. AI often provided unexpected options, encouraging more creative exploration and critical evaluation, shifting students' thinking from a linear problem-to-solution approach to a more diverse application of HOTS.
A comprehensive analysis of the HOTS characteristics of high-impact and low-impact participants after using AI revealed that high-impact participants used HOTS more frequently. The number of HOTS in the comparison groups suggests that AI effectively stimulates a broader range of thinking skills, supporting complex problem-solving and better outcomes. High-impact participants demonstrated strong connections between evaluation, reflection, and information gathering, forming an epistemic network similar to that of phase3. While low-impact participants also showed connections between these elements, they primarily linked reflection with explanation, whereas high-impact participants connected reflection with information gathering. This suggests that the main difference between the groups lies in the transition between HOTS categories. High-impact participants likely utilized AI's functions more effectively, engaging in cyclical connections across multiple HOTS categories. In contrast, low-impact participants, though using multiple types of HOTS, relied more on AI for information retrieval and basic evaluation, with their connections being more linear and less complex. This indicates that AI literacy can help students integrate AI effectively and engage with and develop HOTS, as also suggested by Ngo and Hastie (2025).
Opportunities and challenges of AI application in education
Aligning educational practices with the evolution of technological tools is crucial. For design educators, the focus should shift towards helping students transition from users of technological tools to creators empowered by technology, fostering their ability to retain independence while enhancing their comprehensive thinking and application of HOTS with technological support. To achieve this, educators can implement strategies that cultivate students' HOTS. For instance, they can require students to reverse deduce AI-generated results, trace and analyze the AI's reasoning process, and attempt to demystify the AI "black box"; by doing so, students can identify potential logical flaws or unreasonable patterns, thereby enhancing their ability to evaluate and discern information. To clarify the boundaries of human and AI contributions in tasks, educators can encourage students to write reflection journals documenting their interactions with AI and highlighting the cognitive shifts triggered by AI; this helps to enhance students' metacognitive awareness and self-regulation, allowing them to maintain agency in the creative process. Educators can also guide students to engage in multidisciplinary collaboration, encouraging multi-perspective and multi-level discussions with AI. Additionally, establishing a peer review process allows students to evaluate each other's and AI's outputs, fostering collective identification of potential biases and creative opportunities, thereby refining and improving their work. These approaches help students develop HOTS, fostering deeper thinking and independence in problem-solving and innovation.
Traditional education often treats the components of HOTS separately, overlooking their internal mechanisms. This can lead students to neglect the multidirectional connections between thinking skills, resulting in a rigid application of HOTS (Resnick, 2023). Thus, while this study has demonstrated AI's potential in promoting HOTS, the differences in HOTS observed between high-impact and low-impact participants after AI intervention highlight the importance of students' HOTS connection patterns in problem-solving. In design education, educators can help students establish deep HOTS connections by designing targeted tasks and providing timely feedback while using AI tools. Furthermore, by leveraging the transparency of the AI reasoning process, educators can guide students to expand the objects of their reflection and promote the mutual connection and stimulation between their HOTS and the AI reasoning process. This cross-dimensional connection offers students new "HOTS networks" that promote the development of more advanced systems thinking.
If students become reliant on AI, it may hinder their critical thinking and initiative, reducing their use of HOTS and limiting their cognitive space, which negatively impacts their thinking and outcomes. However, students in this study showed skepticism toward AI-generated content. For example, P31, upon receiving a templated response from ChatGPT in phase3 and phase4, noted, "It seems to be applying a template, and I suspect it's making things up". This skepticism led P31 to critically evaluate the AI's output, reflecting on their own thinking and identifying opportunities for improvement. Such critical awareness is important, as AI sometimes generates partially factual responses to appear credible. This skepticism encourages students to actively challenge the credibility of AI and their own existing cognitive frameworks, attempting to infer the AI's reasoning path from its responses. It prompts students to critically supervise and assess the information received from AI, stimulating the application of HOTS and enhancing their thinking abilities. The emergence of skepticism provides an opportunity for a deeper exploration of cognitive patterns in design education, especially in developing students' systematic and rational thinking (Cwiakala, 2024). Educators can transform skepticism into a driving force for cognitive advancement, using the cognitive conflict between humans and AI to encourage students to break free from entrenched thinking patterns. For example, educators can guide students in collaborative tasks to compare AI outputs with peer feedback or their own reasoning through reflective exercises, thereby identifying differences in understanding, clarifying the sources of errors, and using these insights as an opportunity to foster knowledge construction and enrich perspectives. This will enable students to proactively challenge conventions and explore diverse solutions when solving complex problems.
AI is not just a passive feedback system but an active collaborator in the thinking process. By posing challenging questions, engaging in reverse reasoning, or providing information from different perspectives, AI stimulates students' HOTS, encouraging them not only to solve immediate problems but also to delve into the underlying logic and assumptions. Therefore, enhancing the transparency and interpretability of AI can improve students' understanding and enhance their learning experience. Through this process, students can develop clearer and more feasible cognitive paths.
As AI technology advances, enhancing students' AI literacy is essential for maximizing its educational potential. This study's findings extend to fields focused on complex problem-solving and innovation. For instance, in engineering education, educators can guide students to use AI in engineering design and decision-making, encouraging them to analyze AI's assumptions and optimization processes; AI can also be leveraged to generate multiple engineering design solutions while students perform rapid prototyping iterations, stimulating their creative thinking and evaluation skills and developing HOTS with technological support. In information science education, educators can guide students to reverse deduce AI-generated code or data patterns, evaluating the underlying logic and algorithms. In this process, students can try to identify potential biases or limitations in AI outputs, further cultivating a critical mindset and reflective awareness towards AI. In sociology education, educators can guide students to use AI to simulate social behavior outcomes, assess their impact on different groups, and engage in reflective analysis to propose optimizations, stimulating the integration and application of HOTS. Through these methods, AI provides new frameworks and approaches for developing thinking in educational practice.
Limitations
This study has several limitations. First, it focused only on students solving complex problems independently or with AI, without considering educators' crucial role in natural educational settings. Future research should explore AI use within teaching environments. Second, this study examined only design problems as complex problems. While this approach provides targeted insights, it limits the generalizability of the findings to a certain extent. Future research should expand to various disciplines and different complex task contexts in educational practice to enhance the broader applicability of the results. Third, the study has a small sample size, and the sample selection is relatively homogeneous, which limits its generalizability. Future research should therefore involve larger-scale studies and explore how students with varying levels of learning readiness, cultural backgrounds, or familiarity with AI respond differently to AI prompts, in order to analyze its applicability in different contexts. Additionally, because AI models are continuously updated, their generation logic and presentation may also change. The study of AI's impact on higher education will therefore be ongoing, and future research could repeat the same tasks after model updates to analyze more systematically the potential effects of model evolution on students' learning behaviors and cognitive performance.
Conclusion
AI's data collection and analysis capabilities can encourage students to engage in more extensive exploration and innovation during complex problem-solving. However, students' overreliance on AI and privacy concerns pose significant challenges. This study, focusing on HOTS and using ChatGPT as an example, reveals the differences and characteristics in HOTS application and task outcomes among design students before and after using AI to solve complex design problems. For RQ1, this study found that although students' sketch scores improved after using AI, the increase was minimal. This indicates that while AI is effective in optimizing details, its potential for driving design breakthroughs remains limited, and the impact also depends on the students' abilities. For RQ2, the analysis revealed that most students tended to directly adopt AI-generated content, and the number of components in their concepts increased significantly. This suggests that AI plays a positive role in expanding design options, but educators need to monitor students' cognitive engagement and initiative when using AI. For RQ3, students exhibited more frequent and diverse use of HOTS after using AI, with a more varied relational structure between different sub-HOTS. For RQ4, high-impact participants demonstrated more complex cognitive networks, showing the ability to switch flexibly between multiple HOTS categories, while low-impact participants' HOTS performance was more linear, relying more on AI for information retrieval and initial judgments. Additionally, high-impact participants used adaptation relationships more frequently in their sketches, demonstrating an advantage in design adjustment and cognitive flexibility. These results offer educators strategies for integrating AI into learning, helping them harness its potential in higher education while preventing misuse.
Design education, with its multidisciplinary nature and complex problems, is ideal for studying and developing students' HOTS. Based on this, the results of this study provide a reference for disciplines involving complex decision-making and innovation. By integrating AI into these disciplines, educators can better support students in maintaining critical thinking and creativity while fostering interdisciplinary knowledge integration. This study serves as a foundation for future research on using AI to enhance cognitive development in education and offers valuable insights for educators and students on effectively incorporating AI into the learning process.
Acknowledgements
The authors thank all the students who participated in the experiment.
Author contributions
Xing Du performed data collection, analysis, and manuscript writing; Mingcheng Du performed data collection and analysis; Zihan Zhou performed manuscript writing and data collection; Yiming Bai performed data analysis. All authors reviewed and approved the final manuscript.
Funding
This work is supported by the Philosophy and Social Science Foundation of Hunan Province (project number: 24YBQ117) and the Education Department of Hunan Province (project number: 202401000055).
Data availability
Data will be made available on reasonable request.
Declarations
Competing interests
The authors declare that they have no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
Ahmad, K; Qadir, J; Al-Fuqaha, A; Iqbal, W; El-Hassan, A; Benhaddou, D; Ayyash, M. Artificial intelligence in education: A panoramic review; 2020; [DOI: https://dx.doi.org/10.35542/osf.io/zvu2n]
Allen, T. Solving critical design problems: Theory and practice; 2019; Routledge: [DOI: https://dx.doi.org/10.4324/9780429398872]
Al-Othman, FH. The impact of good quality instructions of early education on the performance of university newcomers. Journal of Education and Learning; 2014; 3,
Altaleb, H; Mouti, S; Beegom, S. Enhancing college education: An AI-driven adaptive learning platform (ALP) for customized course experiences. In 2023 9th International Conference on Optimization and Applications; 2023; IEEE; pp. 1-5. [DOI: https://dx.doi.org/10.1109/ICOA58279.2023.10308834]
Al-Zahrani, AM. Unveiling the shadows: Beyond the hype of AI in education. Heliyon; 2024; 10,
Åman, P; Andersson, H; Hobday, M. The scope of design knowledge: integrating the technically rational and human-centered dimensions. Design Issues; 2017; 33,
Arastoopour, G; Shaffer, D; Swiecki, Z; Ruis, A; Chesler, N. Teaching and assessing engineering design thinking with virtual internships and epistemic network analysis. International Journal of Engineering Education; 2015; 32, pp. 1492-1501.
Aryal, S; Do, T; Heyojoo, B; Chataut, S; Gurung, BD; Gadhamshetty, V; Gnimpieba, EZ. Leveraging multi-AI agents for cross-domain knowledge discovery. arXiv preprint; 2024; [DOI: https://dx.doi.org/10.48550/arXiv.2404.08511]
Aryani, EJ; Wahyuni, S. An analysis of higher order thinking skills realization in reading comprehension questions. Language Circle: Journal of Language and Literature; 2020; 15,
Baty, S. Solving complex problems through design. Interactions; 2010; 17,
Bergstrom, T; Karahalios, K. Seeing more: Visualizing audio cues. In Baranauskas, C. (Ed.), IFIP Conference on Human-Computer Interaction; 2007; Springer:
Brink, PJ; Wood, MJ. Advanced design in nursing research; 1998; Sage Publications: [DOI: https://dx.doi.org/10.4135/9781452204840]
Brookhart, S. How to assess higher-order thinking skills in your classroom; 2010; ASCD:
Chandrasekaran, B. Design problem solving: A task analysis. AI Magazine; 1990; 11,
Chang, CC; Kuo, HC. "History is like an old story!": Navigating the trajectories of historical epistemic beliefs through epistemic network analysis in Taiwanese high school students' perspectives. Thinking Skills and Creativity; 2024; 51, [DOI: https://dx.doi.org/10.1016/j.tsc.2023.101410] 101410.
Cross, N. Design thinking: Understanding how designers think and work; 2023; Bloomsbury Publishing: [DOI: https://dx.doi.org/10.5040/9781350305090]
Cwiakala, M. To AI is human: How AI tools with their imperfections enhance learning. Journal of Systemics, Cybernetics and Informatics; 2024; 22,
Darvin, J. "Real-world cognition doesn't end when the bell rings": Literacy instruction strategies derived from situated cognition research. Journal of Adolescent & Adult Literacy; 2006; 49,
Di Ieva, A. AI-augmented multidisciplinary teams: Hype or hope?. The Lancet; 2019; 394,
Di, W; Danxia, X; Chun, L. The effects of learner factors on higher-order thinking in the smart classroom environment. Journal of Computers in Education; 2019; 6,
Engel, A; Reich, Y. Advancing architecture options theory: Six industrial case studies. Systems Engineering; 2015; 18,
Essel, HB; Vlachopoulos, D; Essuman, AB; Amankwa, JO. ChatGPT effects on cognitive skills of undergraduate students: Receiving instant responses from AI-based conversational large language models (LLMs). Computers and Education: Artificial Intelligence; 2024; 6, [DOI: https://dx.doi.org/10.1016/j.caeai.2023.100198] 100198.
Fox, MF; Sonnert, G; Nikiforova, I. Successful programs for undergraduate women in science and engineering: Adapting versus adopting the institutional environment. Research in Higher Education; 2009; 50,
Gero, JS; Mc Neill, T. An approach to the analysis of design protocols. Design Studies; 1998; 19,
Grace, EG; Vidhyavathi, P; Malathi, P. AI in education: Opportunities and challenges for personalized learning. Industrial Engineering Journal; 2023; 52,
Greiff, S; Fischer, A. Measuring complex problem solving: An educational application of psychological theories. Journal for Educational Research Online; 2013; 5, pp. 34-53.
Herrington, J. Authentic learning environments in higher education; 2005; IGI Global:
Hwang, GJ; Lai, CL. Facilitating and bridging out-of-class and in-class learning: An interactive e-book-based flipped learning approach for math courses. Journal of Educational Technology & Society; 2017; 20,
Hwang, GJ; Lai, CL; Liang, JC; Chu, HC; Tsai, CC. A long-term experiment to investigate the relationships between high school students' perceptions of mobile learning and peer interaction and higher-order thinking tendencies. Educational Technology Research and Development; 2018; 66,
Ivanov, S. The dark side of artificial intelligence in higher education. The Service Industries Journal; 2023; 43,
Kamalov, F; Santandreu Calonge, D; Gurrib, I. New era of artificial intelligence in education: Towards a sustainable multifaceted revolution. Sustainability; 2023; 15,
Krakowski, S; Luger, J; Raisch, S. Artificial intelligence and the changing sources of competitive advantage. Strategic Management Journal; 2023; 44,
Kshirsagar, PR; Jagannadham, DBV; Alqahtani, H; Noorulhasan Naveed, Q; Islam, S; Thangamani, M; Dejene, M. Human intelligence analysis through perception of AI in teaching and learning. Computational Intelligence and Neuroscience; 2022; 2022,
Lawasi, MC; Rohman, VA; Shoreamanis, M. The use of AI in improving student’s critical thinking skills. Proceedings Series on Social Sciences & Humanities; 2024; 18, pp. 366-370. [DOI: https://dx.doi.org/10.30595/pssh.v18i.1279]
Lee, HY; Chen, PH; Wang, WS; Huang, YM; Wu, TT. Empowering ChatGPT with guidance mechanism in blended learning: Effect of self-regulated learning, higher-order thinking skills, and knowledge construction. International Journal of Educational Technology in Higher Education; 2024; 21,
Leifer, LJ; Steinert, M. Dancing with ambiguity: Causality behavior, design thinking, and triple-loop-learning. Information Knowledge Systems Management; 2011; 10,
Li, W; Huang, JY; Liu, CY; Tseng, JC; Wang, SP. A study on the relationship between student' learning engagements and higher-order thinking skills in programming learning. Thinking Skills and Creativity; 2023; 49, [DOI: https://dx.doi.org/10.1016/j.tsc.2023.101369] 101369.
Liang, Y. Balancing: The effects of AI tools in educational context. Frontiers in Humanities and Social Sciences; 2023; 3,
Mafada, AA; Kusmayadi, TA; Fitriana, L. Identification of mathematical reasoning ability in solving higher order thinking skills problems. In International Conference on Learning Innovation and Quality Education; 2020; Atlantis Press:
Maxwell, T; Ertas, A; Tanik, MM. Harnessing complexity in design. Trans SDPS; 2002; 6, pp. 63-74.
McCormick, NJ; Clark, LM; Raines, JM. Engaging students in critical thinking and problem solving: A brief review of the literature. Journal of Studies in Education; 2015; 5,
Miri, B; David, BC; Uri, Z. Purposely teaching for the promotion of higher-order thinking skills: A case of critical thinking. Research in Science Education; 2007; 37,
Montenegro-Rueda, M; Fernández-Cerero, J; Fernández-Batanero, JM; López-Meneses, E. Impact of the implementation of ChatGPT in education: A systematic review. Computers; 2023; 12,
Morrison, J; Roth McDuffie, A; French, B. Identifying key components of teaching and learning in a STEM school. School Science and Mathematics; 2015; 115,
Murray, A; Rhymer, JEN; Sirmon, DG. Humans and technology: Forms of conjoined agency in organizations. Academy of Management Review; 2021; 46,
Ngo, TN; Hastie, D. Artificial Intelligence for Academic Purposes (AIAP): Integrating AI literacy into an EAP module. English for Specific Purposes; 2025; 77, pp. 20-38. [DOI: https://dx.doi.org/10.1016/j.esp.2024.09.001]
Ou, X; Goldschmidt, G; Erez, M. The effect of disciplinary diversity on design idea generation in dyadic teams. Design Studies; 2023; 86, [DOI: https://dx.doi.org/10.1016/j.destud.2023.101184] 101184.
Parimaladevi, P; Ahmad, A. The implementation of higher-level thinking skills (HOTS) in history education. In The 2nd International Conference on Sustainable Development and Multi-Ethnic Society; 2019; Redwhite Press:
Pratama, MP; Sampelolo, R; Lura, H. Revolutionizing education: Harnessing the power of artificial intelligence for personalized learning. Klasikal: Journal of Education, Language Teaching and Science; 2023; 5,
Raiyn, J; Tilchin, O. The impact of adaptive complex assessment on the HOT skill development of students. World Journal of Education; 2016; 6,
Rajkumar, N; Ramalingam, V. Cognitive intelligent tutoring system based on affective state. Indian Journal of Science and Technology; 2015; 8, pp. 1-10. [DOI: https://dx.doi.org/10.17485/ijst/2015/v8i24/80145]
Resnick, MS. Teachers' presentation of higher-order thinking questions and student engagement: Missing out on HOT opportunities. Thinking Skills and Creativity; 2023; 50, [DOI: https://dx.doi.org/10.1016/j.tsc.2023.101412] 101412.
Robert, A; Potter, K; Frank, L. The impact of artificial intelligence on students' learning experience. Wiley Interdisciplinary Reviews: Computational Statistics.; 2024; [DOI: https://dx.doi.org/10.2139/ssrn.4716747]
Song, B; Gyory, JT; Zhang, G; Zurita, NFS; Stump, G; Martin, J; Miller, S; Balon, C; Yukish, M; McComb, C; Cagan, J. Decoding the agility of artificial intelligence-assisted human design teams. Design Studies; 2022; 79, [DOI: https://dx.doi.org/10.1016/j.destud.2022.101094] 101094.
Strzelecki, A. To use or not to use ChatGPT in higher education? A study of students' acceptance and use of technology. Interactive Learning Environments.; 2023; [DOI: https://dx.doi.org/10.1080/10494820.2023.2209881]
Sun, H; Xie, Y; Lavonen, J. Exploring the structure of students' scientific higher order thinking in science education. Thinking Skills and Creativity; 2022; 43, [DOI: https://dx.doi.org/10.1016/j.tsc.2022.100999] 100999.
Treffinger, DJ; Selby, EC; Isaksen, SG. Understanding individual problem-solving style: A key to learning and applying creative problem solving. Learning and Individual Differences; 2008; 18,
Van der Burg, V; Salah, AA; Chandrasegaran, RSK; Lloyd, PA. Ceci n'est pas une chaise: Emerging practices in designer-AI collaboration. In DRS 2022; Design Research Society; 2022; [DOI: https://dx.doi.org/10.21606/drs.2022.653]
Wadinambiarachchi, S; Kelly, RM; Pareek, S; Zhou, Q; Velloso, E. The effects of generative AI on design fixation and divergent thinking. Proceedings of the CHI Conference on Human Factors in Computing Systems; 2024; [DOI: https://dx.doi.org/10.1145/3613904.3642919]
Widyaningrum, D; Utaminingsih, S. HOTS-based scientific learning to increase the comprehension concept and science students skill. Journal of Physics: Conference Series.; 2021; [DOI: https://dx.doi.org/10.1088/1742-6596/1823/1/012092]
Wu, F; Hsiao, SW; Lu, P. An AIGC-empowered methodology to product color matching design. Displays; 2024; 81, [DOI: https://dx.doi.org/10.1016/j.displa.2023.102623] 102623.
Yilmaz, R; Yilmaz, FGK. Augmented intelligence in programming learning: Examining student views on the use of ChatGPT for programming learning. Computers in Human Behavior: Artificial Humans; 2023; 1,
Yulianti, PD; Yandhini, S; Sari, ADP; Herawani, I; Oktarini, I. The influence of AI on students' mind patterns. BICC Proceedings; 2024; 2, pp. 183-186. [DOI: https://dx.doi.org/10.3983/bicc.v1i1.125]
Zhang, S; Chen, J; Wen, Y; Chen, H; Gao, Q; Wang, Q. Capturing regulatory patterns in online collaborative learning: A network analytic approach. International Journal of Computer-Supported Collaborative Learning; 2021; 16,
Zhang, S; Zhao, X; Zhou, T; Kim, JH. Do you have AI dependency? The roles of academic self-efficacy, academic stress, and performance expectations on problematic AI usage behavior. International Journal of Educational Technology in Higher Education; 2024; 21,
Zhou, E; Lee, D. Generative AI, human creativity, and art. SSRN; 2023; [DOI: https://dx.doi.org/10.2139/ssrn.4594824]
Zohar, A; Dori, YJ. Higher order thinking skills and low-achieving students: Are they mutually exclusive?. Journal of the Learning Sciences; 2003; 12,
© The Author(s) 2025. This work is published under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/).