
Abstract

This paper presents the findings of a pilot study aimed at gaining deeper insights into student errors in solving mathematics tasks from the Czech national school-leaving examination (maturita), while also exploring the potential of artificial intelligence (AI) to support error analysis and provide targeted feedback. The study began with an analysis of publicly available CERMAT data, focusing on tasks that have consistently shown low success rates over the years. Based on this analysis, a subset of tasks was selected and further tested on students preparing for the exam. The results were compared with national statistics to validate the relevance of the identified difficulties. A revised version of the test was then developed and administered to a new cohort of students, enabling the collection of a dataset of real student solutions for qualitative error analysis. The study adopted a nuanced framework for error classification, distinguishing between "slips" (minor, often procedural errors) and "true errors" stemming from a lack of conceptual understanding. Emphasis was placed on understanding the nature and origin of these errors, their recurrence, and their implications for learning. Student work was analysed in all phases of the error-handling process, including detection, diagnosis, explanation, and correction. At the same time, the study evaluated selected AI tools, primarily ChatGPT 4.0, for their potential to solve exam-level mathematics tasks at the university level and to identify errors in student solutions. Multiple test items were processed through the AI system, and its responses were compared with those of students. Particular attention was given to the AI's behaviour when confronted with incorrect or incomplete answers. The results revealed both the promise and the limitations of current AI models in supporting formative assessment, particularly with respect to misinterpretation of task wording, difficulty in recognising alternative valid strategies, and occasional inconsistency in the quality of feedback. The findings contribute to the broader discussion on how AI can be effectively integrated into educational practice, not as a replacement for teacher judgement but as a supplementary tool to enhance student understanding, develop metacognitive skills, and improve preparation for high-stakes assessments such as the maturita exam.

Details

Title
Understanding and Supporting Student Problem Solving in Mathematics Exams with Artificial Intelligence
Publication title
Pages
113-121
Number of pages
10
Publication year
2025
Publication date
Oct 2025
Publisher
Academic Conferences International Limited
Place of publication
Kidmore End
Country of publication
United Kingdom
ISSN
2048-8637
e-ISSN
2048-8645
Source type
Conference Paper
Language of publication
English
Document type
Conference Proceedings
ProQuest document ID
3279067061
Document URL
https://www.proquest.com/conference-papers-proceedings/understanding-supporting-student-problem-solving/docview/3279067061/se-2?accountid=208611
Copyright
Copyright Academic Conferences International Limited 2025
Last updated
2025-12-05
Database
ProQuest One Academic