Abstract
Decision-making tends to be more accurate and of higher quality when there is a sensible harmony between self-confidence and actual capabilities. Overconfidence makes it difficult to set realistic goals in academic settings and increases the likelihood of facing failure. In this study, the academic achievements and overconfidence of students enrolled in the Database Management Systems course were examined. The research also aimed to determine whether there is a difference between the midterm and final exams in terms of these variables. The participants comprised students enrolled in the Computer and Instructional Technologies Education department of a state university during the 2021-2022 academic year. The results indicated that approximately two-thirds of the students did not achieve satisfactory academic scores. Students struggled to accurately assess their exam performances, and a significant number of them overestimated their positions in both the midterm and final exams. Furthermore, there was no significant change between the midterm and final exams for any of the three variables.
Keywords: Academic Success, Computer and Instructional Technologies Education, Teacher Candidates, Database Management System, Overestimation, Overplacement
Öz
Karar verme, özgüven ile gerçek yetenek arasında mantıklı bir uyum olduğunda daha doğru ve yüksek kalitede olma eğilimindedir. Aşırı özgüven, akademik ortamlarda gerçekçi hedefler belirlemeyi zorlaştırır ve başarısızlıkla karşılaşma olasılığını artırır. Bu çalışmada Veri Tabanı Yönetim Sistemleri dersine kayıtlı öğrencilerin akademik başarıları ve aşırı güvenleri incelenerek, ara sınav ve final sınavları arasında bu değişkenlerde farklılık olup olmadığı analiz edilmiştir. Katılımcılar, 2021-2022 akademik yılında bir devlet üniversitesinin Bilgisayar ve Öğretim Teknolojileri Eğitimi bölümüne kayıtlı öğrencilerden oluşmaktadır. Sonuçlar, öğrencilerin yaklaşık üçte ikisinin akademik başarı puanlarının yeterli seviyede olmadığını göstermektedir. Öğrenciler, sınav performanslarını doğru bir şekilde değerlendirmekte zorlanmış ve birçoğu ara sınav ve finalde pozisyonlarını abartmıştır. Ayrıca, ara sınav ve final arasında her üç faktörde de anlamlı bir değişiklik olmadığı belirlenmiştir.
Anahtar kelimeler: Akademik Başarı, Bilgisayar ve Öğretim Teknolojileri Eğitimi, Öğretmen Adayları, Veri Tabanı Yönetim Sistemi, Aşırı Tahmin, Abartılı Konumlandırma
INTRODUCTION
Overconfidence is often described as the misalignment of subjective probabilities. Research on overconfidence emphasizes the importance of individuals being aware of both their known and unknown knowledge. It generally assumes that when there's a sensible harmony between one's self-confidence and their actual capabilities, their decision-making tends to be more accurate and of higher quality (Paese & Sniezek, 1991).
Overconfidence often tricks individuals into believing they perform at a higher level than they actually do. Expressions like "ignorant courage," deeply embedded in Turkish culture for centuries, shed light on this notion through age-old proverbs (Somyürek & Çelik, 2018). In the literature, overconfidence manifests in three distinct forms (Moore & Healy, 2008): overestimation, overplacement, and overprecision. Overestimation involves perceiving one's performance as superior to reality (Moore & Schatz, 2017). This tendency leads individuals to exaggerate their probability of success. Overplacement signifies an inflated belief in one's superiority over others (Moore & Healy, 2008). The third form, overprecision, denotes excessive trust in the absolute correctness of one's beliefs (Moore et al., 2015).
The literature on overconfidence reveals a common trend: individuals harbor an unwavering and often overly confident belief in their abilities. When someone lacks awareness of their limited expertise in decision-making and relies excessively on their own judgment, errors in decision-making become inevitable. This tendency complicates setting realistic goals, fosters unfavorable assessments, and increases the likelihood of students facing failure. Thus, overconfidence poses an obstacle to academic achievement (Bol et al., 2005; Miller & Geraci, 2011). For instance, Hacker et al. (2000) discovered in their study involving ninety-nine undergraduate students that many overestimated their performance in upcoming exams, predicting scores 30% higher than their actual results. While some students tend to assess their performance overly positively, others exhibit the opposite behavior.
Erdemir and Somyürek (2023) summarize that various data collection methods are available for assessing overestimation, overplacement, and overprecision, with each method designed to suit specific measurement approaches. The literature underscores that inconsistent measurement of overconfidence often leads to methodological inaccuracies (Olsson, 2014; Schanbacher, 2013). Therefore, the use of precise instruments and analyses, aligned with the research context, is essential for accurately identifying overconfidence.
The Computer and Instructional Technologies Education undergraduate program integrates computer science and educational technology to prepare students with critical skills. A core component of this program, the Database Management Systems course, plays a crucial role in achieving the program's objectives by providing comprehensive knowledge and skills. Douglas and Van Der Vyver (2004) emphasize the importance of this course in information systems undergraduate programs, highlighting its impact on graduates' success. As part of the Science, Technology, Engineering, and Mathematics (STEM) disciplines, courses like Database Management Systems and Programming often exhibit lower performance levels compared to other courses in the department. While experimental studies have examined the effects of instructional materials in e-learning (Douglas & Van Der Vyver, 2004) and mobile learning (Gezgin, 2019) on students' performance in these courses, there is a notable lack of research focusing solely on student performance without such interventions. Investigating this aspect could validate or challenge the observed trends in course performance. Furthermore, the development of appropriate measurement tools and the implementation of quantitative studies could yield reliable, valid, and generalizable assessments of academic achievement, thereby providing a solid foundation for future research findings.
Among the various factors influencing academic achievement, overconfidence emerges as a significant determinant. Research indicates that overconfidence can negatively impact performance across a diverse range of tasks (Erat et al., 2022; Hacker et al., 2008; Mooi, 2006; Nowell & Alston, 2007). While previous studies have extensively explored overconfidence, research specifically focused on the Database Management Systems course has primarily been experimental, examining the effects of open student modeling and social open student modeling on students' overconfidence (Somyürek et al., 2020). However, there is a noticeable paucity of studies that investigate this phenomenon without the influence of interventions. This study aims to address these gaps by examining the academic performance of students enrolled in the Computer and Instructional Technologies Education undergraduate program, particularly in the Database Management Systems course. It seeks to evaluate their performance in midterm and final examinations, as well as their levels of overconfidence. Additionally, the study aims to identify any differences between midterm and final exam outcomes concerning academic achievement and overconfidence.
Within this scope, the research will explore the following questions:
1. What is the distribution of students' academic achievement scores in the midterm and final exams?
2. Is there a significant difference in students' academic achievement scores between the midterm and final exams?
3. What is the distribution of students' overestimation in the midterm and final exams?
4. Is there a significant change in students' overestimation between the midterm and final exams?
5. What is the distribution of students' overplacement in the midterm and final exams?
6. Is there a significant change in students' overplacement between the midterm and final exams?
METHOD
Research Design
The survey model was used to examine the academic achievement scores and overconfidence, in the midterm and final exams, of students enrolled in the Database Management Systems course of the Computer and Instructional Technologies Education undergraduate program. Additionally, a repeated measures design was employed to analyze any discrepancies between midterm and final academic achievement scores and overconfidence.
Sample
The study group consisted of 17 students (5 male, 12 female) enrolled in the Computer and Instructional Technologies Education department of an education faculty at a state university during the 2021-2022 academic year and registered in the Database Management Systems course.
Data Collection Tools and Procedure
In this study, we developed two academic achievement tests to assess students' performance and evaluate potential overconfidence in the Database Management Systems course. The first test consisted of 10 multiple-choice questions covering topics taught up to the midterm, while the second test comprised 15 questions. Throughout the test development phase, we actively sought expert opinions to ensure the content validity and appropriateness of the questions, and adjustments were made based on these insights.
Our collaboration with three experts played a crucial role in this process, and to gauge the accuracy with which the items captured the essence of the content domain, we calculated the Content Validity Index (CVI). The CVI, a measure of content validity, is computed through item CVI and total CVI. Item CVI reflects the appropriateness of each item based on experts' assessments, calculated as "Item CVI = Number of Positive Evaluations / Total Number of Experts." Total CVI, an aggregation of item CVIs across the entire test, is calculated as "Total CVI = Sum of Item CVIs / Total Number of Items." Notably, our consistently high Total CVI values for both the midterm and final achievement tests (above 0.80) indicate robust content validity. This suggests that the tests effectively measure the intended content domain, providing a reliable basis for evaluating students' understanding and performance in the Database Management System course.
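For illustration, the CVI computation described above can be sketched in a few lines of Python; the expert ratings below are hypothetical, not data from the study:

# Hypothetical ratings: 1 = expert judged the item appropriate, 0 = not.
expert_ratings = [
    [1, 1, 1],  # item 1, one rating per expert
    [1, 1, 0],  # item 2
    [1, 1, 1],  # item 3
]

# Item CVI = Number of Positive Evaluations / Total Number of Experts
item_cvis = [sum(ratings) / len(ratings) for ratings in expert_ratings]

# Total CVI = Sum of Item CVIs / Total Number of Items
total_cvi = sum(item_cvis) / len(item_cvis)

print(item_cvis)  # [1.0, 0.666..., 1.0]
print(total_cvi)  # 0.888..., above the 0.80 threshold reported above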
To ensure that the tests are perceived as relevant and appropriate, feedback on the developed achievement tests was collected from the same three experts. Based on this feedback, it was concluded that the measurement tools also have high face validity. After the test implementation, we conducted a comprehensive assessment of the test items' difficulty and discrimination. The difficulty levels of the items were determined using the Item Difficulty Index, as outlined by Sözbilir (2010). Items were classified into categories based on their difficulty index: "Very Difficult" (0.00-0.19), "Difficult" (0.20-0.34), "Moderate" (0.35-0.65), "Easy" (0.65-0.79), and "Very Easy" (0.80-1.00), providing a nuanced understanding of the items' complexity.
Simultaneously, we employed the evaluation intervals presented by Özçelik (1992) to assess item discrimination. Items with a discrimination value below 0.19 were labeled as "Very Low," those within the range of 0.20-0.29 were categorized as "Needs Revision," items scoring between 0.30 and 0.39 were deemed "Good, Acceptable," and those surpassing 0.40 were characterized as "Very Good, Acceptable." This dual evaluation approach allows for a thorough examination of both difficulty and discrimination levels, providing valuable insights into the overall effectiveness of the test items.
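To make the two indices concrete, the sketch below computes them for a hypothetical 0/1 response matrix. The text does not state how discrimination was calculated, so the classic upper-lower 27% group method is assumed here purely for illustration:

# Hypothetical response matrix: rows = students, columns = items; 1 = correct.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
]

n_students = len(responses)
totals = [sum(row) for row in responses]
# Rank students by total score and form upper/lower 27% groups (assumption).
order = sorted(range(n_students), key=lambda i: totals[i], reverse=True)
k = max(1, round(0.27 * n_students))
upper, lower = order[:k], order[-k:]

for j in range(len(responses[0])):
    difficulty = sum(row[j] for row in responses) / n_students  # Item Difficulty Index
    discrimination = (sum(responses[i][j] for i in upper)
                      - sum(responses[i][j] for i in lower)) / k
    print(f"item {j + 1}: difficulty = {difficulty:.2f}, discrimination = {discrimination:.2f}")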
Calculations reveal that the difficulty and discrimination indices for the midterm exam align with those shown in Table 1.
Upon assessing the difficulty levels and discriminative values of the midterm exam questions within these specified evaluation ranges, it was observed that questions 1, 3, and 8 were not suitable. Consequently, these questions were excluded from the achievement test.
A similar rigorous process was applied to ensure the validity and reliability of the academic achievement test used for the final exam. Table 2 showcases the difficulty and discriminative values of the final exam questions. As a result, questions 6, 9, 11, and 14 were excluded from the final assessment.
As a result, the academic achievement test tailored for the midterm comprises 7 questions, while the final exam's assessment includes 11 questions.
In order to determine students' overconfidence, additional questions assessing overestimation and overplacement were incorporated into the academic achievement tests prepared for both the midterm and final exams. Overestimation is characterized by an individual overestimating their actual performance, abilities, or success, displaying overconfidence in their knowledge (Moore & Healy, 2008). In this context, students were asked to predict whether they would correctly answer each question in either the midterm or final exam's achievement test. An illustrative sample question is provided in Figure 1.
Overplacement is characterized by an individual's belief that they outperform others and excel more than their peers (Moore & Healy, 2008). To measure overplacement, at the end of all questions in the academic achievement test for either the midterm or final exam, students were asked about their perceived level of success compared to their peers during the test (Figure 2).
Figure 2. Overplacement item: "Assess your performance in this exam compared to your classmates."
Subsequently, the same procedure was applied to the students in the final exam, which likewise included questions examining their overestimation; at the conclusion of the exam, a question related to overplacement was presented to the students.
Data Analysis
To compute students' academic achievement test scores in both the midterm and final exams, the questions remaining after item analysis were used. The maximum and minimum values attainable in each exam were calculated, and students' midterm and final test scores were then transformed into standardized scores using the maximum-minimum normalization method. This transformation placed the achievement scores within a range of 0 to 1: the lowest possible achievement score (0) was normalized to 0, while the highest possible score (100) was normalized to 1. Scores approaching 1 indicate higher achievement, while scores closer to 0 signify lower achievement.
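A minimal sketch of this transformation, assuming raw scores on a 0-100 scale as described above:

# Max-min normalization: maps a raw score onto the 0-1 range.
def normalize(score, minimum=0, maximum=100):
    return (score - minimum) / (maximum - minimum)

print(normalize(0))    # 0.0, the lowest possible achievement score
print(normalize(100))  # 1.0, the highest possible achievement score
print(normalize(71))   # 0.71, e.g. a raw score of 71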
Students' overconfidence was examined through overestimation and overplacement scores. To compute overestimation, a confusion matrix, depicted in Table 3, was first established. In this matrix, 'a' represents the number of items the participant predicted they would answer correctly and did answer correctly, while 'd' represents the number of items the participant predicted they would answer incorrectly and indeed answered incorrectly. Hence, 'a' and 'd' capture the instances in which the participant correctly predicted their performance on the exam. Conversely, 'b' represents the number of items the participant was confident they would answer correctly but answered incorrectly, while 'c' represents the number of items the participant doubted their ability to answer but answered correctly. 'b' and 'c' therefore capture the instances in which the participant's predictions were incorrect.
The accuracy of knowledge monitoring is derived through the following formula based on the scores obtained in the confusion matrix.
Knowledge Monitoring Assessment = ((a + d) - (b + c)) / (a + b + c + d)
The computed score ranges between -1 and 1. A score approaching 1 indicates accurate predictions regarding one's own success, while a score nearing -1 signifies very low awareness of one's performance on the exam. This score is used to evaluate overestimation.
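As a worked example, the formula can be applied directly to the confusion-matrix counts; the counts below are hypothetical:

# Knowledge monitoring accuracy from the confusion-matrix counts a, b, c, d.
def kma(a, b, c, d):
    return ((a + d) - (b + c)) / (a + b + c + d)

print(kma(a=4, b=1, c=1, d=1))  # 0.43: predictions mostly accurate
print(kma(a=1, b=3, c=3, d=0))  # -0.71: predictions mostly inaccurate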
To compute overplacement, students were asked to compare their success relative to other students using a 5-point Likert scale question. To determine students' actual placements, the midterm and final grades were categorized into 5 tiers. The difference between estimated and actual placement was then calculated to derive overplacement: negative values indicate underplacement, whereas positive values indicate overplacement.
Overplacement = Estimated Position - Actual Position (Larrick et al., 2007)
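A brief sketch of this computation follows. The exact rule used to bin grades into the five tiers is not specified in the text, so a simple equal-width binning of the normalized 0-1 grades is assumed here:

# Map a normalized grade (0-1) onto 5 tiers: 1 = lowest, 5 = highest (assumption).
def tier(normalized_grade):
    return min(int(normalized_grade * 5) + 1, 5)

estimated_position = 4        # the student's self-reported tier (5-point Likert item)
actual_position = tier(0.45)  # a normalized grade of 0.45 falls into tier 3
print(estimated_position - actual_position)  # 1: positive, i.e. overplacement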
To answer the first, third, and fifth research questions, descriptive statistics such as frequency and percentage were used. Before addressing the second, fourth, and sixth research questions, a normality test was conducted; it indicated a normal distribution of achievement scores for both the midterm and final exams, whereas the overestimation and overplacement scores for both exams did not follow a normal distribution. Accordingly, the second research question was analyzed using the t-test for related samples, while the fourth and sixth research questions were examined using the non-parametric Wilcoxon signed-rank test. The analyses were performed using SPSS 27.0.
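The reported analyses were run in SPSS; for readers who prefer an open-source route, the same tests can be reproduced with SciPy, as in this sketch on hypothetical score arrays:

from scipy import stats

# Hypothetical normalized achievement scores for the same 8 students.
midterm = [0.43, 0.57, 0.71, 0.29, 0.43, 0.57, 0.14, 0.71]
final   = [0.45, 0.55, 0.73, 0.36, 0.36, 0.64, 0.18, 0.64]

# Normality check that guides the choice of test.
print(stats.shapiro(midterm), stats.shapiro(final))

# RQ2: paired-samples t-test (achievement scores were normally distributed).
print(stats.ttest_rel(midterm, final))

# RQ4 and RQ6: Wilcoxon signed-rank test for the non-normally distributed
# overestimation and overplacement scores (placeholder values below).
overestimation_mid = [0.14, -0.43, 0.71, 0.00, 0.14, -0.14, 0.43, 0.14]
overestimation_fin = [0.27, -0.27, 0.82, 0.09, 0.00, -0.36, 0.27, 0.09]
print(stats.wilcoxon(overestimation_mid, overestimation_fin))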
Research Ethics
In this study, all ethical procedures have been followed. All participants have been informed about the purpose, process, and ethical rights of the research.
FINDINGS
This section presents the results derived from the analyses conducted in response to the research questions.
Regarding the first research question, the distribution of 'students' midterm and final grades' is illustrated in Figure 3.
Students' midterm scores ranged from 0 to 0.71, while their final scores ranged from 0.09 to 0.91. Four students achieved the highest midterm score of 0.71, whereas one student (S2) scored 0 in the midterm exam. In the final exam, two students attained the highest score of 0.91. Among the participants, eight students scored higher in the final exam than in the midterm, while nine students performed better in the midterm exam compared to the final.
The analyses conducted to address the second research question, "Is there a significant change between students' midterm and final success scores?" are presented in Table 4.
Students' average score in the midterm was X = 0.436, whereas in the final exam the average increased to X = 0.448. However, this increase is not statistically significant (p > .05), indicating similar achievement levels in the midterm and final exams. Additionally, the mean scores suggest that overall student performance in both exams is generally low.
Regarding the third research question, the distribution of students' overestimations in the midterms and finals is presented in Figure 4.
The optimum accuracy of students' knowledge monitoring occurs when their predicted performance matches their actual performance, yielding a score of 1; hence, scores approaching 1 indicate more accurate predictions. The average accuracy of knowledge monitoring was 0.042 in the midterm and 0.08 in the final exam. These results suggest that students generally failed to accurately assess their level of knowledge. The figure shows that in the midterm, S14 made the most accurate prediction regarding their own success (0.714). In the final exam, S1 (0.818) and S10 (0.818) predicted their own success most accurately, signifying heightened awareness in that exam. The students who accurately predicted their success in the final exam also achieved the highest scores; this was not the case for the midterm, where the top four scorers did not have the highest knowledge monitoring accuracy scores, and the student ranked immediately after them had the highest accuracy score. S3 (-0.429) made the least accurate prediction in the midterm, while S16 (-0.636) did so in the final exam. Notably, for almost two-thirds of the students (f=12), knowledge monitoring accuracy scores did not differ substantially between the midterm and final exams: scores above 0 in the midterm tended to remain above 0 in the final, and vice versa. Additionally, 8 students scored above 0 in the midterm, while 9 students scored above 0 in the final exam.
Analyses examining whether there was a significant change between students' overestimations in the midterm and final exams are provided in Table 5 for the fourth research question.
Table 5 presents the results of the Wilcoxon signed-rank test regarding whether there is a significant change in students' accuracy of knowledge monitoring between the midterm and final exams. The analysis indicates that there is no significant difference between the accuracy of knowledge monitoring scores of students in their midterm and final exams (z=0.18, p>.05). However, considering the mean ranks and totals of the difference scores, it is evident that there are more negative ranks, implying that there is a higher number of students (N=11) whose accuracy of knowledge monitoring in the final exam is lower than in the midterm.
As part of the fifth research question, graphs depicting the distribution of students' overplacement in the midterm and final exams were created based on their estimated positions compared to their actual positions (Figure 5).
Figure 5 shows that 8 students overestimated their predicted positions in the midterm, indicating an overplacement, while 5 students underestimated their positions. Additionally, 4 students accurately predicted their positions, including S4, S9, S14, and S17. It is noticeable that other students showed a discrepancy of either 1 or 2 units between their predicted and actual positions.
When examining Figure 6, it is observed that in the final exam the estimated positions of 6 students were higher than their actual positions, indicating overplacement, while 5 students underestimated their positions and 5 students estimated them accurately. Since S2 did not answer the overplacement question in the final exam, data from 16 students were examined. The students who made accurate estimations were S3, S7, S11, S12, and S14. A difference of 1 unit was observed between the estimated and actual positions of the other students.
A collective analysis of both graphs reveals a fluctuation in the differences between estimated and actual positions during the midterm (-2, -1, 0, 1, and 2). However, these differences narrowed to -1, 0, and 1 in the final exam. In essence, participants did not exhibit deviations of -2 or 2 when situating themselves relative to their peers in the final, indicating a more accurate assessment. Furthermore, S1 and S6 consistently misjudged their positions in both the midterm and final exams, whereas S15 and S16 consistently overplaced their positions across both assessments. On the other hand, students S4, S9, and S17 accurately positioned themselves only in the midterm, while S3, S7, S11, and S12 did so only in the final. Remarkably, S14 was the sole student who correctly positioned themselves in both exams. Detailed outcomes of the Wilcoxon signed-rank test, investigating significant changes between students' overplacement in the midterm and final exams, are presented in Table 6.
Optimal positioning of students would yield a score of 0, meaning the estimated positions in both the midterm and final match the actual positions. Consequently, scores approaching 0 indicate a more successful estimation. For the Wilcoxon signed-rank test analysis, absolute differences between participants' estimated and actual positions were used. This was crucial because difference scores of -2 and 2 equally signify a 2-unit deviation from the actual position; an improvement is captured regardless of whether the score moved from -2 to 0 or from 2 to 0 between the midterm and final.
The results of the Wilcoxon test analysis indicate that there is no significant difference between students' overplacement in the midterm and final exams (z=1.155, p>.05). However, considering the rank sums and means of difference scores, it is observed that there are more negative ranks, indicating a higher number of students with overplacement scores lower in the final than in the midterm. This suggests a favorable improvement towards reduced overplacement. The difference between students' estimated positions in the midterm and final exams compared to their actual positions is presented in Figure 7.
Figure 7 illustrates an interesting trend: while 2-unit exaggeration was present in 4 students during the midterm, it completely vanished in the final assessments. The most frequent disparity observed consistently across both midterm and final exams was a 1-unit difference. Moreover, it's worth noting that 5 students accurately positioned themselves in the final exam, whereas 3 students did so during the midterm.
DISCUSSION & CONCLUSION
This study delved into the academic performance and overconfidence of undergraduate students enrolled in the Computer and Instructional Technologies Education program. It specifically scrutinized their performance in the Database Management Systems course during both the midterm and final exams, aiming to explore potential differences in academic achievement and overconfidence levels between the two assessments. It is essential to note that the study employed a convenience sampling approach to select second-year students of Computer and Instructional Technologies Education at a state university. While convenience sampling can be practical and efficient, it may introduce selection bias, limiting the generalizability of the findings to a broader population. The use of a non-randomized sampling method might result in a sample that does not fully represent the diversity of students in similar programs at different institutions.
Additionally, this sampling approach may not account for variations in student backgrounds, prior knowledge, or learning styles, which could impact the external validity of the study. It is crucial to recognize that the specific characteristics of the selected sample may influence the study's outcomes and limit the extent to which the findings can be applied to a more diverse student population.
To overcome this limitation in future research, it is recommended to adopt a more robust sampling strategy, such as random sampling. This approach would not only contribute to a more rigorous study design but also bolster the generalizability of the results. By incorporating random sampling, researchers can ensure a more representative selection of participants, thus offering a broader and more comprehensive understanding of the intricate relationship between academic performance, overconfidence, and the structure of courses in Database Management Systems education.

Findings of the study revealed a range in students' academic scores, from 0 to 0.71 in the midterm and 0.09 to 0.91 in the final exam. Notably, roughly a third of the students (N=4) achieved scores of 0.7 and above in both exams, indicating a notable level of success. Concerning low performance, 9 students scored below 0.4 in the final exam, while 6 students did so in the midterm. Scores for the other students ranged between 0.4 and 0.7. The class's average score was 0.436 for the midterm and increased slightly to 0.449 in the final exam. These results suggest a generally low average academic achievement for the class, with approximately two-thirds of the students falling short of a satisfactory level. Furthermore, no statistically significant difference was found between students' academic performance in the midterm and final exams, indicating a parallel performance level between the two assessments.
Tailoring database systems to meet ever-evolving market demands is a complex endeavor, calling for the collaborative efforts of experts from diverse backgrounds (Poščić et al., 2012). Given the pivotal role that database systems play in the successful implementation of information systems (Etemad & Küpçü, 2018; Morien, 2006), it is not surprising that virtually all computer-related programs require at least one course dedicated to database systems (Nagataki et al., 2013). However, the teaching of database analysis and design proves challenging due to its abstract and intricate nature (Connolly & Begg, 2007; Murray & Guimaraes, 2009). Studies consistently report that many students struggle to grasp the fundamental concepts of databases (Hamzah et al., 2019). Notably, Folorunso and Akinwale (2010) highlighted a significant deficiency in students' performance in SQL, an integral part of database courses, indicating a lack of understanding of its significance. In essence, the outcomes of our study echo findings across the international literature on students' academic achievement in database management systems courses.
Students' low academic achievement scores are believed to be influenced by the structure of the course. The Database Management Systems course is designed to encompass both theoretical and hands-on elements, aiming to instill the following proficiencies in students:
- Defining the core concepts of databases and database management systems.
- Elucidating the functionalities inherent in database management systems.
- Executing the sequential stages of database creation (encompassing requirement analysis, conceptual, logical, and physical modeling).
- Proficiency in querying, modifying, adding, and removing data using the SQL language.
- Competence in establishing and administering databases within a database management system.
Students engaging in the process of designing a database for an information system must adeptly employ problem-solving and analytical skills during the requirement analysis phase and the subsequent conceptual modeling through the creation of entity-relationship diagrams. Logical modeling marks the stage where data relationships, structures, and constraints within the database are defined, governed by specific rules and principles applied across varied contexts. Meanwhile, physical modeling involves creating databases in specific management systems such as MSSQL, MySQL, or Oracle, which enable querying and the implementation of database structures like triggers, stored procedures, and indexes to ensure data integrity, performance optimization, and security. Within the curriculum, students delve into Structured Query Language (SQL) to craft databases, execute queries, and construct these essential database structures. Crafting effective SQL queries necessitates a deep understanding of command functionalities, coupled with the ability to employ these commands in the right sequence and context. Ultimately, the Database Management Systems course offers a comprehensive fusion of theoretical knowledge and hands-on applications in database design, querying, and management, encompassing a broad spectrum of conceptual, operational, and high-level skills. The complexity of learning arises from the demand to employ these multifaceted skills, requiring increased effort from students.
Moreover, the Database Management Systems course runs parallel to Introduction to Programming/Computer Science courses, equipping students with analogous proficiencies. For example, the processes of requirement analysis and conceptual modeling mirror the analysis and design stages in programming. A meticulous and error-free execution of logical modeling is pivotal for program-database interaction and precise data processing, and programmers must adeptly wield SQL to facilitate seamless interaction between their applications and databases. Hence, the Database Management Systems course and programming courses share a direct connection, covering analysis, database creation, and SQL query writing. Additionally, both courses integrate similar cognitive and metacognitive processes and foster analytical, algorithmic, and problem-solving skills. Considering these interconnected aspects, it is evident that these courses are intricately entwined. A review of studies on Introduction to Programming courses in the global literature reveals a recurrent trend of low performance (Bennedsen & Caspersen, 2007; Watson & Li, 2014). Watson and Li's (2014) meta-analysis, consolidating fifty years of research on Introduction to Programming courses, unveils a consistent global failure rate (33.3%). Moderator analysis highlights minor variations in pass rates influenced by class level, country, and class size.
Drawing from the findings related to academic success, a beneficial recommendation for educational enhancement emerges. Offering supplementary resources and activities to assist students facing challenges can play a pivotal role in closing the academic performance gap. Furthermore, integrating practical scenarios and real-world applications into lessons is considered advantageous for refining students' proficiency in crafting effective SQL queries.
In this study, we delved into not just the academic achievements of students but also their overconfidence in exam performance. Overconfidence occurs when an individual believes in their knowledge and abilities to a greater extent than they truly possess. It can take forms such as overestimation, overplacement, and overprecision (Moore & Healy, 2008). Our aim was to explore students' overestimation and overplacement. To analyze students' overestimation, we employed knowledge monitoring accuracy scores, a method developed by Tobias and Everson (2002) and widely used since, which measures the gap between students' actual performance and their perceived confidence levels. Those who accurately evaluate their knowledge are thought to excel in filling gaps, staying updated, and adapting to new scenarios. Therefore, this trait, closely linked to overconfidence, is considered both a metacognitive skill and an influence on the learning process. For students, an optimal knowledge monitoring accuracy score of 1 indicates a perfect alignment between their anticipated and actual performance in both midterm and final exams. Conversely, a score of -1 implies a complete mismatch between predicted and actual outcomes. On average, students scored 0.042 in knowledge monitoring accuracy for the midterm and 0.08 for the final exam. These findings indicate that most students struggled to accurately gauge their exam performances. As expected, those who best predicted their final exam success were the highest achievers. However, among the top scorers in the midterm, the student with the second-highest grade displayed the best knowledge monitoring accuracy. Several researchers have noted that individuals with lesser abilities tend to exhibit more overconfidence (Kruger & Dunning, 1999; Miller & Geraci, 2011). This suggests that as proficiency grows, overconfidence diminishes and accuracy in self-assessment increases. The observation that the highest achievers were also the most accurate predictors supports this notion.
In exploring our fourth research question concerning significant differences in students' knowledge monitoring accuracy scores between the midterm and final exams, our findings from the Wilcoxon signed-rank test indicated no statistically significant difference (z=0.18, p>.05). This suggests that relying solely on students to predict their performance question by question may fall short in effectively boosting their awareness. However, it is important to note that while the statistical analysis did not reveal a significant change, we acknowledge the need for a nuanced interpretation of the results.
Upon closer examination of the mean ranks and totals of the difference scores, we observed a trend wherein more negative ranks were prevalent in the final exam. This suggests that a relatively larger group of students (N=11) exhibited lower accuracy of knowledge monitoring in the final exam compared to the midterm. Despite the lack of statistical significance, this observation encourages a deeper exploration of the potential implications of these variations, indicating trends in students' knowledge monitoring accuracy that require careful consideration. Future studies could explore these patterns more comprehensively to contribute to our understanding of students' awareness of their own knowledge.
It is also noteworthy that approximately two-thirds of the students (n=12) demonstrated consistent knowledge monitoring accuracy scores between the midterm and final exams. Further analysis revealed an interesting pattern: if a student had a positive knowledge monitoring accuracy score in the midterm, they tended to maintain a positive score in the final exam, and vice versa. A positive score, in this context, signifies a higher proportion of accurate estimations compared to inaccurate ones, even if the individual is not entirely sure about the correctness of specific answers. Conversely, a negative score indicates less than 50% awareness of the correct answers.
In light of the findings on overconfidence, it is recommended that incorporating activities or assessments focused on enhancing students' self-assessment skills into the curriculum would be beneficial for educational improvements. Moreover, implementing feedback mechanisms to assist students in aligning their perceived confidence with their actual performance could significantly contribute to fostering awareness.
The fifth segment of the research delved into students' overplacement discrepancies between their midterm and final assessments. In both the midterm and final exams, more students estimated positions higher than their actual standings. However, an intriguing shift emerged: there was a noteworthy uptick in students accurately predicting their positions in the final exam, accompanied by a decline in overestimation tendencies. Interestingly, nearly a third of the students demonstrated a marked improvement in their ability to predict their class rank in the final evaluation compared to their earlier predictions in the midterm. Nevertheless, the investigation concluded that there was no significant difference between students' overplacement tendencies from the midterm to the final assessment. Yet, upon deeper analysis, it became evident that a predominant number of discrepancy scores leaned towards the negative end. This points to a scenario where more students in the final assessment had overplacement scores lower than those observed during the midterm, implying a positive trajectory in reducing overestimation tendencies. Intriguingly, the absence of any instances of two-unit overplacement in the final assessment, a phenomenon observed in four students during the midterm, lends further support to this observation. The timing of grade releases after the midterm might have provided students a clearer perspective for realistically gauging their performance relative to their peers, contributing to a decrease in overplacement tendencies in the final assessment. Studies in educational psychology have revealed that when students are able to compare their progress with that of their classmates, their self-evaluations of their class standing tend to become more refined (Somyürek & Brusilovsky, 2015). This echoes the findings from existing literature in this domain.
The disclosure of midterm grades, albeit to a small extent, is seen as having the potential to slightly enhance students' awareness of their class standings, leading to more realistic self-assessments and potentially reducing tendencies for overplacement. Providing students with comparative feedback on their performance relative to their peers in educational settings is considered a valuable step in addressing the challenge of overplacement.
To assist students exhibiting high overconfidence and low academic success in the Database Management Systems course, future research should delve deeper into the underlying reasons for this phenomenon. Moreover, exploring how success and overconfidence are influenced by factors independent of or dependent on the subject matter, such as learners' personality traits and achievement goal orientations, is considered valuable. Developing intervention strategies based on these insights is also deemed beneficial.
Statements of Publication Ethics
In this study, the principles of publication ethics have been adhered to, and the ethical permission for the research has been approved by the Ethics Committee of Ankara Gazi University Institute of Educational Sciences with the document number E-77082166-302.08.01-664509 on May 26, 2023.
Researchers' Contribution Rate
All authors have equally contributed to this work.
Conflict of Interest
There is no conflict of interest in this study.
REFERENCES
Bennedsen, J., & Caspersen, M. E. (2007). Failure rates in introductory programming. SIGCSE Bulletin, 39(2), 32-36.
Bol, L., Hacker, D. J., O'Shea, P., & Allen, D. (2005). The influence of overt practice, achievement level, and explanatory style on calibration accuracy and performance. The Journal of Experimental Education, 73(4), 269-290.
Connolly, T. M., & Begg, C. E. (2007). Teaching database analysis and design in a web-based constructivist learning environment. In Web Information Systems and Technologies: International Conferences, WEBIST 2005 and WEBIST 2006 Revised Selected Papers (pp. 343-354). Springer Berlin Heidelberg.
Douglas, D. E., & Van Der Vyver, G. (2004). Effectiveness of e-learning course materials for learning database management systems: An experimental investigation. Journal of Computer Information Systems, 44(4), 41-48.
Erat, S., Demirkol, K., & Sallabas, M. E. (2022). Overconfidence and its link with feedback. Active Learning in Higher Education, 23(3), 173-187. https://doi.org/10.1177/1469787420981731
Erdemir, T., & Somyürek, S. (2023). Overconfidence and measurement methods: Literature review. Trakya Journal of Education, 13(2), 1402-1420.
Etemad, M., & Küpçü, A. (2018). Verifiable database outsourcing supporting join. Journal of Network and Computer Applications, 115, 1-19. https://doi.org/10.1016/j.jnca.2018.04.006
Gezgin, D. M. (2019). The effect of mobile learning approach on university students' academic success for database management systems course. International Journal of Distance Education Technologies (IJDET), 17(1), 15-30. https://doi.org/10.4018/IJDET.2019010102
Hacker, D. J., Bol, L., Horgan, D. D., & Rakow, E. A. (2000). Test prediction and performance in a classroom context. Journal of Educational Psychology, 92(1), 160. https://doi.org/10.1037/0022-0663.92.1.160
Hacker, D. J., Bol, L., & Keener, M. C. (2008). Metacognition in education: A focus on calibration. In J. Dunlosky & R. A. Bjork (Eds.), Handbook of metamemory and memory (pp. 429-455). Psychology Press.
Hamzah, M. L., Rukun, K., Rizal, F., & Purwati, A. A. (2019). A review of increasing teaching and learning database subjects in computer science. Revista Espacios, 40(26).
Keller, J. (2002). Blatant stereotype threat and women's math performance: Self-handicapping as a strategic means to cope with obtrusive negative performance expectations. Sex Roles, 47(3-4), 193-198.
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.
Lai Mooi, T. (2006). Self-efficacy and student performance in an accounting course. Journal of Financial Reporting and Accounting, 4(1), 129-146. https://doi.org/10.1108/19852510680001586
Larrick, R. P., Burson, K. A., & Soll, J. B. (2007). Social comparison and confidence: When thinking you're better than average predicts overconfidence (and when it does not). Organizational Behavior and Human Decision Processes, 102(1), 76-94. https://doi.org/10.1016/j.obhdp.2006.10.002
Miller, T. M., & Geraci, L. (2011). Unskilled but aware: Reinterpreting overconfidence in low-performing students. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(2), 502-506. https://doi.org/10.1037/a0021802
Moore, D. A., & Healy, P. J. (2008). The trouble with overconfidence. Psychological Review, 115(2), 502-517. https://doi.org/10.1037/0033-295X.115.2.502
Moore, D. A., Tenney, E. R., & Haran, U. (2015). Overprecision in judgment. In G. Keren & G. Wu (Eds.), The Wiley Blackwell handbook of judgment and decision making (Vol. 2, pp. 182-209). Wiley Blackwell.
Moore, D. A., & Schatz, D. (2017). The three faces of overconfidence. Social and Personality Psychology Compass, 11(8), e12331. https://doi.org/10.1111/spc3.12331
Morien, R. I. (2006). A Critical Evaluation Database Textbooks, Curriculum and Educational Outcomes. Director, 7.
Murray, M., & Guimaraes, M. (2009). Animated courseware support for teaching database design. Issues in Informing Science and Information Technology, 6, 201-211. https://doi.org/10.28945/1053
Nagataki, H., Nakano, Y., Nobe, M., Tohyama, T., & Kanemune, S. (2013, November). A visual learning tool for database operation. In Proceedings of the 8th Workshop in Primary and Secondary Computing Education (pp. 39-40). https://doi.org/10.1145/2532748.2532771
Nowell, C., & Alston, K. M. (2007). I thought I got an A! Overconfidence across the economics curriculum. The Journal of Economic Education, 38(2), 131-142. https://doi.org/10.3200/JECE.38.2.131-142
Olsson, H. (2014). Measuring overconfidence: Methodological problems and statistical artifacts. Journal of Business Research, 67(8), 1766-1770. https://doi.org/10.1016/j.jbusres.2014.03.002
Özçelik, D. A. (1992). Ölçme ve değerlendirme [Measurement and evaluation]. Ankara: ÖSYM.
Paese, P. W., & Sniezek, J. A. (1991). Influences on the appropriateness of confidence in judgment: Practice, effort, information, and decision-making. Organizational Behavior and Human Decision Processes, 48(1), 100-130. https://doi.org/10.1016/0749-5978(91)90008-H
Poščić, P., Subotić, D., & Ivašić-Kos, M. (2012, May). Developing the course Database Systems to respond to market requirements. In 2012 Proceedings of the 35th International Convention MIPRO (pp. 1141-1145). IEEE.
Schanbacher, P. (2013). Is the log score in line with forecasters' preferences?. International Journal of Applied Decision Sciences, 6(4), 406-430. https://doi.org/10.1504/IJADS.2013.056882
Steele, C. M., Spencer, S. J., & Aronson, J. (2002). Contending with group image: The psychology of stereotype and social identity threat. In Advances in experimental social psychology (Vol. 34, pp. 379-440). Academic Press. https://doi.org/10.1016/S0065-2601(02)80009-0
Somyürek, S., & Brusilovsky, P. (2015, October). Impact of open social student modeling on self-assessment of performance. In E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education (pp. 1181-1188). Association for the Advancement of Computing in Education (AACE).
Somyürek, S., Brusilovsky, P., & Guerra, J. (2020). Supporting knowledge monitoring ability: open learner modeling vs. open social learner modeling. Research and Practice in Technology Enhanced Learning, 15(1), 1-24.
Somyürek, S., & Çelik, İ. (2018). Dunning-Kruger sendromu ve öznel değerlendirmeler [Dunning-Kruger syndrome and subjective evaluations]. Eğitim Teknolojisi Kuram ve Uygulama, 8(1), 141-157.
Sözbilir, M. (2010). Madde analizi ve test geliştirme [Item analysis and test development].
Tobias, S., & Everson, H. T. (2002). Knowing what you know and what you don't: Further research on metacognitive knowledge monitoring. Research Report No. 2002-3. College Entrance Examination Board.
Watson, C., & Li, E. W. (2014, June). Failure rates in introductory programming revisited. In Proceedings of the 2014 Conference on Innovation & Technology in Computer Science Education (pp. 39-44). https://doi.org/10.1145/2591708.2591749