Abstract

The reliability and validity of professionally written multiple-choice exams have been extensively studied for exams such as the SAT, Graduate Record Examination, and the Force Concept Inventory. Much of the success of these multiple-choice exams is attributed to the careful construction of each question, as well as each response. In this study, the reliability and validity of scores from multiple-choice exams written for and administered in the large introductory physics courses at the University of Illinois at Urbana-Champaign were investigated. The reliability of exam scores over the course of a semester results in approximately a 3% uncertainty in students’ total semester exam score. This semester test score uncertainty yields an uncertainty in the students’ assigned letter grade that is less than 1/3 of a letter grade. To study the validity of exam scores, a subset of students was ranked independently based on their multiple-choice score, graded explanations, and student interviews. The ranking of these students based on their multiple-choice score was found to be consistent with the ranking assigned by physics instructors based on the students’ written explanations (r > 0.94 at the 95% confidence level) and oral interviews (r = 0.94, +0.06/−0.09).
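For scale, if adjacent letter-grade cutoffs sit roughly 10 percentage points apart (an assumption, not a figure from the abstract), a 3% score uncertainty corresponds to about 0.3 of a letter grade, consistent with the "less than 1/3" figure. The abstract also does not specify the statistical procedure behind the quoted correlations, but a comparison of independent rankings of the same students is commonly done with Spearman's rank correlation, and an asymmetric interval such as +0.06/−0.09 is characteristic of a bootstrap estimate near the r = 1 boundary. The Python sketch below illustrates one plausible way such a point estimate and 95% confidence interval could be computed; the rankings mc_rank and instr_rank are hypothetical, invented for illustration.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)

    # Hypothetical data: paired rankings of the same students, one derived
    # from multiple-choice scores and one assigned by instructors.
    mc_rank = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
    instr_rank = np.array([1, 3, 2, 4, 5, 7, 6, 8, 10, 9])

    # Point estimate of the Spearman rank correlation.
    r, _ = spearmanr(mc_rank, instr_rank)

    # Percentile bootstrap over students for a 95% confidence interval.
    n = len(mc_rank)
    boot = []
    for _ in range(10_000):
        idx = rng.integers(0, n, size=n)  # resample student pairs with replacement
        rb, _ = spearmanr(mc_rank[idx], instr_rank[idx])
        boot.append(rb)
    lo, hi = np.percentile(boot, [2.5, 97.5])

    print(f"r = {r:.2f}, 95% CI: ({lo:.2f}, {hi:.2f})")

Because r is bounded above by 1, the resampled estimates pile up near that boundary, which is why the upper error bar on such an estimate typically comes out smaller than the lower one.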

Details

Title
Evaluating multiple-choice exams in large introductory physics courses
Author
Scott, Michael; Stelzer, Tim; Gladding, Gary
Section
ARTICLES
Publication year
2006
Publication date
Jul-Dec 2006
Publisher
American Physical Society
e-ISSN
1554-9178
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2550488892
Copyright
© 2006. This work is licensed under http://creativecommons.org/licenses/by/3.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.