Advances in Knowledge
* - Designing well-constructed multiple choice questions (MCQs) is essential for assessing learning among medical students. Item analysis is an important scientific tool that provides information about the reliability and validity of MCQ items. However, item analysis studies are limited, particularly in medical schools in Arabian Gulf countries.
* - The findings of the current study will hopefully increase awareness of this measurement tool among medical education providers in the region.
Application to Patient Care
* - Designing appropriate MCQs improves the assessment and learning outcomes of medical students. High-quality medical education in the Arabian Gulf region will encourage the provision of enhanced healthcare services to local populations.
While assessment is an essential part of student learning, assessment tools need to be valid, reliable and objective and should reflect various achievement levels. Multiple choice questions (MCQs) should not only assess knowledge recall, but also measure other teaching objectives within Bloom’s taxonomy of learning, such as comprehension, application, analysis, synthesis and evaluation.1 Constructing a high-quality MCQ examination can be difficult and time-consuming; however, this approach is usually preferable to other types of assessment tools because it is objective and leaves little room for human bias, as answers to MCQs can be scored easily and reliably.2,3 In recent years, the most common type of MCQ employed in examinations has been the type A MCQ, which consists of a stem followed by four or five options, of which one is correct and the remainder serve as distractors.4,5
An item analysis assesses the reliability and validity of an examination by examining student performance on each MCQ and applying statistical analyses to determine whether an item should be retained, reviewed or discarded from the test. Common item analysis parameters include the difficulty index (DIFI), which reflects the percentage of correct answers out of total responses; the discrimination index (DI), also known as the point biserial correlation, which measures how well an item distinguishes between students with different levels of achievement; and distractor efficiency (DE), which indicates whether the distractors in an item are well chosen, i.e. plausible enough to be selected by students who do not know the correct answer. An ideal item has a DIFI of 30–70%, a DI of >0.2 and a DE of 100%.6,7
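To make these definitions concrete, the sketch below computes the three parameters for a single item. It is a minimal illustration rather than the study's own analysis: the function names, the 0/1 item scoring and the ≥5% cut-off used to label a distractor as functional are assumed conventions from the item-analysis literature, not details reported in this article.

```python
# Minimal sketch of the three item-analysis parameters described above.
# All names and the 5% "functional distractor" cut-off are illustrative.

def difficulty_index(responses, correct_option):
    """DIFI: percentage of correct answers out of total responses."""
    n_correct = sum(1 for r in responses if r == correct_option)
    return 100.0 * n_correct / len(responses)

def discrimination_index(item_scores, total_scores):
    """DI as the point biserial correlation between the item score
    (1 = correct, 0 = incorrect) and each student's total test score."""
    n = len(item_scores)
    mean_i = sum(item_scores) / n
    mean_t = sum(total_scores) / n
    cov = sum((i - mean_i) * (t - mean_t)
              for i, t in zip(item_scores, total_scores)) / n
    sd_i = (sum((i - mean_i) ** 2 for i in item_scores) / n) ** 0.5
    sd_t = (sum((t - mean_t) ** 2 for t in total_scores) / n) ** 0.5
    if sd_i == 0 or sd_t == 0:  # item answered uniformly; DI is undefined
        return 0.0
    return cov / (sd_i * sd_t)

def distractor_efficiency(responses, correct_option, options, cutoff=0.05):
    """DE: percentage of distractors that are functional, i.e. chosen by
    at least `cutoff` of students (5% is an assumed common convention)."""
    distractors = [o for o in options if o != correct_option]
    functional = sum(
        1 for d in distractors
        if responses.count(d) / len(responses) >= cutoff
    )
    return 100.0 * functional / len(distractors)

# Example: eight students answering a four-option item whose key is "A".
responses = ["A", "B", "A", "A", "C", "A", "D", "A"]
print(difficulty_index(responses, "A"))                              # 62.5
print(distractor_efficiency(responses, "A", ["A", "B", "C", "D"]))   # 100.0
```

Under the criteria cited above, an item computed this way would be flagged for review if, for example, its DIFI fell outside 30–70% or its DI was ≤0.2.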
At the end of their 10-week clinical rotation in the Department...