Background
The quality of assessment in undergraduate medical colleges remains underexplored, in large part because few validated instruments exist to measure it. Bridging the gap between established assessment standards and their practical application is crucial for improving educational outcomes. To address this, the ‘Assessment Implementation Measure’ (AIM) tool was designed to evaluate undergraduate medical faculty members’ perceptions of assessment quality. While the content validity of the AIM questionnaire has been established, sample size limitations have precluded determination of its construct validity and of a statistically defined cutoff score.
Objective
To establish the construct validity of the Assessment Implementation Measure (AIM) tool, and to statistically determine cutoff scores for the tool and its domains for classifying the quality of assessment implementation.
Methods
This study employed a cross-sectional validation design to establish the construct validity of the AIM tool and a statistically valid cutoff score for classifying the quality of assessment implementation as high or low. The sample comprised 347 undergraduate medical faculty members. Construct validity was established through exploratory factor analysis (EFA), reliability was assessed with Cronbach's alpha, and cutoff scores were derived from receiver operating characteristic (ROC) curve analysis.
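For readers who want to see how these three steps fit together computationally, the sketch below shows one way to run them in Python with the factor_analyzer and scikit-learn libraries. It is a minimal illustration, not the study's actual analysis code: the DataFrame name `responses`, the 0/1 reference vector `quality_label`, and the varimax rotation are assumptions, while the seven-factor solution and the 0.40 loading criterion follow the figures reported in the Results.

```python
# Minimal sketch of the validation pipeline: EFA, Cronbach's alpha, and ROC.
# Assumes `responses` is a pandas DataFrame of 30 Likert items (one row per
# faculty member) and `quality_label` is a 0/1 reference classification of
# assessment quality; both names are illustrative, not from the study itself.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from sklearn.metrics import roc_curve, roc_auc_score

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / total-score variance)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# 1. Exploratory factor analysis with a seven-factor solution
#    (varimax rotation is an assumption; the abstract does not name one).
efa = FactorAnalyzer(n_factors=7, rotation="varimax")
efa.fit(responses)
loadings = pd.DataFrame(efa.loadings_, index=responses.columns)
retained = loadings[loadings.abs().max(axis=1) >= 0.40].index  # keep items loading >= 0.40

# 2. Reliability of the retained item set (repeat per domain for domain alphas).
alpha = cronbach_alpha(responses[retained])

# 3. ROC analysis of the total score against the reference classification.
total = responses[retained].sum(axis=1)
fpr, tpr, thresholds = roc_curve(quality_label, total)
auc = roc_auc_score(quality_label, total)
```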
Results
EFA of the AIM tool revealed seven factors accounting for 63.961% of the total variance. One item was removed, leaving 29 items, all with factor loadings above 0.40. The tool's overall Cronbach's alpha was excellent (0.930); alpha values for six of the seven domains ranged from 0.719 to 0.859, while the ‘Ensuring Fair Assessment’ domain showed a weak value of 0.570. ROC curve analysis yielded a cutoff score of 77 out of 116 for differentiating high from low assessment quality, with domain-level cutoff scores ranging from 5.5 to 18.5. The area under the curve (AUC) was 0.994 for the full tool and ranged from 0.701 to 0.924 across the seven factors.
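The abstract does not state which criterion was used to select the cutoff of 77 from the ROC curve; a common convention is Youden's J statistic (J = sensitivity + specificity − 1), maximized over candidate thresholds. The snippet below, which continues the variables from the earlier sketch, illustrates that convention under that assumption.

```python
# Choosing a cutoff from the ROC curve via Youden's J (sensitivity + specificity - 1).
# Whether this study used Youden's J or another criterion is an assumption;
# fpr, tpr, and thresholds continue from the roc_curve call in the sketch above.
youden_j = tpr - fpr                      # J = TPR - FPR at each candidate threshold
best = np.argmax(youden_j)
cutoff = thresholds[best]                 # score separating high from low quality
print(f"Cutoff = {cutoff:.1f}, "
      f"sensitivity = {tpr[best]:.3f}, specificity = {1 - fpr[best]:.3f}")
```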
Conclusion
The validated AIM tool and its statistically established cutoff score provide a standardized measure for institutions to evaluate and improve their assessment programs. EFA grouped 29 of the 30 items into seven factors, demonstrating good construct validity; the tool showed good reliability by Cronbach's alpha, and ROC curve analysis yielded a cutoff score of 77. The tool can guide faculty development initiatives and support quality assurance processes in medical schools.
Keywords
Cutting Scores; Indexes; Educational Practices; Sampling; Educational Development; Construct Validity; Graduates; Factor Analysis; Stakeholders; Data Collection; Sample Size; College Faculty; Medical Education; Program Evaluation; Medical Evaluation; Factor Structure; Accountability; Faculty Development; Educational Objectives; Instructional Effectiveness; Data Analysis; Educational Assessment; Outcomes of Education; Content Validity