High-stakes testing is a "failed policy initiative" that does not produce gains on other measures of student learning, researchers at Arizona State University in Tempe argue in a recent paper.
"High-Stakes Testing, Uncertainty, and Student Learning," by Audrey L. Amrein and David C. Berliner, appears in last month's edition of the online scholarly journal Education Policy Analysis Archives.
It examines data from 18 states that attach high stakes to their test results. Such states, for example, use test scores to determine promotion from one grade to the next, graduation from high school, rewards for high-performing schools, and consequences for low-performing ones.
To see whether states that adopted high-stakes practices showed gains on other measures of student learning, the researchers conducted a "time-series analysis" in which they looked at scores obtained over two decades from four separate standardized tests. In particular, they examined changes in three college-admissions or -placement tests (the SAT, the ACT, and the Advanced Placement exams) and in the National Assessment of Educational Progress.
The researchers examined changes in SAT scores from 1977 to 2001, in ACT scores from 1980 to 2001, in AP scores from 1995 to 2000, and in NAEP reading and math scores from 1990 to 2000. For each state, they looked at whether those scores rose or fell in the years after the state required the first high school class to pass an exam to graduate, by analyzing short-term, long-term, and overall achievement trends.
"Analyses of these data reveal that if the intended.goal of highstakes-testing policy is to increase student learning, then that policy is not working," the authors conclude. "While a state's high-stakes test may show increased scores, there is little support in these data that such increases are anything but the result of test preparation and/or the exclusion of...