Developing an evidence base for making public health decisions will require using data from evaluation studies with randomized and nonrandomized designs. Assessing individual studies and using studies in quantitative research syntheses require transparent reporting of the study, with sufficient detail and clarity to readily see differences and similarities among studies in the same area. The Consolidated Standards of Reporting Trials (CONSORT) statement provides guidelines for transparent reporting of randomized clinical trials.
We present the initial version of the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) statement. These guidelines emphasize the reporting of theories used and descriptions of intervention and comparison conditions, research design, and methods of adjusting for possible biases in evaluation studies that use nonrandomized designs. (Am J Public Health. 2004;94:361-366)
OVER THE PAST SEVERAL decades, a strong movement toward evidence-based medicine has emerged.1-3 In the context of evidence-based medicine, clinical decisions are based on the best available scientific data rather than on customary practices or the personal beliefs of the health care provider. There is now a parallel movement toward evidence-based public health practices.4,5 This movement is intended to make the best available scientific knowledge the foundation for public health-related decision making.
In the context of evidence-based medicine, the randomized controlled trial (RCT) is usually considered of greatest evidentiary value for assessing the efficacy of interventions. Indeed, the preference for this design is sufficiently strong that when empirical evidence from RCTs is available, "weaker" designs are often considered to be of little or no evidentiary value. In this issue, Victora et al.6 make a strong argument that evidence-based public health will necessarily involve the use of research designs other than RCTs. Most important, they argue that RCTs are often not practical or not ethical for evaluating many public health interventions and discuss methods for drawing causal inferences from nonrandomized evaluation designs ("plausibility" and "adequacy" designs in their terminology).
Also in this issue, Donner and Klar,7 Murray et al.,8 and Varnell et al.9 provide overviews of the benefits and pitfalls of the group-randomized trial, which, in some situations, may be a reasonable alternative to the RCT. There are also a wide variety of nonrandomized evaluation designs that can contribute important data on the efficacy or effectiveness of interventions, such as quasi-experimental designs,10...