ABSTRACT
Doctors, patients, and other decision makers need access to the best available clinical evidence, which can come from systematic reviews, experimental trials, and observational research. Despite methodological challenges, high-quality observational studies have an important role in comparative effectiveness research because they can address issues that are otherwise difficult or impossible to study. In addition, many clinical and policy decisions do not require the very high levels of certainty provided by large, rigorous randomized trials. This paper provides insights and a framework to guide good decision making that involves the full range of high-quality comparative effectiveness research techniques, including observational research.
Comparative effectiveness has been defined as "the conduct and synthesis of research comparing the benefits and harms of different interventions and strategies to prevent, diagnose, treat and monitor health conditions in 'real world' settings."1 The Patient Protection and Affordable Care Act of 2010 established a new center, the Patient-Centered Outcomes Research Institute, for comparative effectiveness research. It also called for the creation of a methods committee to "develop and improve the science and methods of comparative clinical effectiveness research."2
One task of the methods committee will be to ensure that these types of studies use rigorous methodologies. Because a good understanding of comparative effectiveness will depend on a range of research methods, the quality of health care decisions emanating from the studies will reflect the quality of their design, implementation, and reporting.3
Types Of Research Methods
There are a number of methods for conducting comparative effectiveness research. These include systematic reviews of existing evidence and meta-analyses (statistical pooling or other syntheses of the results of multiple studies); experimental studies, such as randomized controlled trials and pragmatic clinical trials that randomly assign interventions; and nonexperimental studies, including retrospective and prospective observational studies that do not assign interventions but leave the choice of treatments up to patients and their health care providers.
A systematic review is a comprehensive review and integration of an existing body of evidence, which may, but does not always, include a meta-analysis. Randomized controlled trials typically enroll a homogeneous population of patients and rigorously monitor their progress. In contrast, pragmatic clinical trials generally enroll a wider range of patients and monitor their progress in routine, real-world clinical practice.4