Risk prediction models have great potential to support clinical decision making and are increasingly incorporated into clinical guidelines.1 Many prediction models have been developed for cardiovascular disease, including the Framingham risk score, SCORE, QRISK, and the Reynolds risk score. With so many prediction models available for similar outcomes or target populations, clinicians have to decide which model to use for their patients. To make this decision they need to know, as a minimum, how well a score predicts disease in people outside the populations used to develop the model (“what is the external validation?”) and which model performs best.2
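The discrimination component of external validation is commonly summarised by the C-statistic: the probability that a randomly chosen person who develops the disease was assigned a higher predicted risk than a randomly chosen person who did not. The sketch below, in Python with entirely synthetic data (no real model or cohort is used), illustrates how this would be computed for a model's predicted risks in an independent validation cohort.

```python
import numpy as np

def c_statistic(risk, outcome):
    """Concordance (C-statistic): probability that a randomly chosen
    case received a higher predicted risk than a randomly chosen
    non-case; ties count as half."""
    risk = np.asarray(risk, dtype=float)
    outcome = np.asarray(outcome, dtype=bool)
    cases = risk[outcome]        # predicted risks of people with the event
    controls = risk[~outcome]    # predicted risks of people without it
    # Compare every case with every control via broadcasting
    greater = (cases[:, None] > controls[None, :]).sum()
    ties = (cases[:, None] == controls[None, :]).sum()
    return (greater + 0.5 * ties) / (len(cases) * len(controls))

# Hypothetical external cohort: simulated predicted risks and events
rng = np.random.default_rng(0)
risk = rng.uniform(0.01, 0.40, size=1000)   # model's predicted risks
outcome = rng.random(1000) < risk           # simulated observed events
print(round(c_statistic(risk, outcome), 2))
```

A value of 0.5 indicates discrimination no better than chance and 1.0 perfect discrimination; a full external validation would also examine calibration, that is, agreement between predicted and observed risks.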
In a linked research study (doi:
Firstly, direct comparisons are few. A plea for more direct comparisons is increasingly heard in the fields of therapeutic intervention and diagnostic research, and it may be echoed in that of prediction model validation studies. Many more prediction models have been developed than have been validated in independent datasets. Moreover, few models developed for similar outcomes and target populations are directly validated and compared.2 The authors of the current study retrieved various validation studies, but only 20 evaluated more than one model, and most of those compared just two models. Thus, readers still need to judge from indirect comparisons which of the available models provides the best predictions in different situations. It would be much more informative if investigators who have (large) datasets available were to validate and compare all existing models together. And it would be even better if they first conducted and reported a systematic review of existing models...