As Bagust and McEwan (1) indicate, diabetes models are being used for a variety of purposes, such as patient management, public health policy, and economic evaluation. Potential “end-users” of the results from modeling studies turn to this technology largely because evidence from empirical studies is lacking and cannot be easily or quickly obtained.

As our consensus statement points out, if the results from modeling studies are to be believed, the end-user must have confidence that the model itself “accurately and reliably” represents the real world in which the subsequent modeling study will take place. That is, do the components of the model (equations/formulas) truly capture what we know about the real world? A user will have confidence that this critical assumption is true when it has been shown that the model reproduces the studies used to construct the equations (i.e., internal validation) and that it can replicate the results of studies that were not used to build it (i.e., external validation).

Without such validations, users should be very skeptical of the results of a modeling study. Indeed, users would be better off knowing that a model has been extensively validated even before collaborating on or commissioning a project.

Bagust and McEwan offer no alternative approach to engender believability, likely because there is none. Downplaying the enormous value of model validation may mislead users into thinking that all models are the same, when they are far from it.

In addition, Bagust and McEwan contend that models are incapable of generating accurate predictions of long-term clinical outcomes and costs. In fact, the very purpose of external validation is to prove that a model is capable of doing just that, as well as to improve the believability of the model’s results when there are no real-life data for comparison. Readers should keep in mind that even the results of well-controlled randomized trials do not precisely depict reality, but they are the best models we have. “Real” patients often do not meet all inclusion and exclusion criteria, or in practice receive all of the health care given in a trial, or visit doctors with the same enthusiasm and commitment as trial participants.

The ultimate purpose of the Guidelines (2) is to encourage clinicians and policy makers to more readily incorporate the results of a model in decision making. If hundreds of thousands of people every day stake their lives on the accuracy and reliability of validated mathematical models (e.g., by flying on an airplane), surely clinicians and policy makers can depend on models as a “level of evidence” well above many others (e.g., expert opinion). But to do so first requires modelers to demonstrate the accuracy and reliability of their approach. The Guidelines help lay the framework toward achieving that confidence.

1. Bagust A, McEwan P: Guidelines for computer modeling of diabetes and its complications (Letter). Diabetes Care 28:500, 2005
2. American Diabetes Association Consensus Panel: Guidelines for computer modeling of diabetes and its complications (Consensus Statement). Diabetes Care 27:2262–2265, 2004