“It seems so simple,” he interrupted. We were at a diabetes quality improvement round-table. I had just enumerated four common steps in quality improvement initiatives: 1) identify a population of interest; 2) determine what parameter you want to improve and estimate the baseline level of that parameter in the identified population; 3) determine threshold, target, and reach goals for that parameter (e.g., “We will try to achieve threshold, target, and reach goals of 10, 20, and 30% improvement, respectively, in cholesterol control for individuals with type 2 diabetes seen in our clinic”); and 4) design and implement “something” that would lead to the desired improvement. “I have to believe it’s more complicated than that; otherwise, we would have improved diabetes care,” he quipped.

Well, it isn’t. And it is. In some ways, quality improvement resembles weight loss: conceptually clear, yet practically problematic. It is clear what needs to be done; it’s just so difficult to execute. In fact, quality improvement is even harder than weight loss: although it is clear what needs to be done at a high level, it is rarely clear what needs to be done at a granular level. And the challenge for quality improvement is much like that for weight loss: the context is not conducive to success.

Before considering the practical challenges of each of the aforementioned steps, it is useful to start with the question of why. Why would a clinician pursue improvement? Quality improvement, currently conceptualized as initiatives to enhance adherence to evidence-based care, is work. And it is work that is not well rewarded. Patients and their referring physicians are more attuned to a physician’s affability, availability, and affordability; judging adherence to evidence-based care is difficult. Indeed, even if patients knew how we currently assess quality, it is hard to believe that my next patient will think more or less of me based on the percentage of my patients who receive a flu shot. Either we need to fundamentally revisit our concept of quality improvement, or we need to restructure our system of care to reward it. At the very least, we should not ask physicians to do more for less.

Yet, one ought not to believe that if only we aligned incentives, changed systems, and permeated doctors’ offices with information technology, then quality improvement would be simple. Consider the four steps above.

First, one has to identify a population. Clearly, this step is infinitely simpler with a robust electronic health record or some information technology platform that affords clinicians the ability to characterize their patients based on a variety of attributes, including disease state. Because this resource exists for only a relatively small part of our population, this step is currently a formidable barrier.
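Where such a platform does exist, the query itself can be modest. Below is a minimal sketch in Python, assuming a hypothetical registry export (registry_export.csv) with patient_id and diagnosis_code columns; the file, the column names, and the diagnosis codes are all illustrative, not drawn from any particular system.

```python
import csv

# Illustrative ICD-10 codes for type 2 diabetes; a real initiative
# would use its own clinically vetted code set.
T2DM_CODES = {"E11.9", "E11.65"}

def identify_population(path):
    """Return the set of patient IDs in a registry export whose
    diagnosis code falls in the type 2 diabetes code set."""
    with open(path, newline="") as f:
        return {
            row["patient_id"]
            for row in csv.DictReader(f)
            if row["diagnosis_code"] in T2DM_CODES
        }

# Hypothetical export file with patient_id and diagnosis_code columns.
population = identify_population("registry_export.csv")
print(f"{len(population)} patients in the population of interest")
```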

Even when this technology is available, there is still the dilemma of deciding on the appropriate population, quite apart from whether the population will be “static” or open to new patients who enter the practice after the initiative starts.

More fundamental is the reality that clinical trials have inclusion and exclusion criteria. If we are to better adhere to the evidence, we should only apply our improvement initiatives to patients who are similar to the participants in the clinical trials. This presents a practical problem that is not easily overcome: many attributes of clinical trial participants that appear in study reports are rarely captured in electronic health records. Just as we need to deliver evidence-based care, we must also be careful not to extrapolate evidence to subgroups for which the data do not apply.

Second, one must identify a parameter of interest. Again, this requires some ability to abstract and synthesize data in a manner that is efficient—please, no manual chart abstractions. Although this step is probably the simplest of the four steps above, it is also the area in which unintended consequences should be considered most carefully. With diabetes in particular, there are many parameters that are considered in addition to glucose control. Whether one should consider one parameter or a more comprehensive set is debatable. In addition, what happens when there is an emphasis on one parameter? Do other parameters suffer? Let us not improve glucose control at the expense of colorectal cancer screening.

Third, one must set goals. In our institution, we classify our goals as threshold, target, and reach levels, referring to progressively more ambitious achievements. We, like nearly everyone I know, review the literature to identify the level of a particular biomarker, such as A1C, that was associated with salutary outcomes. We then set goals for our population at that level. For example, if A1C levels < 7% are deemed desirable and 40% of our population is at that level at baseline, we might set threshold, target, and reach goals of 50, 60, and 70% of the population, respectively.
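The arithmetic in that example is simple enough to express directly. Here is a minimal sketch, assuming goals are set as fixed percentage-point gains over baseline; the 10/20/30-point increments echo the example above and are illustrative, not a standard.

```python
def improvement_goals(baseline_pct, increments=(10, 20, 30)):
    """Threshold, target, and reach goals, expressed as the percentage
    of the population at goal: fixed percentage-point gains over the
    baseline, capped at 100. The increments are illustrative."""
    return tuple(min(baseline_pct + inc, 100) for inc in increments)

# Baseline: 40% of patients have A1C < 7%.
threshold, target, reach = improvement_goals(40)
print(threshold, target, reach)  # 50 60 70
```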

There is a problem here, however. When A1C levels < 7% were associated with salutary outcomes, that value was the mean A1C achieved in the clinical trial population, not a level every participant reached. By taking summary-level estimates from a clinical trial and successfully applying them to individuals in practice, one might drive the A1C of the population of interest below that of the population in the clinical trial. Although this may seem to be merely a hypothetical concern, it is salient in light of recent data suggesting that the relationship between A1C and cardiovascular risk in select groups may be better represented by a J-shaped curve1 than by the linear relationship we have seen in observational studies. The good news, at present, is that the average values in the general population are so high that this is probably still a largely theoretical concern.

Finally, one needs to do “something.” This is hard. “Something” could range from reminders to financial rewards to fundamentally changing the system. Never mind that the “something” is usually not proven to work, but rather involves trial and error. Even when there are data to show that some improvement effort has benefit, there is still the problem of context.

The adage that “once you’ve seen one, you’ve seen one” seems particularly relevant to improvement. Frequently, the success of a program described in an article is a function of factors absent from its methods section: the extent of leadership involvement, participant buy-in, staff enthusiasm, and the numerous local cultural variables that are so important for successful implementation.

Still, there are common strategies to figure out what the “something” should be. Certainly, one should map out the steps that lead to the desired parameter. Certainly, one should measure those steps (frequently called processes). Then, well, one has to get creative. Figure out some way to affect those steps or rearrange them, or “something.”
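Measuring those processes, at least, is tractable: a process measure is simply a rate over the population. Here is a minimal sketch, assuming a hypothetical per-patient record with a last_a1c_date field; the six-month window is illustrative, not a guideline.

```python
from datetime import date, timedelta

def a1c_checked_recently(patients, today=None, window_days=183):
    """Fraction of patients with an A1C test within roughly the last
    six months. `patients` is a list of dicts with a hypothetical
    `last_a1c_date` field (a datetime.date, or None if never tested)."""
    today = today or date.today()
    cutoff = today - timedelta(days=window_days)
    tested = sum(
        1 for p in patients
        if p["last_a1c_date"] is not None and p["last_a1c_date"] >= cutoff
    )
    return tested / len(patients) if patients else 0.0

# Two-patient toy cohort: one tested in March, one never tested.
cohort = [
    {"last_a1c_date": date(2008, 3, 1)},
    {"last_a1c_date": None},
]
print(a1c_checked_recently(cohort, today=date(2008, 6, 1)))  # 0.5
```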

In obesity, one can make declarations such as, “Eat fewer calories than you expend, and you will lose weight.” With quality improvement, several of the steps are clear but no less difficult to execute than those required for weight loss. Add to that an ill-defined, iteratively derived “something,” and you’ve got a challenge.

As Cynthia N. Massey, MSN, ACNP-BC, et al. affirm in their article in this issue (p. 20), quality improvement may seem simple, but it’s hard.

1. ACCORD Study Group: Effects of intensive glucose lowering in type 2 diabetes. N Engl J Med 358:2545-2559, 2008