Objective

To assess whether an electronic health record (EHR)-based diabetes intensification tool can improve the rate of A1C goal attainment among patients with type 2 diabetes and an A1C ≥8%.

Methods

An EHR-based tool was developed and sequentially implemented in a large, integrated health system using a four-phase, stepped-wedge design (single pilot site [phase 1] and then three practice site clusters [phases 2–4]; 3 months/phase), with full implementation during phase 4. A1C outcomes, tool usage, and treatment intensification metrics were compared retrospectively at implementation (IMP) sites versus nonimplementation (non-IMP) sites with sites matched on patient population characteristics using overlap propensity score weighting.

Results

Overall, tool utilization was low among patient encounters at IMP sites (1,122 of 11,549 [9.7%]). During phases 1–3, the proportions of patients achieving the A1C goal (<8%) did not differ significantly between IMP and non-IMP sites at 6 months (range 42.9–46.5%) or 12 months (range 46.5–53.1%). In phase 3, fewer patients at IMP sites than at non-IMP sites achieved the goal at 12 months (46.7 vs. 52.3%, P = 0.02). In phases 1–3, mean changes in A1C from baseline to 6 and 12 months (range −0.88 to −1.08%) did not differ significantly between IMP and non-IMP sites. Times to intensification were similar between IMP and non-IMP sites.

Conclusion

Utilization of a diabetes intensification tool was low and did not influence rates of A1C goal attainment or time to treatment intensification. The low level of tool adoption is itself an important finding highlighting the problem of therapeutic inertia in clinical practice. Testing additional strategies to better incorporate, increase acceptance of, and improve proficiency with EHR-based intensification tools is warranted.

Despite clinical practice guidelines recommending frequent A1C monitoring and aggressive escalation of antidiabetes therapies every 3 months until glycemic targets are reached (1,2), therapy intensification in patients with uncontrolled type 2 diabetes is often inappropriately delayed (3–7). Failure to intensify (or deintensify) therapy when indicated, termed therapeutic inertia, is a strong driver of low A1C goal attainment rates (8). National Health and Nutrition Examination Survey data from the United States revealed that the percentage of people with diabetes achieving an A1C <7% worsened from 2007–2010 (57%) to 2015–2018 (51%) (9). In addition, utilization of glucose-lowering medication did not change throughout the 2010s, an alarming finding given the ever-expanding array of effective treatment options (9). Reasons for therapeutic inertia are certainly multifactorial and may include concerns over side effects (e.g., hypoglycemia) related to therapy intensification, patient comorbidities and preferences, lack of provider awareness of the less-than-optimal quality of care being provided, limited encounter times, and treatment costs (1,2,8,10). Ever-expanding workloads, high patient volume, and increased scrutiny on documentation all pose obstacles to overcoming therapeutic inertia from a provider standpoint. Thus, interventions to overcome therapeutic inertia that increase provider work and/or time during office encounters are unlikely to succeed.

We developed an electronic health record (EHR)-based diabetes intensification tool with the goals of facilitating the identification of patients with an A1C ≥8% and helping clinicians navigate through the intensification process. Herein, we describe the features of the intensification tool, its rollout from pilot study to full implementation across our integrated delivery system, and a robust retrospective analysis of the tool’s uptake and impact on A1C outcomes.

A diabetes intensification tool (hereafter referred to as “the tool”) was developed collaboratively between Cleveland Clinic’s Departments of Endocrinology and Internal Medicine, Cleveland Clinic’s Clinical Systems Office, and Merck & Co. (Kenilworth, NJ) for implementation within the enterprise-wide EHR system of Cleveland Clinic (using EPIC MyPractice). The tool included a best practice alert (BPA) that notified physicians of patients with an A1C ≥8% and prompted consideration of therapy intensification. It also provided assistance with the intensification process via a robust interactive Smartform and order Smartset that offered real-time guidance in determining the best means of intensification for a particular patient. The BPA activated, or “fired,” when a clinician logged into the EHR under an endocrinology, internal medicine, or family medicine department login opened the chart of a patient who was ≥18 years of age with a documented A1C ≥8% within the preceding 6 months. BPA firing was suppressed for patients with type 1 diabetes (International Classification of Diseases, 10th edition [ICD-10], code E10.XX in the active problem list or an encounter diagnosis within the prior 12 months) or documentation of current pregnancy. The BPA displayed the three most recent A1C values and dates, allowing providers to assess whether an elevated A1C was related to an acute event, nonadherence to therapy, or a chronic issue clearly requiring intensification of therapy. The BPA prompted the provider to consider intensification of treatment, or the provider could acknowledge the BPA and choose a reason to defer intensification (“other,” “need to assess,” or “nonadherence”).
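The BPA firing criteria amount to a simple eligibility check. The sketch below illustrates that logic under a hypothetical data model; the field names and dictionary structure are assumptions for illustration, not the actual Epic build.

```python
from datetime import date, timedelta

def bpa_should_fire(patient: dict, today: date) -> bool:
    """Sketch of the BPA firing rules described above.

    Fires for adults with a documented A1C >= 8% in the preceding ~6 months;
    suppressed for type 1 diabetes (ICD-10 E10.*) or current pregnancy.
    The `patient` dictionary is a hypothetical stand-in for EHR data."""
    if patient["age"] < 18:
        return False
    # Any A1C >= 8% resulted within the preceding 6 months (~183 days)?
    window_start = today - timedelta(days=183)
    recent = [value for drawn, value in patient["a1c_results"] if drawn >= window_start]
    if not recent or max(recent) < 8.0:
        return False
    # Suppression rules: type 1 diabetes codes or documented pregnancy.
    if any(code.startswith("E10") for code in patient["dx_codes"]):
        return False
    if patient.get("pregnant", False):
        return False
    return True
```

In practice the suppression check would also distinguish problem-list entries from encounter diagnoses within the prior 12 months, a detail elided here for brevity.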

Providers who clicked to intensify treatment were directed to an interactive Smartform that offered clinical guidance through the intensification process, including a summary of key information that could affect treatment decisions (e.g., the patient’s weight, BMI, estimated glomerular filtration rate [eGFR], current diabetes medications, and medication allergies). Intensification options included specialist consultations (e.g., with endocrinology, nutrition, pharmacy, diabetes education, or the weight management programs) and new type 2 diabetes medication orders (e.g., oral or injectable antidiabetes therapies and insulin). The Smartform allowed providers to tailor new type 2 diabetes medication choices to patient-specific care goals (e.g., cardiovascular risk reduction, A1C lowering, hypoglycemia risk, treatment cost, and weight loss) and guided recommendations accordingly. If more than one care goal was selected, the Smartform would flex in real-time according to an algorithm and recommend the diabetes medication classes that were most likely to achieve all selected goals. The lists and rankings of diabetes medication classes recommended by the Smartform algorithm were based on consolidated input from American Diabetes Association and American Association of Clinical Endocrinologists guidelines, as well as the expert opinions of the clinician experts who helped with the Smartform development. The Smartform also provided a review of patients’ diabetes-related health maintenance status and allowed providers to quickly place orders to satisfy any gaps or overdue testing (e.g., vaccines, urine albumin-to-creatinine assessments, and dilated eye exams).
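The goal-driven “flexing” behavior can be thought of as a set-matching ranking. The sketch below is illustrative only: the class-to-goal pairings are placeholder assumptions, not the consolidated guideline-based algorithm the Smartform actually used.

```python
# Placeholder mapping of medication classes to the care goals they serve.
# These pairings are illustrative assumptions, not the Smartform's algorithm.
CLASS_GOALS = {
    "GLP-1 receptor agonist": {"cv_risk_reduction", "a1c_lowering", "weight_loss"},
    "SGLT2 inhibitor": {"cv_risk_reduction", "weight_loss"},
    "Sulfonylurea": {"a1c_lowering", "low_cost"},
    "Basal insulin": {"a1c_lowering"},
    "DPP-4 inhibitor": {"low_hypoglycemia_risk"},
}

def recommend_classes(selected_goals):
    """Return classes that satisfy every selected goal; if none do,
    fall back to ranking classes by how many goals they cover."""
    goals = set(selected_goals)
    full_matches = [cls for cls, served in CLASS_GOALS.items() if goals <= served]
    if full_matches:
        return full_matches
    return sorted(CLASS_GOALS, key=lambda cls: len(CLASS_GOALS[cls] & goals), reverse=True)
```

For example, selecting both cardiovascular risk reduction and weight loss would surface only the classes serving both goals, which mirrors the real-time narrowing behavior described above.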

Upon completion of the Smartform, providers were directed to the final section of the tool, the Smartset, which was pre-populated with relevant orders for prescriptions, laboratory tests, consultations, and/or follow-up visits based on the selections and choices made in the Smartform. The Smartset included medication details such as eGFR thresholds for dosage adjustments where relevant and U.S. Food and Drug Administration–approved indications, including indications other than glycemic control. For ease of prescribing, prescriptions for antidiabetes therapies were pre-populated for 30- and/or 90-day supplies.

A 3-month pilot phase of the tool was conducted (from 9 July to 9 October 2018) at one practice site that was well balanced in terms of internal medicine, family medicine, and endocrinology providers (physicians, nurse practitioners, and physician assistants). A provider feedback System Usability Scale (SUS) (11) was administered at the end of the pilot to evaluate the tool and was completed by 10 of 17 participating clinicians. The mean SUS score was 58.0 ± 13.8 (on a scale of 0–100, with 100 indicating the best imaginable), which was within a marginal scoring range between not acceptable and acceptable (12). In addition, patients’ perceptions of shared medical decision-making with use of the tool were assessed (n = 82 patients) through completion of the three-question CollaboRATE survey (13), with responses ranging from 0 (“No effort was made”) to 9 (“Every effort was made”); the three questions received ratings of 9 from 87, 93, and 92% of patient respondents. These findings supported the decision to proceed with tool implementation.

A staged rollout of the finalized tool was implemented over three groups (“clusters”) of primary care (n = 52) and endocrinology (n = 17) ambulatory sites using a nonrandomized stepped-wedge design across three 3-month time intervals (phases 2–4 in Figure 1). The rationale for this design included the practicality of deploying the tool in a stepwise manner across our entire organization without withholding the intervention from any sites, while also allowing for a retrospective comparison of outcomes at implementation (IMP) sites versus nonimplementation (non-IMP) sites. At each rollout phase, the tool was promoted at internal medicine, family medicine, and endocrinology staff meetings and by an e-mail announcement. A video of the tool was posted on the health care system Intranet. The pilot site had the added benefit of the principal investigator and tool developers being on-site to aid with awareness and troubleshoot problems.

During the pilot (phase 1) and before the phase 2 rollout, some adaptations were made based on feedback from colleagues at the pilot site. These adaptations largely related to updates/alterations of the flow and elements included in the Smartform and Smartset. The timing (firing) of, location of, and means of clearing the BPA were also modified. During the pilot and phase 2, the BPA fired when providers opened a patient’s chart and had a hard-stop design (i.e., a selection was required to clear the BPA and move on with the visit). During phases 3 and 4, the location of the BPA firing was moved to order entry, and it continued to have hard-stop functionality.

We hypothesized that A1C-related outcomes and treatment intensification efforts in patients with uncontrolled type 2 diabetes would be better among patients in sites exposed to the tool. To study the impact of the tool once full rollout had been achieved, a retrospective analysis was conducted to compare outcomes at IMP versus non-IMP sites.

Methods

The study population included active patients with type 2 diabetes managed by Cleveland Clinic primary care, family medicine, and/or endocrinology staff providers, as well as the providers within these specialties. The Cleveland Clinic Institutional Review Board reviewed and approved all versions of the study protocol and amendments and waived the need for informed consent.

Active patients were defined as those with two active ambulatory visits (including virtual or shared appointments) within the previous 18 months. Otherwise, inclusion criteria mirrored the requirements that would cause the BPA to fire (≥18 years of age and A1C ≥8% within the past 6 months). Patients were excluded for any of the following ICD-10 codes in their problem list: E10 (type 1 diabetes), E08 (diabetes due to an underlying condition), E09 (drug- or chemical-induced diabetes), or E13 (other forms of diabetes), or for documentation of current pregnancy.

Outcomes

The primary outcome of interest was the proportion of patients reaching an A1C goal of <8% at 6 and 12 months (±90 days) after the index date, defined as the first day of the study phase time period for both IMP and non-IMP sites. Secondary outcomes of interest included time from index date to treatment intensification, change in A1C from index date to 6 and 12 months later, proportion of patients with documented hypoglycemic episodes, and proportion of providers using the tool at IMP sites. “Treatment intensification” could include orders for specialty consultations and/or weight management programs and orders for new type 2 diabetes medications. Because of limitations and variability in prescription order entry into the EHR, particularly in how medication dose and frequency of administration were recorded (free-text vs. structured prescriptions), dose adjustments of existing therapies could not be recognized as a form of intensification.

Statistical Methods

Patient and provider characteristics were summarized, with continuous variables reported as medians with interquartile ranges (IQRs) and categorical variables reported using counts and percentages. A1C values were determined for the index date and 6 and 12 months (±90 days) post-index date in each study phase. Laboratory results closest to and prior to the index date were used as baseline measures. If multiple results were available on a given day, the lowest value was used. Hypoglycemia was identified by International Classification of Diseases, 9th edition (ICD-9), or ICD-10 codes and treated as a binary variable; patients with any number of episodes greater than zero were scored equally.
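The baseline-selection rule (result closest to and not after the index date, with same-day ties broken by taking the lowest value) can be expressed compactly. The function below is a sketch assuming results arrive as (date, value) pairs; the data structure is hypothetical.

```python
from datetime import date

def baseline_value(results, index_date):
    """Select the baseline lab per the rule described above: the result
    closest to and not after the index date; if multiple results share
    that day, take the lowest value.

    `results` is a hypothetical list of (drawn_date, value) pairs."""
    prior = [(drawn, value) for drawn, value in results if drawn <= index_date]
    if not prior:
        return None  # no usable baseline measure
    closest_day = max(drawn for drawn, _ in prior)
    return min(value for drawn, value in prior if drawn == closest_day)
```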

For phases 1–3, outcomes between IMP and non-IMP sites were compared using the Mann-Whitney U test (equivalent to Wilcoxon’s rank sum test) for continuous variables and the χ2 test for categorical variables, with significance determined as P <0.05. Phase 4 involved full implementation of the tool at all sites; thus, comparative analyses were not possible, and phase 4 data are provided descriptively for completeness. The χ2 test was also used to test for associations between provider actions in response to the BPA (BPA engagement or selection of a reason for “action deferred”) and study phase, and to compare BPA response patterns between primary care providers (PCPs) and endocrinology specialists.
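For a 2×2 outcome table (e.g., A1C goal met vs. not met at IMP vs. non-IMP sites), Pearson’s χ2 statistic has a closed form. The sketch below hand-rolls that formula with made-up counts, not study data:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]], e.g., rows = IMP/non-IMP sites and
    columns = A1C goal met / not met."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Illustrative counts only (not study data): 140/300 vs. 155/300 meeting goal.
statistic = chi2_2x2(140, 160, 155, 145)
# Compare with the chi-square critical value at df = 1, alpha = 0.05 (3.841).
significant = statistic > 3.841
```

With these illustrative counts the statistic (~1.50) falls below the critical value, so the difference would not be declared significant at P <0.05.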

Overlap propensity score weighting was performed to address potential confounding in comparing outcomes between IMP and non-IMP sites given baseline differences in patients’ demographics, treatment patterns, comorbidities, and site type (endocrinology vs. primary care). This method achieves an exact balance of covariates and, relative to propensity matching, avoids arbitrary match-defining cutoffs (14,15).
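Concretely, overlap weighting assigns each treated (IMP) patient a weight of 1 − e(x) and each control (non-IMP) patient a weight of e(x), where e(x) is the estimated propensity score. A minimal pure-Python sketch with hypothetical inputs:

```python
def overlap_weights(propensity_scores, is_imp):
    """Overlap weights: IMP (treated) patients weighted by 1 - PS,
    non-IMP (control) patients by PS. When the PS is estimated by
    logistic regression, these weights yield exact mean balance on
    the covariates included in the model."""
    return [
        (1.0 - ps) if treated else ps
        for ps, treated in zip(propensity_scores, is_imp)
    ]
```

Patients with propensity scores near 0.5 (roughly equally likely to attend either site type) receive the largest weights, which is why the method emphasizes the clinically comparable region of overlap between the two populations.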

Patient Characteristics and Outcomes

The patient composition of the IMP and non-IMP cohorts differed at each successive phase of tool implementation as a consequence of the stepped-wedge design. In phase 1 (pilot), there were 303 eligible patients at a single IMP site and 4,805 at non-IMP sites; respective patient numbers (IMP vs. non-IMP sites) were 2,466 and 2,993 during phase 2 and 3,709 and 1,490 during phase 3. Phase 4 included 5,071 patients, all at IMP sites. Table 1 presents overlap propensity score–weighted baseline characteristics for the IMP and non-IMP cohorts in phases 1–3. Propensity-weighted characteristics for the phase 1–3 cohorts reflected a baseline A1C ranging from 9.25 to 9.36%, mean age ranging from 59.7 to 61.8 years, and mean number of type 2 diabetes medications ranging from 1.07 to 1.13. Supplementary Table S1 provides raw (unweighted) baseline data for the IMP and non-IMP cohorts in phases 1–3 and for the entire phase 4 cohort.

After overlap propensity score weighting of phase 1–3 data, the proportions of patients achieving the A1C goal (<8%) were not significantly different between IMP and non-IMP sites at 6 months (range 42.9–46.5%) or 12 months (range 46.5–53.1%) after the index date (Table 2 and Supplementary Figure S1), with the only exception being phase 3, in which the proportion of patients achieving the A1C goal at 12 months was lower at IMP sites than at non-IMP sites (46.7 vs. 52.3%, P = 0.02). Similarly, after overlap propensity score weighting, mean changes in A1C from baseline to 6 and 12 months were not significantly different between IMP and non-IMP sites in phases 1–3 (range −0.88 to −1.08%).

Mean time to treatment intensification was not significantly different between the IMP and non-IMP sites during phase 1 (167.6 vs. 194.1 days, P = 0.06), phase 2 (185.4 vs. 192.0 days, P = 0.25), or phase 3 (183.5 vs. 179.7 days, P = 0.51) (Table 2).

Hypoglycemic episodes were infrequent; medians and IQRs were zero for both IMP and non-IMP sites in all study phases. The proportion of patients with at least one hypoglycemic episode was 1.1% at both IMP and non-IMP sites.

At the end of phase 4, when the tool had been fully implemented at all study sites, A1C goal attainment was observed in 1,286 of 2,914 patients with data (44.1%) at 6 months and 596 of 1,348 patients with data (44.2%) at 12 months (Table 2 and Supplementary Figure S1).

Provider Utilization of the Tool

Provider characteristics were summarized for phase 4 when the tool was fully implemented at all sites. A total of 3,873 physicians and 1,198 advanced practice providers were exposed to the tool. The median age of providers was 48.7 years (IQR 40.8–58.0 years), and 54% were female. The median length of provider experience was 7.4 years (IQR 4.2–14.0 years). A majority of tool exposure visits occurred in primary care settings (37.7% in family medicine, 41.8% in internal medicine, and 20.5% in endocrinology sites).

Across all study phases, tool utilization (treatment intensification) at IMP sites was low, at 9.7% (1,122 of 11,549 patient encounters). “Click here to intensify” within the BPA was selected significantly more often during the pilot phase than during phases 2–4 among both PCPs and endocrinology specialists (Table 3). “Nonadherence” was selected as a reason to defer action in larger percentages of primary care (6.1–14.7%) than endocrinology (0–6.3%) encounters. During phases 2–4, the reason for deferring action was most often “Other” among both PCPs (65.2–75.7% of encounters) and endocrinology specialists (53.2–56.3% of encounters). Overall, utilization rates were higher among sites exposed to the tool for longer periods (data not shown). PCPs were significantly more likely than endocrinology providers to defer action (P <0.001).

This study did not find any significant benefit in A1C outcomes or treatment intensification metrics at sites where the BPA tool was implemented compared with sites where it was not. There was no observed improvement in the proportion of patients attaining an A1C <8% at IMP sites, and mean A1C changes from baseline were similar regardless of whether the tool was implemented. Mean A1C values improved by ∼1.0% irrespective of tool implementation and may mostly reflect regression to the mean. Overall, tool utilization was very low among providers (9.7% of encounters), and there was no significant difference in time to intensification between IMP and non-IMP sites.

The low rate of provider engagement with the BPA could be viewed as an obstacle to this study’s ability to robustly evaluate the tool’s usefulness in improving type 2 diabetes outcomes. Yet, the low adoption of the tool is itself an important and concerning outcome of this study, one that highlights the problem of therapeutic inertia in routine clinical practice. Modifications made to the tool based on provider feedback, including changing the timing and location of the BPA and removing the hard-stop functionality, did not improve utilization. Testing additional strategies to increase acceptance of, and proficiency with, an enterprise-wide EHR-based intensification tool is warranted. Utilization might be improved by creating a less complex tool with a streamlined, standardized intervention (i.e., a single opt-in or opt-out order) or by offering a few interventions for a provider to select directly from the BPA at the point of care. Time constraints during office visits and disruption to provider workflows were significant barriers that contributed to the tool’s low utilization; positioning the tool asynchronously to office-based encounters, as a resource to use when an A1C test result becomes available, might enhance uptake. Other strategies may include engaging key institutional stakeholders and providing additional clinician education and engagement. We did observe a trend toward greater tool utilization at sites with relatively longer periods of exposure; thus, tool adoption may simply require more time for widespread uptake.

Poor medication adherence is a well-known barrier to A1C goal attainment in patients with type 2 diabetes (16). “Nonadherence” was selected as the reason to defer treatment intensification in ∼5–15% of encounters during phases 2 and 3 of our study. These deferrals likely reflected appropriate clinical next steps: addressing the nonadherence and resuming the current therapy would often be more appropriate than adding more therapy. Although nonadherence certainly can be a major barrier, communication between provider and patient is crucial to ensure a mutual understanding of treatment goals and patient attitudes about therapy and goal attainment. It is worth noting that recent survey data revealed higher rates of willingness among patients with type 2 diabetes to do more to reach their A1C goal, including making multiple medication changes, than was estimated by physicians in the same survey (17).

The BPA option to “intensify therapy” was selected at a higher frequency during the phase 1 pilot period among both PCPs and endocrinologists, with a more pronounced drop-off in this selection during subsequent study phases in primary care settings. This finding may reflect the on-site presence at the pilot location of an EHR clinical support team and the study principal investigator as project champions, who encouraged tool utilization and could troubleshoot problems and answer questions in real time. It also highlights that, even if tool utilization appears promising during a pilot, it still may not succeed when implementation expands to a larger scale. In addition, for unclear reasons, we observed a higher frequency of selection of “nonadherence” among PCPs versus endocrinology specialists, possibly an indication that patients are more likely to admit nonadherence to their PCP, with whom they may have a long-term relationship, than to a specialist. Rates of BPA deferral for any reason were higher among PCPs, which may reflect the need to address multiple comorbidities and complaints during PCP visits, of which type 2 diabetes is just one; in contrast, appointments with an endocrinology specialist are more likely to be diabetes-focused. Finally, we did not observe a drop-off in BPA use after removal of the hard-stop feature or after moving the BPA to the patient’s storyboard, where it could be used asynchronously to the office visit (data not shown). The latter change was implemented after the end of phase 4 and thus was not reflected in the study data. Of note, providers were more satisfied with this approach: they found the timing of the BPA inconvenient at either chart opening or order entry, the two locations allowed by our EHR, and felt that the hard-stop functionality was disruptive to their workflows.

Although overall utilization of the tool (BPA, Smartform, and Smartset) was rather low, it was surprising that the BPA alone, particularly with its initial hard-stop design, did not prompt providers to intensify treatment in our population of patients with type 2 diabetes and an A1C ≥8%. Hard-stop closures were shown in a prior study to increase the effectiveness of BPAs for addressing blood pressure medication changes (18) but can also have negative consequences such as avoidance of hard-stopped workflows and delays in care (19). In our study, there was a high frequency of deferrals with the reason indicated as “Other” (53–76% of encounters in nonpilot study phases), which in many instances likely reflected nothing more than a means of closing out the BPA without taking further action. Alert fatigue is a real phenomenon, and data suggest that large percentages of electronic medical alerts are overridden by clinicians (20–22).

The number of hypoglycemic episodes among patients in our study was very low and likely the result, in part, of a lack of structured documentation of such events within the EHR; many events occur outside of the health system, are treated at home, and do not present to the hospital for management. Also, most providers simply document hypoglycemic episodes within patients’ free-text progress notes and not in a structured manner using ICD-9/ICD-10 codes in the EHR.

It is possible that treatment decisions were also based on other factors not addressed by the tool, thereby limiting its use. One such factor may have been insurance coverage of the various antidiabetes medications contained within the Smartset. Enhancing clarity regarding a patient’s medication insurance coverage was a feature many providers requested to have incorporated into the tool; perhaps adding “Medications covered by the patient’s insurance” as a care goal selection in the Smartform could improve the tool’s perceived value and increase utilization. Unfortunately, although some third-party apps can assist clinicians with this effort, there are no consistently reliable programs integrated within EHR systems, and those that are available still require users to enter a medication before being prompted to select a covered alternative. Real-time display of which medications a patient’s insurance covers would have been very valuable to providers and may have increased utilization.

Although the lack of benefit of the BPA in our study was disappointing, it was not unexpected. Therapeutic inertia in type 2 diabetes is a well-known but complex phenomenon, with a range of contributing factors that include provider issues such as lack of time, fear of hypoglycemia with insulin therapy, and lack of knowledge of newer medications, but also factors beyond the provider such as system-level barriers (e.g., cost of medications) and patient attitudes about and compliance with therapy, the latter of which can be adversely affected by social determinants of health factors such as food insecurity, low health literacy, and housing insecurity (23,24). A BPA such as the one we evaluated clearly does not address all of these barriers. Furthermore, a recent meta-analysis of studies investigating interventions to overcome therapeutic inertia in individuals with type 2 diabetes found that nonphysician-led interventions (e.g., interventions led by nurses, pharmacists, and diabetes educators) were more successful than physician-led interventions in lowering A1C (25). It is possible that these providers have fewer time constraints during patient visits than do physicians and can therefore focus in-depth on particular clinical issues. We also acknowledge that an EHR-based BPA tool to “flag” and encourage treatment intensification is only one of many single approaches to address this very complicated challenge, and a multifaceted strategy is likely required to achieve success.

Our study had a number of limitations, one of which was the sole reliance on structured EHR data. Data generated from outside institutions or laboratories could not be captured unless they were documented in a structured manner within our EHR. It is likely that many providers entered outside laboratory results in free-text progress notes, which were not captured for the study. We further recognize that A1C goals are individualized in real-world clinical practice, and our target A1C of <8% may not have been relevant for all patients; thus, lack of intensification may have been purposeful and appropriate in some instances. Another major limitation of our study was that dose adjustments of existing therapies could not be recognized as a form of intensification; therefore, intensification via dose modification could not be evaluated, and thus our data may have underestimated intensification efforts. Finally, as the data for this study were generated within a single regional integrated delivery network, our findings may not be broadly generalizable to other locales or populations.

Although our study had numerous limitations, it did also have considerable strengths, including the large number of participants identified and the robust amount of clinical data collected (patient and provider), which allowed for an extensive depiction of the participants. Furthermore, the prospective development and stepped-wedge implementation of the tool, including a pilot program, added to the strength of the study design. We are not aware of any other published data evaluating an EHR-based BPA for intensifying type 2 diabetes therapy.

This study found that implementation of an outpatient EHR-based diabetes intensification tool did not increase the rate of A1C goal attainment at 6 or 12 months, nor did it shorten the time to treatment intensification or the time to goal attainment, in patients with type 2 diabetes and an A1C ≥8%. Additionally, tool utilization was very low overall. Given the time pressures of office-based encounters, these findings suggest that a provider-focused, point-of-care, EHR-based diabetes intensification tool may need to be positioned asynchronously to office visits and better aligned with provider workflows to facilitate adoption. In addition, scheduling visits dedicated solely to diabetes management may result in greater tool utilization and a higher likelihood of therapy intensification, particularly among PCPs, who often must address multiple medical issues during a single time-constrained office encounter. A tool ultra-focused on type 2 diabetes management simply may have been too disruptive to provider workflows to achieve adoption and utilization. Future studies to evaluate different implementation strategies, identify the optimal location of these tools within the EHR, and test different intervention strategies (e.g., an “opt-out” vs. an “opt-in” approach) would help further evaluate the ability of an EHR-based tool to improve A1C goal attainment in patients with uncontrolled type 2 diabetes.

This article contains supplementary material online at https://doi.org/10.2337/figshare.20499105.

Article Information

Acknowledgment

Writing assistance was provided by Sandra Westra, PharmD, of Churchill Communications, in Maplewood, NJ.

Funding

This work was done in collaboration with Merck & Co., Inc., Kenilworth, NJ, the funder of the project.

Duality of Interest

K.M.P. is a consultant for AstraZeneca, Bayer, Corcept, Diasome, Eli Lilly, Merck, Novo Nordisk, and Sanofi; receives research support from Bayer, Merck, Novo Nordisk, and Twinhealth; and receives honoraria as a speaker for AstraZeneca, Corcept, Merck, and Novo Nordisk. S.R. and T.W. are employees of and stock shareholders in Merck & Co. X.J., J.B., M.W.K., and R.S.Z. receive research support from Bayer, Merck, and Novo Nordisk. J.J. and T.R. receive research support from Merck. A.D.M.-H. received research support from Bayer, Merck, and Novo Nordisk and has received funding from the Agency for Healthcare Research and Quality (grant K08HS024128) and grants from the National Heart, Lung, and Blood Institute, the National Institutes of Health’s National Human Genome Research Institute, Novo Nordisk, Merck & Co., and Boehringer Ingelheim Pharmaceuticals for projects outside of the submitted work.

Author Contributions

K.M.P. contributed to the study design and data analysis and wrote the manuscript. S.R. and T.W. contributed to the study design and data analysis and critically reviewed the manuscript, with significant contributions to the discussion. X.J. and T.R. contributed to data analysis. J.J. contributed to data extraction. J.B. critically reviewed and contributed to manuscript revisions and made significant contributions to the discussion. M.W.K. contributed to the analysis design and manuscript revisions and made significant contributions to the discussion. R.S.Z. contributed to the study design and manuscript revisions and made significant contributions to the discussion. A.D.M.-H. contributed to the study design and data analysis, designed the implementation component of the study, and critically reviewed the manuscript, with significant contributions to the discussion. K.M.P. is the guarantor of this work and, as such, had full access to all the data and takes responsibility for the integrity of the data and the accuracy of the data analysis.

References

1. American Diabetes Association. 9. Pharmacologic approaches to glycemic treatment: Standards of Medical Care in Diabetes—2021. Diabetes Care 2021;44(Suppl. 1):S111–S124

2. Garber AJ, Handelsman Y, Grunberger G, et al. Consensus statement by the American Association of Clinical Endocrinologists and American College of Endocrinology on the comprehensive type 2 diabetes management algorithm: 2020 executive summary. Endocr Pract 2020;26:107–139

3. Khunti K, Wolden ML, Thorsted BL, Andersen M, Davies MJ. Clinical inertia in people with type 2 diabetes: a retrospective cohort study of more than 80,000 people. Diabetes Care 2013;36:3411–3417

4. Nichols GA, Koo YH, Shah SN. Delay of insulin addition to oral combination therapy despite inadequate glycemic control: delay of insulin therapy. J Gen Intern Med 2007;22:453–458

5. Fu AZ, Qiu Y, Davies MJ, Radican L, Engel SS. Treatment intensification in patients with type 2 diabetes who failed metformin monotherapy. Diabetes Obes Metab 2011;13:765–769

6. Pantalone KM, Wells BJ, Chagin KM, et al. Intensification of diabetes therapy and time until A1C goal attainment among patients with newly diagnosed type 2 diabetes who fail metformin monotherapy within a large integrated health system. Diabetes Care 2016;39:1527–1534

7. Pantalone KM, Misra-Hebert AD, Hobbs TM, et al. Clinical inertia in type 2 diabetes management: evidence from a large, real-world data set. Diabetes Care 2018;41:e113–e114

8. Karam SL, Dendy J, Polu S, Blonde L. Overview of therapeutic inertia in diabetes: prevalence, causes, and consequences. Diabetes Spectr 2020;33:8–15

9. Fang M, Wang D, Coresh J, Selvin E. Trends in diabetes treatment and control in U.S. adults, 1999–2018. N Engl J Med 2021;384:2219–2228

10. Qaseem A, Wilt TJ, Kansagara D, et al.; Clinical Guidelines Committee of the American College of Physicians. Hemoglobin A1c targets for glycemic control with pharmacologic therapy for nonpregnant adults with type 2 diabetes mellitus: a guidance statement update from the American College of Physicians. Ann Intern Med 2018;168:569–576

11. Brooke J. SUS: a quick and dirty usability scale. In Usability Evaluation in Industry. Jordan PW, Thomas B, Weerdmeester BA, McClelland IL, Eds. Bristol, PA, Taylor & Francis, 1996, p. 189–194

12. Brooke J. SUS: a retrospective. J Usability Stud 2013;8:29–40

13. Elwyn G, Barr PJ, Grande SW, Thompson R, Walsh T, Ozanne EM. Developing CollaboRATE: a fast and frugal patient-reported measure of shared decision making in clinical encounters. Patient Educ Couns 2013;93:102–107

14. Li F, Thomas LE, Li F. Addressing extreme propensity scores via the overlap weights. Am J Epidemiol 2019;188:250–257

15. Thomas LE, Bonow RO, Pencina MJ. Understanding observational treatment comparisons in the setting of coronavirus disease 2019 (COVID-19). JAMA Cardiol 2020;5:988–990

16. Polonsky WH, Henry RR. Poor medication adherence in type 2 diabetes: recognizing the scope of the problem and its key contributors. Patient Prefer Adherence 2016;10:1299–1307

17. Edelman SV, Wood R, Roberts M, Shubrook JH. Patients with type 2 diabetes are willing to do more to overcome therapeutic inertia: results from a double-blind survey. Clin Diabetes 2020;38:222–229

18. Ramirez M, Maranon R, Fu J, et al. Primary care provider adherence to an alert for intensification of diabetes blood pressure medications before and after the addition of a “chart closure” hard stop. J Am Med Inform Assoc 2018;25:1167–1174

19. Powers EM, Shiffman RN, Melnick ER, Hickner A, Sharifi M. Efficacy and unintended consequences of hard-stop alerts in electronic health record systems: a systematic review. J Am Med Inform Assoc 2018;25:1556–1566

20. Shah SN, Amato MG, Garlo KG, Seger DL, Bates DW. Renal medication-related clinical decision support (CDS) alerts and overrides in the inpatient setting following implementation of a commercial electronic health record: implications for designing more effective alerts. J Am Med Inform Assoc 2021;28:1081–1087

21. Isaac T, Weissman JS, Davis RB, et al. Overrides of medication alerts in ambulatory care. Arch Intern Med 2009;169:305–311

22. Straichman YZ, Kurnik D, Matok I, et al. Prescriber response to computerized drug alerts for electronic prescriptions among hospitalized patients. Int J Med Inform 2017;107:70–75

23. Khunti S, Khunti K, Seidu S. Therapeutic inertia in type 2 diabetes: prevalence, causes, consequences and methods to overcome inertia. Ther Adv Endocrinol Metab 2019;10:2042018819844694

24. Wilder ME, Kulie P, Jensen C, et al. The impact of social determinants of health on medication adherence: a systematic review and meta-analysis. J Gen Intern Med 2021;36:1359–1370

25. Powell RE, Zaccardi F, Beebe C, et al. Strategies for overcoming therapeutic inertia in type 2 diabetes: a systematic review and meta-analysis. Diabetes Obes Metab 2021;23:2137–2154