Objective
To assess whether an electronic health record (EHR)-based diabetes intensification tool can improve the rate of A1C goal attainment among patients with type 2 diabetes and an A1C ≥8%.
Methods
An EHR-based tool was developed and sequentially implemented in a large, integrated health system using a four-phase, stepped-wedge design (a single pilot site [phase 1] followed by three practice site clusters [phases 2–4]; 3 months/phase), with full implementation during phase 4. A1C outcomes, tool usage, and treatment intensification metrics were compared retrospectively at implementation (IMP) sites versus nonimplementation (non-IMP) sites, with differences in patient population characteristics between sites addressed using overlap propensity score weighting.
Results
Overall, tool utilization was low among patient encounters at IMP sites (1,122 of 11,549 [9.7%]). During phases 1–3, the proportions of patients achieving the A1C goal (<8%) did not differ significantly between IMP and non-IMP sites at 6 months (range 42.9–46.5%) or 12 months (range 46.5–53.1%). In phase 3, fewer patients at IMP sites than at non-IMP sites achieved the goal at 12 months (46.7 vs. 52.3%, P = 0.02). In phases 1–3, mean changes in A1C from baseline to 6 and 12 months (range −0.88 to −1.08%) did not differ significantly between IMP and non-IMP sites. Times to intensification were similar between IMP and non-IMP sites.
Conclusion
Utilization of the diabetes intensification tool was low and did not influence rates of A1C goal attainment or time to treatment intensification. The low level of tool adoption is itself an important finding, highlighting the problem of therapeutic inertia in clinical practice. Testing additional strategies to better incorporate, increase acceptance of, and improve proficiency with EHR-based intensification tools is warranted.
Despite clinical practice guidelines recommending frequent A1C monitoring and aggressive escalation of antidiabetes therapies every 3 months until glycemic targets are reached (1,2), therapy intensification in patients with uncontrolled type 2 diabetes is often inappropriately delayed (3–7). Failure to intensify (or, when indicated, deintensify) therapy, known as therapeutic inertia, is a strong driver of low A1C goal attainment rates (8). National Health and Nutrition Examination Survey data from the United States revealed that the percentage of people with diabetes achieving an A1C <7% worsened from 2007–2010 (57%) to 2015–2018 (51%) (9). In addition, utilization of glucose-lowering medication did not change throughout the 2010s, an alarming finding given the ever-expanding array of effective treatment options (9). Reasons for therapeutic inertia are certainly multifactorial and may include concerns over side effects (e.g., hypoglycemia) related to therapy intensification, patient comorbidities and preferences, lack of provider awareness of the less-than-optimal quality of care being provided, limited encounter times, and treatment costs (1,2,8,10). From a provider standpoint, ever-expanding workloads, high patient volumes, and increased scrutiny of documentation all add obstacles to overcoming therapeutic inertia. Thus, interventions to overcome therapeutic inertia are unlikely to succeed if they increase provider work and/or time during office encounters.
We developed an electronic health record (EHR)-based diabetes intensification tool with the goals of facilitating the identification of patients with an A1C ≥8% and helping clinicians navigate through the intensification process. Herein, we describe the features of the intensification tool, its rollout from pilot study to full implementation across our integrated delivery system, and a robust retrospective analysis of the tool’s uptake and impact on A1C outcomes.
Tool Development and Rollout
A diabetes intensification tool (hereafter referred to as “the tool”) was developed collaboratively between Cleveland Clinic’s Departments of Endocrinology and Internal Medicine, Cleveland Clinic’s Clinical Systems Office, and Merck & Co. (Kenilworth, NJ) for implementation within the enterprise-wide EHR system of Cleveland Clinic (using EPIC MyPractice). The tool included a best practice alert (BPA) that notified physicians of patients with an A1C ≥8% and prompted consideration of therapy intensification. It also provided assistance with the intensification process via a robust interactive Smartform and order Smartset that offered real-time guidance in determining the best means of intensification for a particular patient. The BPA activated, or “fired,” when a clinician logged into the EHR under an endocrinology, internal medicine, or family medicine department login opened the chart of a patient who was ≥18 years of age with a documented A1C ≥8% within the preceding 6 months. BPA firing was suppressed for patients with type 1 diabetes (International Classification of Diseases, 10th revision [ICD-10], code E10.XX in the active problem list or an encounter diagnosis within the prior 12 months) or documentation of current pregnancy. The BPA displayed the three most recent A1C values and dates, allowing providers to assess whether an elevated A1C was related to an acute event, nonadherence to therapy, or a chronic issue clearly requiring intensification of therapy. The BPA prompted the provider to consider intensification of treatment, or the provider could acknowledge the BPA and choose a reason to defer intensification (“other,” “need to assess,” or “nonadherence”).
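As a rough illustration of the firing rule just described, the following Python sketch encodes the eligibility logic; the function name, data shapes, and simplifications are hypothetical and do not reproduce the production EPIC build.

```python
from datetime import date, timedelta

def bpa_should_fire(department, age_years, pregnant, problem_list_codes,
                    a1c_results, as_of):
    """Hypothetical sketch of the BPA trigger rule (not the production EPIC
    logic). a1c_results is a list of (result_date, value) tuples."""
    if department not in {"endocrinology", "internal medicine", "family medicine"}:
        return False
    if age_years < 18 or pregnant:
        return False
    # Suppress for type 1 diabetes (ICD-10 E10.xx on the active problem list;
    # the real build also checked encounter diagnoses from the prior 12 months)
    if any(code.startswith("E10") for code in problem_list_codes):
        return False
    # Fire only if an A1C >= 8% was documented within the preceding 6 months
    recent = [value for result_date, value in a1c_results
              if as_of - result_date <= timedelta(days=183)]
    return any(value >= 8.0 for value in recent)

# Example: a 62-year-old internal medicine patient with an A1C of 9.1% last month
print(bpa_should_fire("internal medicine", 62, False, ["E11.9"],
                      [(date(2019, 5, 10), 9.1)], as_of=date(2019, 6, 15)))
```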
Providers who clicked to intensify treatment were directed to an interactive Smartform that offered clinical guidance through the intensification process, including a summary of key information that could affect treatment decisions (e.g., the patient’s weight, BMI, estimated glomerular filtration rate [eGFR], current diabetes medications, and medication allergies). Intensification options included specialist consultations (e.g., with endocrinology, nutrition, pharmacy, diabetes education, or weight management programs) and new type 2 diabetes medication orders (e.g., oral or injectable antidiabetes therapies and insulin). The Smartform allowed providers to tailor new type 2 diabetes medication choices to patient-specific care goals (e.g., cardiovascular risk reduction, A1C lowering, hypoglycemia risk, treatment cost, and weight loss) and guided recommendations accordingly. If more than one care goal was selected, the Smartform would flex in real time according to an algorithm and recommend the diabetes medication classes most likely to achieve all selected goals. The lists and rankings of diabetes medication classes recommended by the Smartform algorithm were based on consolidated input from American Diabetes Association and American Association of Clinical Endocrinologists guidelines, as well as the opinions of the clinical experts who helped develop the Smartform. The Smartform also provided a review of patients’ diabetes-related health maintenance status and allowed providers to quickly place orders to satisfy any gaps or overdue testing (e.g., vaccines, urine albumin-to-creatinine assessments, and dilated eye exams).
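To illustrate how a care-goal-driven ranking of this kind can work, here is a minimal Python sketch; the medication classes, goal names, and scores are illustrative placeholders, not the actual Smartform algorithm or its guideline-derived weights.

```python
# Illustrative per-goal scores for a few medication classes (0 = neutral,
# 2 = strongly favorable). These values are placeholders, not the actual
# ADA/AACE-derived rankings used by the Smartform.
GOAL_SCORES = {
    "GLP-1 receptor agonist": {"cv_risk": 2, "a1c": 2, "hypo_risk": 2, "cost": 0, "weight_loss": 2},
    "SGLT2 inhibitor":        {"cv_risk": 2, "a1c": 1, "hypo_risk": 2, "cost": 0, "weight_loss": 1},
    "DPP-4 inhibitor":        {"cv_risk": 0, "a1c": 1, "hypo_risk": 2, "cost": 0, "weight_loss": 0},
    "Sulfonylurea":           {"cv_risk": 0, "a1c": 2, "hypo_risk": 0, "cost": 2, "weight_loss": 0},
    "Basal insulin":          {"cv_risk": 0, "a1c": 2, "hypo_risk": 0, "cost": 1, "weight_loss": 0},
}

def rank_medication_classes(selected_goals):
    """Rank classes by their summed score across all goals the provider
    selected, so the top of the list is most likely to serve every goal."""
    totals = {cls: sum(scores[goal] for goal in selected_goals)
              for cls, scores in GOAL_SCORES.items()}
    return sorted(totals, key=totals.get, reverse=True)

# A provider selecting cardiovascular risk reduction plus weight loss would
# see GLP-1 receptor agonists and SGLT2 inhibitors ranked first.
print(rank_medication_classes(["cv_risk", "weight_loss"]))
```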
Upon completion of the Smartform, providers were directed to the final section of the tool, the Smartset, which was pre-populated with relevant orders for prescriptions, laboratory tests, consultations, and/or follow-up visits based on the selections and choices made in the Smartform. The Smartset included medication details such as eGFR thresholds for dosage adjustments where relevant and U.S. Food and Drug Administration–approved indications, including indications other than glycemic control. For ease of prescribing, prescriptions for antidiabetes therapies were pre-populated for 30- and/or 90-day supplies.
A 3-month pilot phase of the tool was conducted (from 9 July to 9 October 2018) at one practice site that was well balanced in terms of internal medicine, family medicine, and endocrinology providers (physicians, nurse practitioners, and physician assistants). A System Usability Scale (SUS) (11) provider feedback survey was administered at the end of the pilot to evaluate the tool and was completed by 10 of 17 participating clinicians. The mean SUS score was 58.0 ± 13.8 (on a scale of 0–100, with 100 indicating the best imaginable usability), which falls within a marginal range between not acceptable and acceptable (12). In addition, patients’ perceptions of shared medical decision-making with use of the tool were assessed (n = 82 patients) through completion of the three-question CollaboRATE survey (13), with responses ranging from 0 (“No effort was made”) to 9 (“Every effort was made”); the three questions received ratings of 9 from 87, 93, and 92% of patient respondents. These findings supported the decision to proceed with tool implementation.
A staged rollout of the finalized tool was implemented across three groups (“clusters”) of primary care (n = 52) and endocrinology (n = 17) ambulatory sites using a nonrandomized stepped-wedge design over three 3-month time intervals (phases 2–4 in Figure 1). The rationale for this design included the practicality of deploying the tool in a stepwise manner across our entire organization without withholding the intervention from any site, while also allowing for a retrospective comparison of outcomes at implementation (IMP) sites versus nonimplementation (non-IMP) sites. At each rollout phase, the tool was promoted at internal medicine, family medicine, and endocrinology staff meetings and by an e-mail announcement. A video demonstrating the tool was posted on the health care system intranet. The pilot site had the added benefit of the principal investigator and tool developers being on-site to aid with awareness and troubleshoot problems.
During the pilot (phase 1) and before the phase 2 rollout, some adaptations were made based on feedback from colleagues at the pilot site. These adaptations largely related to updates and alterations of the flow and elements included in the Smartform and Smartset. The timing of BPA firing, its location, and the means of clearing it were also modified. During the pilot and phase 2, the BPA fired when providers opened a patient’s chart and had a hard-stop design (i.e., a selection was required to clear the BPA and move on with the visit). During phases 3 and 4, the location of BPA firing was moved to order entry, and it continued to have hard-stop functionality.
Retrospective Analysis
We hypothesized that A1C-related outcomes and treatment intensification efforts in patients with uncontrolled type 2 diabetes would be better among patients in sites exposed to the tool. To study the impact of the tool once full rollout had been achieved, a retrospective analysis was conducted to compare outcomes at IMP versus non-IMP sites.
Methods
The study population included active patients with type 2 diabetes managed by Cleveland Clinic primary care, family medicine, and/or endocrinology staff providers, as well as the providers within these specialties. The Cleveland Clinic Institutional Review Board reviewed and approved all versions of the study protocol and amendments and waived the need for informed consent.
Active patients were defined as those with two ambulatory visits (including virtual or shared appointments) within the previous 18 months. Otherwise, inclusion criteria were the same requirements that would cause the BPA to fire (≥18 years of age and A1C ≥8% within the past 6 months). Patients were excluded if their problem list contained any of the following ICD-10 codes: E10 (type 1 diabetes), E08 (diabetes due to an underlying condition), E09 (drug- or chemical-induced diabetes), or E13 (other specified diabetes), or if there was documentation of current pregnancy.
Outcomes
The primary outcome of interest was the proportion of patients reaching an A1C goal of <8% at 6 and 12 months (±90 days) after the index date, defined as the first day of the study phase time period for both IMP and non-IMP sites. Secondary outcomes of interest included time from index date to treatment intensification, change in A1C from index date to 6 and 12 months later, proportion of patients with documented hypoglycemic episodes, and proportion of providers using the tool at IMP sites. “Treatment intensification” could include orders for specialty consultations and/or weight management programs and orders for new type 2 diabetes medications. Because of limitations and variability in prescription order entry into the EHR, particularly for medication dose and frequency of administration (free-text vs. structured prescriptions), dose adjustments of existing therapies could not be recognized as a form of intensification.
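A minimal sketch of how the primary outcome could be computed from longitudinal A1C data is shown below; the function, column names, and the assumption that the A1C value closest to the 6- or 12-month target date within the ±90-day window is the one evaluated against the goal are ours, not the study's analysis code.

```python
import pandas as pd

def a1c_goal_attainment(a1c_long, index_date, months, window_days=90, goal=8.0):
    """Share of patients whose A1C closest to (index_date + months), within
    +/- window_days, is below the goal. Assumes columns patient_id,
    result_date (datetime), and a1c."""
    target = pd.Timestamp(index_date) + pd.DateOffset(months=months)
    df = a1c_long.copy()
    df["gap"] = (df["result_date"] - target).abs()
    in_window = df[df["gap"] <= pd.Timedelta(days=window_days)]
    closest = (in_window.sort_values("gap")
               .groupby("patient_id", as_index=False).first())
    return (closest["a1c"] < goal).mean()

# Toy example: two patients; only patient 2 is at goal near the 6-month mark
labs = pd.DataFrame({
    "patient_id": [1, 1, 2],
    "result_date": pd.to_datetime(["2019-09-20", "2020-01-05", "2019-10-01"]),
    "a1c": [9.2, 7.6, 7.4],
})
print(a1c_goal_attainment(labs, "2019-03-26", months=6))  # 0.5
```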
Statistical Methods
Patient and provider characteristics were summarized, with continuous variables reported as medians with interquartile ranges (IQRs) and categorical variables reported as counts and percentages. A1C values were determined for the index date and 6 and 12 months (±90 days) post-index date in each study phase. Laboratory results closest to and prior to the index date were used as baseline measures. If multiple results were available on a given day, the lowest value was used. Hypoglycemia was identified by International Classification of Diseases, 9th revision [ICD-9], or ICD-10 codes and treated as a categorical variable, with patients who had any number of episodes greater than zero scored equally.
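For instance, the baseline laboratory rule (closest result prior to the index date, lowest value if several fall on the same day) could be expressed as follows; this is a sketch with assumed column names rather than the actual analysis code.

```python
import pandas as pd

def baseline_lab(labs, index_date):
    """Baseline value per patient: the result closest to and prior to the
    index date; if multiple results share that date, take the lowest.
    Assumes columns patient_id, result_date (datetime), and value."""
    prior = labs[labs["result_date"] <= pd.Timestamp(index_date)]
    latest_day = prior.groupby("patient_id")["result_date"].transform("max")
    on_latest_day = prior[prior["result_date"] == latest_day]
    return on_latest_day.groupby("patient_id")["value"].min()

# Toy example: patient 1 has two results on the same (most recent) day
labs = pd.DataFrame({
    "patient_id": [1, 1, 1, 2],
    "result_date": pd.to_datetime(["2019-03-01", "2019-03-01", "2018-11-15", "2019-02-10"]),
    "value": [9.4, 9.1, 10.2, 8.6],
})
print(baseline_lab(labs, "2019-03-26"))  # patient 1 -> 9.1, patient 2 -> 8.6
```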
For phases 1–3, outcomes at IMP and non-IMP sites were compared using the Mann-Whitney U test (equivalent to Wilcoxon’s rank sum test) for continuous variables and the χ2 test for categorical variables. Phase 4 involved full implementation of the tool at all sites; thus, comparative analyses were not possible, and phase 4 data are provided descriptively for completeness. The χ2 test was also used to test for associations between provider actions in response to the BPA (BPA engagement or selection of a reason for “action deferred”) and study phase, and to compare BPA response patterns between primary care providers (PCPs) and endocrinology specialists. For all comparisons, significance was defined as P <0.05.
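The corresponding tests are available in SciPy; the snippet below is a generic sketch with unweighted toy numbers (not study data, which were analyzed on propensity-weighted outcomes) showing how continuous and categorical comparisons would be made at the 0.05 level.

```python
from scipy.stats import mannwhitneyu, chi2_contingency

# Toy data standing in for A1C changes at IMP vs. non-IMP sites
# (illustrative numbers only, not study data).
a1c_change_imp = [-1.2, -0.8, -1.5, -0.3, -0.9]
a1c_change_nonimp = [-1.0, -0.7, -1.1, -0.4, -0.6]

# Continuous outcome: Mann-Whitney U test (equivalent to Wilcoxon rank sum)
_, p_continuous = mannwhitneyu(a1c_change_imp, a1c_change_nonimp,
                               alternative="two-sided")

# Categorical outcome: chi-square test on a 2x2 table of
# [goal attained, goal not attained] counts for IMP vs. non-IMP sites
_, p_categorical, _, _ = chi2_contingency([[440, 560], [523, 477]])

print(p_continuous < 0.05, p_categorical < 0.05)  # significance at alpha = 0.05
```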
Overlap propensity score weighting was performed to address potential confounding in comparing outcomes between IMP and non-IMP sites, given baseline differences in patients’ demographics, treatment patterns, comorbidities, and site type (endocrinology vs. primary care). This method achieves exact balance on the mean of each covariate included in the propensity model and, relative to propensity matching, avoids arbitrary match-defining cutoffs (14,15).
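A minimal sketch of overlap weighting, assuming a logistic regression propensity model for IMP-site membership, is shown below; the covariate set, model, and synthetic data are assumptions and do not reproduce the study's analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def overlap_weights(X, treated):
    """Overlap propensity score weights: fit e(x) = P(treated | x), then give
    treated units weight 1 - e(x) and control units weight e(x). For covariates
    included in a maximum-likelihood logistic model, the weighted group means
    are balanced (a known property of overlap weights)."""
    propensity = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    return np.where(treated == 1, 1.0 - propensity, propensity)

# Toy example: 200 synthetic patients, 2 covariates, assignment depends on x1
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
treated = (X[:, 0] + rng.normal(scale=1.5, size=200) > 0).astype(int)
w = overlap_weights(X, treated)
for group in (1, 0):
    mask = treated == group
    print(group, np.average(X[mask, 0], weights=w[mask]).round(3))
# The two weighted means of x1 printed above agree closely, illustrating balance.
```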
Results
Patient Characteristics and Outcomes
Because of the stepped-wedge implementation design, the patient composition of the IMP and non-IMP cohorts differed at each phase of tool implementation. In phase 1 (pilot), there were 303 eligible patients at a single IMP site and 4,805 at non-IMP sites; respective patient numbers (IMP vs. non-IMP sites) were 2,466 and 2,993 during phase 2 and 3,709 and 1,490 during phase 3. Phase 4 included 5,071 patients, all at IMP sites. Table 1 presents overlap propensity score–weighted baseline characteristics for the IMP and non-IMP cohorts in phases 1–3. Propensity-weighted characteristics for the phase 1–3 cohorts reflected a baseline A1C ranging from 9.25 to 9.36%, mean age ranging from 59.7 to 61.8 years, and mean number of type 2 diabetes medications ranging from 1.07 to 1.13. Supplementary Table S1 provides raw (unweighted) baseline data for IMP and non-IMP cohorts in phases 1–3 and for the entire phase 4 cohort.
Table 1. Overlap propensity score–weighted baseline characteristics of the IMP and non-IMP cohorts by study phase

| Characteristic | Phase 1 (Pilot) | Phase 2 | Phase 3 |
| --- | --- | --- | --- |
| Patients, n (IMP sites / non-IMP sites) | 303 / 4,805 | 2,466 / 2,993 | 3,709 / 1,490 |
| Mean baseline A1C, % | 9.3 | 9.4 | 9.3 |
| Mean age, years | 61.8 | 60.1 | 59.7 |
| Male sex, % | 48.8 | 53.4 | 52.3 |
| Race, %: Caucasian / Black / other | 78.0 / 16.2 / 5.8 | 72.3 / 19.5 / 8.2 | 75.3 / 15.9 / 8.8 |
| Smoking, %: current smoker / former smoker / nonsmoker | 10.1 / 38.9 / 51.0 | 14.8 / 36.7 / 48.5 | 15.4 / 38.3 / 46.3 |
| Mean BMI, kg/m2 | 34.7 | 34.3 | 34.4 |
| Insurance, %: commercial / Medicaid / Medicare / other | 39.0 / 7.2 / 40.2 / 13.6 | 35.8 / 11.6 / 42.8 / 9.8 | 37.6 / 12.3 / 42.0 / 8.1 |
| Site type, %: endocrinology / family medicine / internal medicine | 37.5 / 36.9 / 25.6 | 16.7 / 40.0 / 43.3 | 8.2 / 46.3 / 45.5 |
| Mean baseline type 2 diabetes medications, n | 1.07 | 1.13 | 1.11 |
| Mean eGFR, mL/min/1.73 m2 | 65.3 | 77.1 | 71.7 |
| Mean Charlson comorbidity score | 4.9 | 4.8 | 4.8 |
| Obesity, % | 75.1 | 69.6 | 71.2 |
| Chronic kidney disease, % | 40.8 | 44.1 | 42.4 |
| Dementia, % | 14.3 | 11.9 | 13.1 |
| Psychiatric conditions, % | 61.5 | 62.8 | 65.4 |
| Substance abuse, % | 6.7 | 10.2 | 11.6 |
| Depression, % | 49.3 | 48.4 | 50.8 |
| Cognitive impairment, % | 8.3 | 8.6 | 8.3 |
| Cardiovascular disease, % | 20.9 | 18.1 | 18.7 |
| Congestive heart failure, % | 5.9 | 5.4 | 4.8 |
Values represent either weighted proportions (for categorical variables) or weighted means (for numeric variables). Phase 1 took place from 9 July to 9 October 2018 (BPA availability continued until the start of phase 2). Phase 2 took place from 26 March to 24 June 2019. Phase 3 took place from 25 June to 23 September 2019.
Overlap propensity score weighting was performed on the IMP and non-IMP cohorts for each study phase; baseline characteristics were, by design, balanced between the cohorts. Thus, the baseline propensity score–weighted characteristics for the IMP and non-IMP cohorts were identical. Raw, unweighted baseline cohort characteristics are provided in Supplementary Table S1.
Phase 4 (from 25 September to 25 December 2019) involved full implementation of the tool at all sites; thus, propensity score weighting was not conducted on those data. Raw (unweighted) baseline data for phase 4 are provided in Supplementary Table S1.
After overlap propensity score weighting of phase 1–3 data, the proportions of patients achieving the A1C goal (<8%) were not significantly different between IMP and non-IMP sites at 6 months (range 42.9–46.5%) or 12 months (range 46.5–53.1%) after the index date (Table 2 and Supplementary Figure S1), with the only exception being phase 3, in which the proportion of patients achieving the A1C goal at 12 months was lower at IMP sites than at non-IMP sites (46.7 vs. 52.3%, P = 0.02). Similarly, after overlap propensity score weighting, mean changes in A1C from baseline to 6 and 12 months were not significantly different between IMP and non-IMP sites in phases 1–3 (range −0.88 to −1.08%).
Table 2. A1C outcomes and treatment intensification at non-IMP and IMP sites by study phase

| Outcome | Phase 1 (Pilot): non-IMP | Phase 1 (Pilot): IMP | P | Phase 2: non-IMP | Phase 2: IMP | P | Phase 3: non-IMP | Phase 3: IMP | P | Phase 4: all IMP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Patients, n | 4,805 | 303 | | 2,993 | 2,466 | | 1,490 | 3,709 | | 5,071 |
| A1C goal of <8% attained at 6 months, % of patients | 45.9 | 45.0 | 0.795 | 46.3 | 46.5 | 0.866 | 42.9 | 44.0 | 0.568 | 44.1* |
| A1C goal of <8% attained at 12 months, % of patients | 53.1 | 50.3 | 0.402 | 48.4 | 46.5 | 0.318 | 52.3 | 46.7 | 0.02 | 44.2† |
| Mean A1C change at 6 months, % | −0.88 | −1.03 | 0.177 | −1.01 | −1.07 | 0.287 | −0.92 | −0.89 | 0.726 | −0.80 |
| Mean A1C change at 12 months, % | −1.08 | −1.08 | 0.984 | −1.01 | −0.90 | 0.111 | −1.08 | −0.92 | 0.093 | −0.75 |
| Mean time to intensification, days | 194.1 | 167.6 | 0.063 | 192.0 | 185.4 | 0.247 | 179.7 | 183.5 | 0.514 | 141.5 |
| Intensification (any), % of patients | 88.7 | 91.8 | 0.101 | 80.7 | 81.0 | 0.75 | 74.9 | 76.6 | 0.218 | 63.6 |
| New antihyperglycemic medication order, % | 33.0 | 40.4 | 0.01 | 29.2 | 31.6 | 0.061 | 39.3 | 39.0 | 0.854 | 43.0 |
| New insulin order, % | 18.3 | 19.4 | 0.652 | 14.3 | 13.4 | 0.364 | 16.8 | 16.9 | 0.967 | 21.5 |
| Endocrinology consultation, % | 2.2 | 1.6 | 0.530 | 3.3 | 2.6 | 0.158 | 3.2 | 4.1 | 0.135 | 3.8 |
| Nutrition consultation, % | 1.8 | 1.0 | 0.358 | 1.7 | 2.0 | 0.528 | 2.6 | 2.6 | 0.977 | 2.5 |
| Pharmacy consultation, % | 3.2 | 2.0 | 0.268 | 4.9 | 3.1 | 0.002 | 7.6 | 5.0 | 0.001 | 7.0 |
| Diabetes education consultation, % | 2.7 | 0.0 | <0.001 | 3.0 | 1.6 | 0.001 | 2.5 | 3.6 | 0.074 | 3.9 |
| Weight management consultation, % | 3.8 | 2.7 | 0.306 | 4.9 | 4.4 | 0.461 | 5.4 | 6.4 | 0.213 | 6.0 |
Data for phases 1–3 represent overlap propensity score–weighted outcomes and are reported as weighted proportions for categorical variables or weighted means for numeric variables; data for phase 4 (full implementation of the BPA tool) are unweighted medians and proportions. Bold type indicates statistical significance.
*For phase 4, 6-month A1C data were missing for 2,157 patients (43%). The percentage reported here is based on a denominator of patients who did have data (n = 2,914).
†For phase 4, 12-month A1C data were missing for 3,723 patients (73%). The percentage reported here is based on a denominator of patients who did have data (n = 1,348).
Mean time to treatment intensification was not significantly different between the IMP and non-IMP sites during phase 1 (167.6 vs. 194.1 days, P = 0.06), phase 2 (185.4 vs. 192.0 days, P = 0.25), or phase 3 (183.5 vs. 179.7 days, P = 0.51) (Table 2).
Hypoglycemic episodes were infrequent; the median (and IQR) number of episodes was zero for both IMP and non-IMP sites in all study phases. The proportion of patients with at least one hypoglycemic episode was 1.1% at both IMP and non-IMP sites.
At the end of phase 4, when the tool had been fully implemented at all study sites, A1C goal attainment was observed in 1,286 of 2,914 patients with data (44.1%) at 6 months and 596 of 1,348 patients with data (44.2%) at 12 months (Table 2 and Supplementary Figure S1).
Provider Utilization of the Tool
Provider characteristics were summarized for phase 4, when the tool was fully implemented at all sites. A total of 3,873 physicians and 1,198 advanced practice providers were exposed to the tool. The median age of providers was 48.7 years (IQR 40.8–58.0 years), and 54% were female. The median length of provider experience was 7.4 years (IQR 4.2–14.0 years). A majority of tool exposure visits occurred in primary care settings (37.7% in family medicine and 41.8% in internal medicine, vs. 20.5% in endocrinology sites).
Across all study phases, tool utilization (treatment intensification) at IMP sites was low, at 9.7% (1,122 of 11,549 patient encounters). During the pilot phase, “Click here to intensify” within the BPA was selected at a significantly higher frequency than during phases 2–4 among both PCPs and endocrinology specialists (Table 3). “Nonadherence” was selected as a reason to defer action in larger percentages of primary care (6.1–14.7%) than endocrinology (0–6.3%) encounters. During phases 2–4, the reason for deferring action was most often “Other” among both PCPs (65.2–75.7% of encounters) and endocrinology specialists (53.2–56.3% of encounters). Overall, there was a pattern of higher utilization rates among sites with longer exposure to the tool (data not shown). PCPs were significantly more likely than endocrinology providers to defer action (P <0.001).
Table 3. Provider responses to the BPA by encounter type and study phase

| Encounters | Phase 1 (Pilot) | Phase 2 | Phase 3 | Phase 4 | P |
| --- | --- | --- | --- | --- | --- |
| Endocrinology encounters, n | 151 | 953 | 1,088 | 1,222 | |
| BPA engaged*, % | 96.7 | 40.5 | 30.2 | 30.0 | <0.001 |
| Action deferred: need to assess†, % | 0.0 | 0.0 | 8.9 | 10.1 | <0.001 |
| Action deferred: other, % | 3.3 | 53.2 | 56.3 | 54.4 | <0.001 |
| Action deferred: nonadherence, % | 0.0 | 6.3 | 4.5 | 5.4 | <0.001 |
| Primary care encounters, n | 244 | 2,482 | 3,156 | 4,672 | |
| BPA engaged*, % | 52.9 | 9.5 | 5.4 | 6.0 | <0.001 |
| Action deferred: need to assess†, % | 0.0 | 0.1 | 17.9 | 18.1 | <0.001 |
| Action deferred: other, % | 41.0 | 75.7 | 65.2 | 66.0 | <0.001 |
| Action deferred: nonadherence, % | 6.1 | 14.7 | 11.4 | 9.9 | <0.001 |
Data are % unless otherwise noted. Phase 1 took place from 9 July to 9 October 2018. (BPA availability continued until the start of phase 2.) Phase 2 took place from 26 March to 24 June 2019. Phase 3 took place from 25 June to 23 September 2019. Phase 4 took place from 25 September to 25 December 2019.
*“Click here to intensify.”
†Option was added after the pilot phase.
Discussion
This study did not find any significant benefit of the BPA tool on A1C outcomes or treatment intensification metrics at sites where the tool was implemented compared with sites where it was not. There was no observed improvement in the proportion of patients attaining an A1C <8% at IMP sites, and mean A1C changes from baseline were similar regardless of whether the tool was implemented. Mean A1C values improved by ∼1.0% irrespective of tool implementation, which may mostly reflect regression to the mean. Overall, tool utilization was very low among providers (9.7% of encounters), and there was no significant difference in time to intensification between IMP and non-IMP sites.
The low rate of provider engagement with the BPA could be viewed as an obstacle to this study’s ability to robustly evaluate the tool’s usefulness in improving type 2 diabetes outcomes. Yet, the low adoption of the tool is itself a very important and concerning outcome of this study, one that highlights the problem of therapeutic inertia in routine clinical practice. Modifications made to the tool based on provider feedback, including changing the timing and location of BPA firing and removing the hard-stop functionality, did not improve utilization. Testing additional strategies to increase acceptance of, and proficiency with, an enterprise-wide EHR-based intensification tool is warranted. Utilization might be improved by creating a less complex tool with a streamlined, standardized intervention (i.e., a single opt-in or opt-out order) or by providing a few interventions that a provider can select directly from the BPA at the point of care. Time constraints during office visits and disruption to provider workflows were noted to be significant barriers contributing to the tool’s low utilization; utilization might therefore be enhanced by positioning the tool asynchronously to office-based encounters, as a resource to use when an A1C test result becomes available. Other strategies may include engaging key institutional stakeholders and providing additional clinician education and engagement. We did observe a trend toward greater tool utilization at sites with relatively longer periods of exposure; thus, tool adoption may simply require more time for widespread uptake.
Poor medication adherence is a well-known barrier to A1C goal attainment in patients with type 2 diabetes (16). “Nonadherence” was selected as the reason to defer treatment intensification in ∼5–15% of encounters during phases 2 and 3 of our study. These deferrals likely reflected appropriate clinical next steps, namely addressing the nonadherence and continuing the current regimen rather than adding more therapy. Although nonadherence certainly can be a major barrier, communication between provider and patient is crucial to ensure a mutual understanding of treatment goals and patient attitudes about therapy and goal attainment. It is worth noting that recent survey data revealed that patients with type 2 diabetes were more willing to do more to reach their A1C goal, including making multiple medication changes, than physicians in the same survey estimated (17).
The BPA option to “intensify therapy” was selected at a higher frequency during the phase 1 pilot period among both PCPs and endocrinologists, with a more pronounced drop-off in this selection during subsequent study phases in primary care settings. This finding may reflect the on-site presence at the pilot location of an EHR clinical support team and of the study principal investigator, who served as a project champion, encouraged tool utilization, and could help with troubleshooting problems and answering questions in real time. It also highlights that, even if tool utilization appears promising during a pilot, it may not hold up when implementation is expanded on a larger scale. In addition, for unclear reasons, we observed a higher frequency of selection of “nonadherence” among PCPs versus endocrinology specialists, possibly an indication that patients are more likely to admit nonadherence to their PCP, with whom they may have a long-term relationship, than to a specialist. Rates of BPA deferral for any reason were higher among PCPs, which may reflect the need to address multiple comorbidities and complaints during PCP visits, of which type 2 diabetes is just one; in contrast, appointments with an endocrinology specialist are more likely to be diabetes-focused. Finally, we did not observe a drop-off in BPA use after removal of the hard-stop feature or after moving the BPA to the patient’s storyboard, where it could be used asynchronously to the office visit (data not shown). The latter change was implemented after the end of phase 4 and thus was not reflected in the study data. Of note, providers were more satisfied with this approach: they had found the timing of the BPA inconvenient at either chart opening or order entry, the two locations allowed by our EHR, and felt that the hard-stop functionality was disruptive to their workflows.
Although overall utilization of the tool (BPA, Smartform, and Smartset) was rather low, it was surprising that the BPA alone, particularly with its initial hard-stop design, did not prompt providers to intensify treatment in our population of patients with type 2 diabetes and an A1C ≥8%. Hard-stop closures were shown in a prior study to increase the effectiveness of BPAs for addressing blood pressure medication changes (18) but can also have negative consequences such as avoidance of the hard-stopped workflow and delays in care (19). In our study, there was a high frequency of deferrals with the reason indicated as “Other” (53–76% of encounters in nonpilot study phases), which, in many instances, likely reflected nothing more than a means of closing out the BPA without taking further action. Alert fatigue is a real phenomenon, and data suggest that large percentages of electronic medical alerts are overridden by clinicians (20–22).
The number of hypoglycemic episodes among patients in our study was very low, likely in part because of a lack of structured documentation of such events within the EHR; many events occur outside of the health system, are treated at home, and never come to the hospital for management. Also, most providers simply document hypoglycemic episodes within patients’ free-text progress notes rather than in a structured manner using ICD-9/ICD-10 codes in the EHR.
It is possible that treatment decisions were also based on other factors not addressed by the tool, thereby limiting its use. One such factor may have been insurance coverage of the various antidiabetes medications contained within the Smartset. Perhaps adding “Medications covered by the patient’s insurance” as a care goal selection in the tool’s Smartform could improve the perceived value of the tool and increase utilization; enhancing clarity regarding a patient’s medication insurance coverage was a feature many providers requested to have incorporated into the tool. Unfortunately, although some third-party apps can assist clinicians with this effort, there are no consistently reliable programs integrated within EHR systems, and those that are available still require users to enter a medication before being prompted to select an alternative medication that is covered by the plan. If the tool could have displayed in real time which medications were covered by a patient’s insurance, utilization might have increased, as this feature would have been very valuable to providers.
Although the lack of benefit of the BPA in our study was disappointing, it was not unexpected. Therapeutic inertia in type 2 diabetes is a well-known but complex phenomenon, with a range of contributing factors that include provider issues such as lack of time, fear of hypoglycemia with insulin therapy, and lack of knowledge of newer medications, but also factors beyond the provider, such as system-level barriers (e.g., cost of medications) and patient attitudes about and adherence to therapy, the latter of which can be adversely affected by social determinants of health such as food insecurity, low health literacy, and housing insecurity (23,24). A BPA such as the one we evaluated clearly does not address all of these barriers. Furthermore, a recent meta-analysis of studies investigating interventions to overcome therapeutic inertia in individuals with type 2 diabetes found that nonphysician-led interventions (e.g., interventions led by nurses, pharmacists, and diabetes educators) were more successful than physician-led interventions in lowering A1C (25). It is possible that these providers have fewer time constraints during patient visits than do physicians and can therefore focus in depth on particular clinical issues. We also acknowledge that an EHR-based BPA tool to “flag” and encourage treatment intensification is only a single approach to this very complicated challenge, and a multifaceted strategy is likely required to achieve success.
Our study had a number of limitations, one of which was the sole reliance on structured EHR data. Data generated from outside institutions or laboratories could not be captured unless they were documented in a structured manner within our EHR, and it is likely that many providers entered outside laboratory results in free-text progress notes, which were not captured for the study. We further recognize that A1C goals are individualized in real-world clinical practice, and our target A1C of <8% may not have been relevant for all patients; thus, lack of intensification may have been purposeful and appropriate in some instances. Another major limitation was that dose adjustments of existing therapies could not be recognized as a form of intensification; therefore, intensification via dose modification could not be evaluated, and our data may have underestimated intensification efforts. Finally, because the data for this study were generated within a single regional integrated delivery network, our findings may not be broadly generalizable to other locales or populations.
Although our study had numerous limitations, it also had considerable strengths, including the large number of participants identified and the breadth of patient and provider clinical data collected, which allowed for an extensive characterization of the participants. Furthermore, the prospective development and stepped-wedge implementation of the tool, including a pilot program, added to the strength of the study design. We are not aware of any other published data evaluating an EHR-based BPA for intensifying type 2 diabetes therapy.
Conclusion
This study found that implementation of an outpatient EHR-based diabetes intensification tool did not improve the rate of A1C goal attainment at 6 or 12 months, the time to treatment intensification, or the time to goal attainment in patients with type 2 diabetes and an A1C ≥8%. Additionally, tool utilization was very low overall. These findings suggest that, given the time pressures of office-based encounters, a provider-focused, point-of-care, EHR-based diabetes intensification tool intended to increase the likelihood of A1C goal attainment may need to be positioned asynchronously to office visits and further optimized for provider workflows to facilitate adoption. In addition, scheduling visits dedicated solely to diabetes management may result in greater tool utilization and a higher likelihood of therapy intensification, particularly among PCPs, who often must address multiple medical issues during a single time-constrained office encounter. A tool ultra-focused on type 2 diabetes management simply may have been too disruptive to provider workflows to facilitate adoption and utilization. Future studies to evaluate different implementation strategies, identify the optimal location of these tools within the EHR, and test different intervention strategies (e.g., an “opt-out” vs. an “opt-in” approach) would be helpful in further evaluating the ability of an EHR-based type 2 diabetes intensification tool to improve A1C goal attainment in patients with uncontrolled type 2 diabetes.
This article contains supplementary material online at https://doi.org/10.2337/figshare.20499105.
Article Information
Acknowledgment
Writing assistance was provided by Sandra Westra, PharmD, of Churchill Communications, in Maplewood, NJ.
Funding
This work was done in collaboration with Merck & Co., Inc., Kenilworth, NJ, the funder of the project.
Duality of Interest
K.M.P. is a consultant for AstraZeneca, Bayer, Corcept, Diasome, Eli Lilly, Merck, Novo Nordisk, and Sanofi; receives research support from Bayer, Merck, Novo Nordisk, and Twinhealth; and receives honoraria as a speaker for AstraZeneca, Corcept, Merck, and Novo Nordisk. S.R. and T.W. are employees of and stock shareholders in Merck & Co. X.J., J.B., M.W.K., and R.S.Z. receive research support from Bayer, Merck, and Novo Nordisk. J.J. and T.R. receive research support from Merck. A.D.M.-H. received research support from Bayer, Merck, and Novo Nordisk and has received funding from the Agency for Healthcare Research and Quality (grant K08HS024128) and grants from the National Heart, Lung, and Blood Institute, the National Institutes of Health’s National Human Genome Research Institute, Novo Nordisk, Merck & Co., and Boehringer Ingelheim Pharmaceuticals for projects outside of the submitted work.
Author Contributions
K.M.P. contributed to the study design and data analysis and wrote the manuscript. S.R. and T.W. contributed to study design and data analysis and critically reviewed the manuscript, with significant contributions to the discussion. X.J. and T.R. contributed to data analysis. J.J. contributed to data extraction. J.B. critically reviewed and contributed to manuscript revisions and made significant contributions to the discussion. M.W.K. contributed to analysis design and to manuscript revisions and made significant contributions to the discussion. R.S.Z. contributed to the study design and manuscript revisions and made significant contributions to the discussion. A.D.M.-H. contributed to the study design and data analysis, designed the implementation component of the study, and critically reviewed the manuscript, with significant contributions to the discussion. K.M.P. is the guarantor of this work and, as such, had full access to all the data and takes responsibility for the integrity of the data and the accuracy of the data analysis.