Time in range (TIR) is gaining ground as an outcome measure in type 1 diabetes trials. However, inclusion of TIR raises several issues for trial design. In this article, the authors begin by defining TIR and describing the current international consensus around TIR targets. They then expand on evidence for the validity of TIR as a primary clinical trial outcome before concluding with some practical, ethical, and logistical implications.
The last decade has seen the emergence of novel technologies that enable continuous glucose monitoring (CGM) (1). Although CGM has been available since the early 2000s, uptake has accelerated rapidly in the past 5 years (1,2). There are three types of CGM: retrospective CGM, from which the user is masked to the data; intermittently scanned or “flash” CGM; and real-time CGM (3). These technologies have shed new light on day-to-day glycemia for people with type 1 diabetes, their care teams, and researchers. Although A1C is an excellent measure of “average” glucose control, CGM methodologies reveal previously inaccessible high and low glycemic excursions and the times of day at which they are most likely to occur (3). The potential utility of this information is clear.
CGM increasingly has become the standard of care for people with type 1 diabetes in developed health care systems (4). For example, data from the T1D Exchange clinic network from a U.S. population demonstrated a greater than fourfold increase in CGM use between 2010–2012 and 2016–2018, from 7 to 30%—a trend that appears to be continuing (5,6). Notably, in an evaluation of adults with type 1 diabetes in one area of Scotland between 2018 and 2020, 77% of insulin pump users and ∼40% of those not using a pump were using CGM technology (7). Indeed, the technology itself has been demonstrated to improve glycemic control in people with type 1 diabetes (8–10).
A1C, which provides an estimate of average blood glucose concentration during the previous 3 months—the life span of red blood cells (11)—was the basis for the key trials that shaped current international guidelines on glycemic control targets for reducing long-term micro- and macrovascular complications (12,13). However, although A1C has been used to demonstrate the efficacy of interventions, it is evident that it is not a comprehensive marker of overall glycemia. For example, it does not provide information on important parameters of day-to-day glycemic control, and it masks significant issues such as clinically important hypoglycemia (14). Following this logic, two people living with type 1 diabetes who have the same A1C may have significantly different glycemic profiles, with marked differences in hypoglycemia and glycemic variability that are not captured by A1C alone (11). Furthermore, the reliability of A1C measurement can be adversely affected by a variety of conditions (11,15–17). We argue, as is increasingly the consensus, that the additional information provided by new CGM technologies provides relevant, reliable outcome measures for people living with type 1 diabetes that should be captured in the outcome measures of clinical trials.
An important consideration is how CGM data can meaningfully be summarized in a clinical and research context (1). The multiplicity of glycemic parameters generated risks type I statistical error unless primary and secondary outcome measures are clearly prespecified. Several novel parameters of glycemic control have emerged in the last decade (e.g., mean glucose, glycemic variability, and time in range [TIR]). However, it is increasingly recognized that the proportion of time spent within an optimal glycemic range, now known as TIR, may offer a more holistic reflection of the lived experience of people with diabetes. TIR denotes the percentage of time each day spent at optimal glycemic targets. The concept is growing in importance clinically and in a research context. For example, in a survey of 3,461 people with type 1 diabetes, TIR was identified as the second most important factor (after dietary choices) affecting daily life (18). CGM also affords the measurement of additional meaningful glycemic parameters such as mean glucose and glycemic variability (as measured by SD or coefficient of variation) that may be appropriate secondary outcome measures, whether for observational or interventional (clinical trial) research (1,19). Additionally, estimated A1C is now offered by some CGM systems, with increasing accuracy dependent on the volume of CGM data available (19). We will now examine the emerging evidence supporting TIR as an outcome measure in clinical research in people with diabetes.
TIR: Definitions and Consensus
Before any parameter can be effectively used both in clinical practice and in a research setting, consensus is necessary on how to interpret these data: specifically, which values should be targeted and whether these values correlate with a reduction in complications. Arguably, the first step required to validate TIR as an outcome measure is to examine its correlation with the current standard. Based on the current international consensus of A1C of 7.0% (53 mmol/mol) as the optimal target for most people with diabetes (4,20), Beck et al. (21) analyzed four clinical trials, including a total of 545 adults with type 1 diabetes. Examining the A1C values associated with TIR (defined as 70–180 mg/dL [3.9–10.0 mmol/L]), they found that 70% TIR reflected an A1C of 7% (53 mmol/mol), and 50% TIR reflected an A1C of 8% (64 mmol/mol). Importantly, this analysis found that a 10% increase in TIR correlated with an approximate A1C reduction of 0.5% (5 mmol/mol). Vigersky and McMahon (22) subsequently evaluated data from 18 studies with paired A1C and TIR data using linear regression analysis and Pearson correlation. This work demonstrated a strong correlation between TIR and A1C.
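As a back-of-envelope illustration, the linear relationship reported by Beck et al. can be sketched in a few lines of Python. The function name and the assumption of strict linearity are ours; the anchor points (70% TIR ≈ 7% A1C, with each 10-point TIR change corresponding to roughly 0.5 A1C percentage points) come from the analysis described above:

```python
def estimate_a1c_from_tir(tir_percent: float) -> float:
    """Rough A1C (%) estimate from TIR (%).

    Linear approximation anchored at 70% TIR ~ 7.0% A1C, with each
    10-point change in TIR corresponding to ~0.5 A1C percentage
    points (Beck et al.). Illustrative only -- not a clinical tool.
    """
    return 7.0 - 0.05 * (tir_percent - 70.0)


# Reproduces the reported anchor points:
# 70% TIR -> 7.0% A1C; 50% TIR -> 8.0% A1C
```

This sketch is only as good as the linearity assumption; at the extremes of TIR the relationship between TIR and A1C is less well characterized.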
Despite a 2017 joint statement by the European Association for the Study of Diabetes (EASD) (23) and the American Diabetes Association (ADA) suggesting optimal values for TIR, there was no formally adopted international consensus until 2019 (2). Battelino et al. (2) convened a multinational expert group to provide consensus on optimal targets for CGM and TIR (Figure 1). It was agreed that optimal glycemic values considered to be in range should be 70–180 mg/dL (3.9–10.0 mmol/L), with glycemic measurements below this range considered as time below range (TBR) and values above this range considered to be time above range (TAR). Within these categories, TBR and TAR were further split into two numerical categories each to quantify the degree of hypoglycemia and hyperglycemia, respectively. TBR is categorized into level 1 hypoglycemia (54–69 mg/dL [3.0–3.8 mmol/L]) and level 2 hypoglycemia (<54 mg/dL [<3.0 mmol/L]). TAR is categorized as level 1 hyperglycemia (181–250 mg/dL [10.1–13.9 mmol/L]) and level 2 hyperglycemia (>250 mg/dL [>13.9 mmol/L]).
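The consensus thresholds above lend themselves to a simple classification routine. The sketch below (function names and the data format are illustrative assumptions) assigns CGM readings in mg/dL to the five consensus categories of Battelino et al. and summarizes a set of readings as percentages:

```python
from collections import Counter


def classify_reading(mg_dl: float) -> str:
    """Assign one CGM reading (mg/dL) to a 2019 consensus category."""
    if mg_dl < 54:
        return "TBR level 2"   # <54 mg/dL (<3.0 mmol/L)
    if mg_dl < 70:
        return "TBR level 1"   # 54-69 mg/dL (3.0-3.8 mmol/L)
    if mg_dl <= 180:
        return "TIR"           # 70-180 mg/dL (3.9-10.0 mmol/L)
    if mg_dl <= 250:
        return "TAR level 1"   # 181-250 mg/dL (10.1-13.9 mmol/L)
    return "TAR level 2"       # >250 mg/dL (>13.9 mmol/L)


def summarize(readings):
    """Percentage of readings falling in each consensus category."""
    counts = Counter(classify_reading(r) for r in readings)
    n = len(readings)
    return {category: 100 * count / n for category, count in counts.items()}
```

For example, `summarize([100, 100, 100, 200])` attributes 75% of the readings to TIR and 25% to TAR level 1.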
A TIR >70% was agreed to offer optimal protection from long-term complications for most people living with diabetes. In individuals who are at higher risk of hypoglycemia, 50% TIR has been adopted as a target, accepting the need for more TAR to avoid harmful hypoglycemia. Conversely, in pregnancy, a stricter in-range target has been specified for optimal glycemic control (63–140 mg/dL [3.5–7.8 mmol/L]), with a goal of 70% TIR. Of note, TIR should not be used alone in pregnancy, as it is possible to have an acceptable TIR without meeting accepted fasting and preprandial target values (2).
The disabling consequences of hypoglycemia are well known to people with diabetes and those who care for them. People with type 1 diabetes can expect, on average, to have an episode of severe hypoglycemia (requiring external assistance) approximately once every 10 years (24). Hypoglycemia is known to activate the sympathetic nervous system and trigger platelet aggregation (25). Furthermore, the psychological impact of hypoglycemia cannot be minimized, as fear of hypoglycemia limits optimization of insulin therapy. CGM provides a meaningful target to reduce hypoglycemia. The EASD/ADA joint statement recommended a TBR target of <4%, with <1% at TBR level 2, as optimal for most people. In higher-risk groups, <1% TBR at any level was recommended. Expressed in temporal terms, this consensus means that most people with type 1 diabetes should aim for >16 hours, 48 minutes per day in the range of 70–180 mg/dL; <1 hour at values <70 mg/dL; and <15 minutes at values <54 mg/dL (2).
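Converting these percentage targets into time per day is simple arithmetic: each target is a fraction of the 1,440 minutes in a day. A minimal sketch (the helper name is ours):

```python
def percent_of_day(pct: float) -> str:
    """Convert a percentage of a 24-hour day into hours and minutes."""
    total_min = round(pct / 100 * 24 * 60)  # 1,440 minutes per day
    return f"{total_min // 60} h {total_min % 60} min"


# The >70% TIR target corresponds to >16 h 48 min per day in range,
# and the <4% TBR target to just under 1 hour per day below range.
```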
The concept of TIR is thus becoming a part of the everyday management of diabetes as CGM becomes a standard of care. TIR targets were included in the ADA’s Standards of Medical Care in Diabetes—2020 (26). Clinical trials and research must have outcomes that are meaningful for people with diabetes and clinicians. It therefore follows that, as the availability and use of CGM increases in clinical practice, research outcomes must follow. However, before TIR is widely accepted as an additional trial end point or even as an alternative to A1C, its validity in predicting changes in clinical outcomes as a result of an intervention must be considered.
Validity of TIR as a Glycemic Parameter in Clinical Trials
As we have seen, TIR is closely correlated with A1C and may add important additional information. However, it was not available at the time of the key outcome trials that demonstrated reduction of complications with intensive glucose control. Hence, there has been a need to “reverse-validate” it against long-term clinical outcomes. In a retrospective analysis of 1,440 participants in the Diabetes Control and Complications Trial (DCCT), Beck et al. (27) used quarterly seven-point daily blood glucose measurements to derive an estimate of TIR, using the percentage of readings between 70 and 180 mg/dL. TIR was strongly correlated with the development of microvascular complications; fewer than 5% of those with TIR >70% developed retinopathy, compared with 58% of those who had a TIR of <10%. Indeed, the incidence of retinopathy rose steadily as TIR fell. For each 10% reduction in TIR, the hazard ratio for retinopathy rose by 64%. Moreover, incident microalbuminuria was reported in 27% of those with <10% TIR compared with 3% in those with >70% TIR. It should be noted, however, that in the DCCT, even in the intensive control group, TIR was only 52% (vs. 31% in the conventional group), implying the potential for even greater microvascular risk reduction with >70% TIR (3,25). Importantly, the TIR validation data in this study were taken from seven-point glycemic profiles on only 4 days each year, arguably diminishing the ability to reflect longer-term TIR (27). However, in an observational study involving 3,262 people with type 2 diabetes, there was a negative correlation between CGM-derived TIR and retinopathy severity such that lower TIR predicted diabetic eye disease, even after controlling for potentially confounding variables (28).
Few data exist on the association between TIR and macrovascular complications. Hirsch et al. (29) recently suggested that analysis of data from the DCCT’s Epidemiology of Diabetes Interventions and Complications follow-up study would likely extend to macrovascular outcomes, but such an analysis has not yet been published.
Work in cohorts of individuals with type 2 diabetes has examined the association between TIR and carotid intima-media thickness (CIMT) as a surrogate marker of atherosclerosis and hence macrovascular complications (30). In a cross-sectional analysis of 2,215 people with type 2 diabetes, prevalence of an abnormally high mean CIMT (≥1.0 mm) was higher in those with lower TIR (P <0.001). After adjustment for cardiovascular risk factors, each 10% increase in TIR was associated with a 6.4% reduction in risk of having an abnormal CIMT (30). The association between TIR and cardiovascular mortality in people with type 2 diabetes has also been investigated in 6,225 adults (mean age 61.7 years) with type 2 diabetes followed up over 6.9 years after baseline assessment of TIR by CGM. Participants were stratified into TIR categories of >85, 71–85, 51–70, and ≤50%. A Cox proportional hazards model was then used to determine the association with all-cause and cardiovascular mortality. The multivariable-adjusted hazard ratios associated with different levels of TIR were 1.00, 1.23 (95% CI 0.98–1.55), 1.30 (95% CI 1.04–1.63), and 1.83 (95% CI 1.48–2.28), respectively, for all-cause mortality (P for trend <0.001) and 1.00, 1.35 (95% CI 0.90–2.04), 1.47 (95% CI 0.99–2.19), and 1.85 (95% CI 1.25–2.72) for cardiovascular mortality (P for trend 0.015) (31). These data provide supporting evidence of the association between TIR and macrovascular complications, albeit limited to type 2 diabetes.
How Should CGM Be Used in Clinical Trials?
Using TIR as a research outcome measure may seem to have obvious advantages, but it also raises practical, logistical, and even ethical considerations for study design. If TIR is to be used in place of, or in addition to, A1C, the implications of this change for trial design must be considered. Having addressed the validity of TIR as a predictor of long-term complications, we now examine some important practical considerations when adopting TIR as a trial outcome measure.
Ethics
Access to CGM technology has been shown to be associated with improved clinical outcomes. Thus, as CGM increasingly becomes a standard of care, it can no longer be ethically acceptable to use masked CGM purely to acquire outcome data (i.e., to record events such as nocturnal hypoglycemia that could be potentially harmful to participants, with no means of alerting them in real time). Instead, access to CGM can be offered during the trial with an additional masked CGM worn to record the study’s TIR end points. Appropriate software with this functionality is available from some manufacturers. It should be noted as well that, as CGM increasingly becomes a standard of care, it will become increasingly less acceptable to provide access during a trial but then to withdraw access at trial cessation.
Use of CGM in Clinical Practice
Several previous trials have demonstrated A1C reduction with CGM, including the Juvenile Diabetes Research Foundation Continuous Glucose Monitoring study (−6 mmol/mol [0.6%]) (8), the GOLD study (−5 mmol/mol [0.5%]) (9), and the DIAMOND study (−7 mmol/mol [0.6%]) (10). Most recently, in an observational study by Tyndall et al. (32) in a Scottish population, intermittently scanned CGM was associated with a 7 mmol/mol (0.6%) reduction in A1C. Thus, as CGM becomes increasingly prevalent in clinical practice, trials that aim to have findings relevant (generalizable) to clinical practice must include both CGM users and (at least for now) nonusers of CGM. If a trial aims to evaluate the effect of an intervention on TIR and real-time (i.e., unmasked) CGM data are available to participants, there will likely be a study, or “Hawthorne,” effect in the subset introduced to CGM specifically for the trial. In addition, some participants may begin to use CGM during a trial. One solution is to offer CGM to all participants 3 months before randomization and then stratify randomization by pre-trial CGM use. This strategy will also allow equilibration of A1C, if retained as a study outcome. This process will, of course, affect study timelines and therefore costs.
Sample Size
Including CGM users with baseline glycemic control closer to target will require trial sample sizes to be inflated if a clinically significant difference from the intervention is to be detected. This requirement has implications for study costs and the feasibility of recruitment.
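To make this inflation concrete, a standard normal-approximation formula for a two-arm parallel comparison of mean TIR can be sketched as follows. The SD and detectable-difference values in the usage example are illustrative assumptions, not figures from any trial:

```python
import math


def n_per_arm(sd: float, delta: float,
              z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate per-arm sample size for a two-arm parallel trial
    comparing mean TIR between groups.

    Normal approximation with two-sided alpha = 0.05 (z_alpha = 1.96)
    and 80% power (z_beta = 0.84) by default. `sd` is the assumed SD
    of TIR and `delta` the smallest clinically important difference,
    both in TIR percentage points. Illustrative sketch only.
    """
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2)


# Assuming an SD of 15 TIR percentage points, detecting a 5-point
# difference needs n_per_arm(15, 5) -> 142 participants per arm;
# shrinking the detectable difference to 3 points (as might be needed
# in a well-controlled CGM-using population) inflates this substantially.
```

The design choice here is the classic power trade-off: the closer the baseline population is to target, the smaller the plausible `delta`, and the sample size grows with the inverse square of `delta`.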
Recruitment
Using an additional masked CGM to record TIR as a trial outcome may have recruitment implications in that individuals already using CGM may not wish to carry a second receiver device. However, some CGM models in development have sufficient onboard memory to obviate the need for a receiver device, and these systems would be an ideal solution to this issue.
Data Standards
Another important question relates to the quantity of data required to provide clinically valid results. There must be an agreed-upon acceptable minimum data level to ensure that clinically valid TIR results are obtained. An international consensus has yet to be reached regarding the minimum data requirement for studies with TIR as an outcome. However, there are already examples of good practice. InRange (33) is a trial comparing insulin glargine 300 units/mL with insulin degludec 100 units/mL, with TIR as the primary outcome (measured using the Dexcom G6 Pro CGM system; Dexcom, San Diego, CA) during weeks 0–3 and 9–12 (Figure 2). Because a minimum of 10 days of data are required, participants are asked to wear a CGM sensor for 20 days to ensure that enough usable data are captured. For any given day’s data to be considered valid, >80% of each 24-hour period must be included, with no more than two consecutive hours of missing data. In the absence of an international consensus, this strategy seems to be a reasonable approach.
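These InRange-style validity rules can be expressed as a short programmatic check. The sketch below assumes, purely as an illustration, that one day of CGM data arrives as a sequence of booleans, one per five-minute sensor epoch (288 slots per day); the function name and data model are ours:

```python
def day_is_valid(slot_present, slot_minutes=5,
                 min_coverage=0.8, max_gap_hours=2):
    """Check one day of CGM data against InRange-style criteria:
    >80% of the 24-hour period captured, and no more than two
    consecutive hours of missing data.

    `slot_present` is a sequence of booleans, one per sensor epoch
    (e.g., 288 five-minute slots for a full day).
    """
    # Criterion 1: strictly more than 80% of the day captured.
    coverage = sum(slot_present) / len(slot_present)
    if coverage <= min_coverage:
        return False

    # Criterion 2: no gap longer than two consecutive hours.
    max_gap_slots = max_gap_hours * 60 // slot_minutes
    gap = longest = 0
    for present in slot_present:
        gap = 0 if present else gap + 1
        longest = max(longest, gap)
    return longest <= max_gap_slots
```

For example, a day with a single three-hour sensor gap fails the check even though its overall coverage (87.5%) exceeds the 80% threshold, which is exactly the situation the consecutive-gap rule is designed to catch.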
Data Collection and Storage
The practicalities of data collection and storage must also be considered. Who monitors the quality of CGM data to ensure that it meets minimum standards in real time during the study period? This question is important because failing to collect CGM data in an acceptable way will have significant implications for trial end points. Data Monitoring Committees will require participant-level access sufficient to ensure ongoing data quality. Unlike a single blood test such as A1C, CGM generates a large volume of data, and how best to store and back up these data is a further consideration for trials using TIR as an outcome. The security and storage of large volumes of trial participant data are important considerations that must be addressed by principal investigators. This problem is theoretically solvable with secure cloud-based electronic storage but does have financial implications, including for archiving, for which data compression may be an option (34,35).
Conclusion
TIR is emerging as a clinically and scientifically meaningful tool in both clinical practice and research. CGM technology and the plethora of data generated offer an opportunity for clinicians and research trialists to gain insights into the glycemic effects of novel treatments and technologies in a way never possible with A1C. However, this volume of participant data also brings challenges. Although there is as yet no definitive international consensus on minimum TIR data collection, and although practical, financial, and recruitment challenges for trial design remain, these issues can be overcome. It may be time for an international consensus on CGM use in clinical trials to address the questions raised in this article. As clinical practice marches toward the acceptance of TIR as a more meaningful measure for people with diabetes, trial design must go beyond adaptation to change and seize the opportunities offered by TIR as a trial outcome.
Article Information
Duality of Interest
The authors have received nonfinancial support (donation of study materials for an investigator-sponsored trial) from Dexcom. J.R.P. has received personal fees from Abbott. No other potential conflicts of interest relevant to this article were reported.
Author Contributions
J.G.T. wrote the first draft of the manuscript. J.G.B. and J.R.P. reviewed and edited the manuscript and contributed to subsequent drafts. J.R.P. is the guarantor of this work and, as such, had full overview of the manuscript and takes responsibility for the integrity of the manuscript and the accuracy of the analysis.
Prior Presentation
This article was based on a lecture given by J.R.P. at the American Diabetes Association’s virtual 80th Scientific Sessions in June 2020.