OBJECTIVE

To evaluate the ability of trained nonphysician retinal imagers to perform diabetic retinopathy (DR) evaluation at the time of ultrawide field (UWF) retinal imaging in a teleophthalmology program.

RESEARCH DESIGN AND METHODS

Clinic patients with diabetes received Joslin Vision Network protocol retinal imaging as part of their standard medical care. Retinal imagers evaluated UWF images for referable DR at the time of image capture. Training of the imagers included 4 h of standardized didactic lectures and 12 h of guided image review. Real-time evaluations were compared with standard masked gradings performed at a centralized reading center.

RESULTS

A total of 3,978 eyes of 1,989 consecutive patients were imaged and evaluated. By reading center evaluation, 3,769 eyes (94.7%) were gradable for DR, 1,376 (36.5%) had DR, and 580 (15.3%) had referable DR. Compared with the reading center, real-time image evaluation had a sensitivity and specificity for identifying more than minimal DR of 0.95 (95% CI 0.94–0.97) and 0.84 (0.82–0.85), respectively, and 0.99 (0.97–1.00) and 0.76 (0.75–0.78), respectively, for detecting referable DR. Only three patients with referable DR were not identified by imager evaluation.

CONCLUSIONS

Point-of-care evaluation of UWF images by nonphysician imagers following standardized acquisition and evaluation protocols within an established teleophthalmology program had good sensitivity and specificity for detection of DR and for identification of referable retinal disease. With immediate image evaluation, <0.1% of patients with referable DR would be missed, reading center image grading burden would be reduced by 60%, and patient feedback would be expedited.

Patients with diabetes require lifelong ophthalmic care that generally includes an annual retinal evaluation (1). Given the rapidly growing population affected by diabetes, it is estimated that within 20 years >2.7 million eyes worldwide will need to be evaluated each day just to meet this need (2). This enormous task is unlikely to be accomplished by the current approaches of diabetes eye care programs. Despite more than a decade of research, no real-time, fully automated retinal image analysis system is currently in active clinical use (2). Until such capability exists, other approaches that improve the efficiency of current programs without sacrificing accuracy are urgently needed.

The primary care or endocrinology clinic is perhaps the ideal environment for the retinal imaging of patients with diabetes. Previous studies have established that people with diabetes will routinely present to their primary care physician or endocrinologist but that only 60% will adhere to the minimum recommended annual eye care evaluation guidelines (3). Studies suggest that ophthalmic counseling during endocrinology visits may improve diabetes control (4). Furthermore, at present, commercially available retinal imaging devices require trained personnel for recording patient information and acquiring retinal images.

Studies have shown that vision-threatening diabetic retinopathy (DR) can be accurately assessed at the time of nonmydriatic 30° and 45° retinal photography by trained and certified retinal imagers in a DR telemedicine program (5). At the Joslin Diabetes Center, all telemedicine retinal photography transitioned to ultrawide field imaging in 2012. Ultrawide field retinal imaging (UWFI) uses scanning laser ophthalmoscopy and an ellipsoidal mirror to allow the nonmydriatic acquisition of high-resolution retinal images that encompass more than double the total retinal surface area captured with mydriatic standard 30° 7-field Early Treatment Diabetic Retinopathy Study (ETDRS) photography. The image acquisition time with UWFI is less than one-half that of ETDRS photography, even when the time for dilation is excluded (6). Independent groups have demonstrated that UWFI compares favorably with dilated retinal examination by a retina specialist, ETDRS photography, and various retinal imaging protocols for determining DR severity (6,7). Implementation of UWFI reduces the ungradable rate by 71% for DR and 56% for diabetic macular edema (DME) to <3% and <4%, respectively (8). Compared with traditional nonmydriatic fundus photography, image evaluation time has been reduced by 28% (8). Furthermore, UWFI has identified additional peripheral retinal lesions suggesting a more severe level of DR in approximately 10% of eyes (8,9). Given these potential efficiency benefits, we prospectively evaluated the ability of trained imagers to perform ultrawide field point-of-care DR evaluations to determine the presence or absence of DR or referable DR at the time of imaging.

The Joslin Vision Network (JVN) is an ocular telehealth program for DR developed at the Joslin Diabetes Center and has been in continuous clinical operation since 1998 (10,11). The JVN follows a strict protocol for acquiring nonmydriatic retinal images and for grading and reporting the level of DR. Patients with diabetes have nonmydriatic JVN imaging as part of their routine physical examinations at the Adult Diabetes Clinic of the Joslin Diabetes Center. Images are acquired using nonmydriatic ultrawide field scanning laser retinal imaging (Optos P200MA and P200C; Optos plc, Dunfermline, Fife, U.K.). The protocol for retinal imaging has been previously described (6,8) and involves the acquisition of nonsimultaneous stereoscopic 100° and 200° ultrawide field image pairs centered on the macula of each eye. All images are acquired through undilated pupils (see Supplementary Fig. 1 for the study protocol flowchart).

At the time of imaging, JVN imagers prospectively performed two levels of DR assessment of the ultrawide field images: 1) American Telemedicine Association (ATA) category 1 assessment to identify patients with no or minimal DR (ETDRS level ≤20) versus those with DR more severe than ETDRS level 20 and 2) ATA category 2 assessment to identify patients with referable DR (moderate or worse levels of nonproliferative DR [NPDR] [ETDRS level ≥43], proliferative DR [PDR] [ETDRS level ≥61], or any level of DME) (12). Imagers were permitted to manipulate the color, brightness, contrast, gamma, or magnification of the images but did not view images stereoscopically. Because retinal thickening cannot be assessed directly without stereoscopic viewing, imagers relied on identifying hard exudates or microaneurysms within 3,000 μm of the center of the macula as surrogate markers for DME. Reading center evaluation of DME was performed stereoscopically on all images.
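
For illustration, the two assessment levels can be expressed as a simple decision rule. The sketch below is ours, not part of the JVN software; the ETDRS thresholds and the DME criterion follow the definitions above.

```python
# Illustrative sketch (ours, not JVN software) of the two ATA assessment
# levels described above.

def more_than_minimal_dr(etdrs_level: int) -> bool:
    """ATA category 1: more than minimal DR (ETDRS level > 20)."""
    return etdrs_level > 20

def referable_dr(etdrs_level: int, dme_present: bool) -> bool:
    """ATA category 2: moderate or worse NPDR (ETDRS level >= 43),
    PDR (ETDRS level >= 61, subsumed by the >= 43 test), or any DME."""
    return etdrs_level >= 43 or dme_present

# Example: ETDRS level 35 (mild NPDR) without DME is more than minimal DR
# but is not referable under these definitions.
assert more_than_minimal_dr(35) and not referable_dr(35, dme_present=False)
```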

One of the trained JVN imagers was a college graduate with a bachelor of arts degree in psychology, and the second was a certified medical assistant. Neither imager had prior ophthalmology or eye care experience other than nonmydriatic retinal imaging. One imager had 3.5 years and the other 6 months of prior retinal imaging experience using a Topcon TRC-NW6S (Topcon Medical Systems, Inc., Oakland, NJ) nonmydriatic fundus camera. Both imagers had been acquiring ultrawide field images with the Optos P200MA and P200C devices since 1 April 2012, giving each 18 months of UWFI experience, but neither had evaluated retinal images at the reading center. The study was conducted from 1 October 2013 to 30 September 2014. Each retinal imager underwent a validated, standardized training and certification program that included 4 h of didactic lectures and 12 h of guided review of DR images. Before the initiation of this study, imagers completed a 1-month provisional period during which questionable assessments were reviewed with each imager by the supervising retina specialist. Monthly 1-h meetings were convened to discuss questionable retinal images and findings under the supervision of a retina specialist or the senior retinal specializing optometrist. For the purposes of this study, the assessment of the imager was not changed after these discussions.

All images were subsequently graded independently by trained optometrists certified in JVN grading at a centralized reading center and masked to the imagers’ evaluation. All reading center evaluations were overread by a supervising retinal physician who was also masked to imager evaluations. Identical dual-monitor workstations were used at both the imaging and the reading stations. The specifications of the reading center displays have been reported previously, and the displays are calibrated biannually (8). All findings were recorded on specifically designed templates. The detailed protocol for evaluating ultrawide field images has been previously described and has shown substantial agreement with the grading of standard dilated 7-field ETDRS photography (6). For both imagers and reading center graders, an image ungradable for DR was defined as one with inadequate photographic quality or media opacity preventing the determination of the presence or absence of DR. If one or more disc areas of retina were visible in each ultrawide field–equivalent ETDRS-defined photographic field and that area was free of DR lesions (hemorrhages and/or microaneurysms [H/Ma], venous beading, intraretinal microvascular abnormalities [IRMA], new vessels on the disc, new vessels elsewhere in the retina), the grading was deemed DR absent. If DR lesions were present in the unobscured part of the field, DR was recorded as present regardless of the extent of the field obscured, and the same severity of DR was assumed to exist under the obscured portions of the field. Both the imagers and the reading center graders were able to magnify and adjust the image color, contrast, brightness, and gamma correction for each image.

The study design was consistent with the tenets of the Declaration of Helsinki, and the Committee on Human Studies of the Joslin Diabetes Center approved the research protocol. The conduct of the study complied with the Health Insurance Portability and Accountability Act.

Statistical Analysis

Nonparametric analyses (Wilcoxon rank sums) and Pearson correlations were used to compare distributions of continuous variables between groups. The χ2 test was used to compare frequencies of categorical variables. When DR severity was evaluated per patient rather than per eye, the more severe level of DR and DME present in either eye was used as the severity level present in the patient. If one eye was ungradable, the level of DR and DME present in the gradable eye was considered the severity level of DR and DME present in the patient. The presence or absence of DR (ATA category 1 evaluation) and the presence or absence of referable DR were assessed using sensitivity, specificity, and positive and negative predictive values. Eyes classified as ungradable were excluded from the analysis. All analyses were performed using SAS version 9.3 software (SAS Institute Inc., Cary, NC).
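
To make the relationship between these measures and the underlying 2×2 counts explicit, the following sketch computes them with Wilson 95% CIs. It is illustrative only: it is not the SAS code used for the analysis, the CI method actually used is not stated, and the counts in the example are hypothetical.

```python
# Illustrative only: computes the diagnostic accuracy measures reported in
# this study from 2x2 counts, with Wilson 95% CIs. Not the authors' SAS code;
# the example counts are hypothetical.
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for the proportion k/n."""
    p = k / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

def accuracy_measures(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, and NPV (their complements are the
    FNR, FPR, FDR, and FOR reported in Table 2)."""
    measures = {}
    for name, k, n in [
        ("sensitivity", tp, tp + fn),
        ("specificity", tn, tn + fp),
        ("PPV", tp, tp + fp),
        ("NPV", tn, tn + fn),
    ]:
        lo, hi = wilson_ci(k, n)
        measures[name] = (round(k / n, 2), (round(lo, 2), round(hi, 2)))
    return measures

# Hypothetical counts for illustration only
print(accuracy_measures(tp=842, fp=460, tn=2420, fn=43))
```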

During the study period, 3,978 eyes of 1,989 consecutive patients were evaluated. The mean age was 51.6 ± 17.4 (range 18–95) years, mean duration of diabetes was 13.7 ± 10.9 (0–64) years, and 45.1% were female and 78.1% white. The distribution of DR severity based on the reading center evaluation is presented per eye and per patient in Table 1.

Table 1

Gradable rates and distribution of DR severity

                                                 Per eye (n = 3,978)   Per patient* (n = 1,989)
Gradable for severity of DR at reading center    3,769 (94.7)          1,931 (97.1)
Gradable for presence of DME at reading center   3,704 (93.1)          1,912 (96.1)
Presence of referable DR at reading center       580 (15.3)            344 (17.8)
DR severity
  No DR                                          2,393 (63.5)          1,109 (57.4)
  Very mild NPDR                                 491 (13.0)            304 (15.7)
  Mild NPDR                                      362 (9.6)             217 (11.2)
  Moderate NPDR                                  230 (6.1)             131 (6.8)
  Severe NPDR                                    63 (1.7)              37 (1.9)
  Very severe NPDR                               4 (0.1)               2 (0.1)
  PDR                                            214 (5.7)             124 (6.4)
  PDR with high-risk characteristics             9 (0.2)               7 (0.4)
DME present                                      262 (7.1)             178 (9.3)

Data are n (%). Referable DR is defined as moderate NPDR or worse, PDR, or presence of DME.

*When DR severity was evaluated per patient rather than per eye, the more severe level of DR and DME present in either eye was used as the severity present in the patient. If one eye was ungradable, the level of DR and DME present in the gradable eye was considered the level of DR and DME present in the patient.

Identification of Ungradable Images

The identification of ungradable images is integrated into the image acquisition protocol so that suboptimal images can be retaken immediately. The rate of ungradable images for identifying the presence of DR as determined by the point-of-care imagers was 2.1% per eye and 0.9% per patient. Images deemed ungradable for DR severity at the reading center were 5.3% per eye and 2.9% per patient.

Identification of Retinopathy

Based on masked standardized reading center evaluation, more than minimal DR was present in 885 eyes (23.5%). Based on imager evaluation, the sensitivity for determining the presence of more than minimal DR was 0.95 (95% CI 0.94–0.97) and the specificity was 0.84 (0.82–0.85). Imager evaluation for DR resulted in a false-negative determination for 43 eyes (1.1%), with a false-negative rate [false negative / (true positive + false negative)] of 0.05. Of these, 31 (72.1%) had mild NPDR; 5 (11.6%) had moderate NPDR; 2 (4.7%) had macular edema; and 5 (11.6%) had DR, but the severity could not be determined. At the per-patient level, imager grading for DR had a sensitivity of 0.96 (0.94–0.98) and specificity of 0.77 (0.75–0.80). Imager evaluation resulted in a false-negative determination for 21 (1.1%) patients, with a false-negative rate of 0.05. Of these, 13 (61.9%) had mild NPDR; 1 (4.8%) had moderate NPDR; 2 (9.5%) had mild NPDR and macular edema; and 5 (23.8%) had DR, but the severity could not be determined. A summary of sensitivity, specificity, true-positive, false-positive, true-negative, and false-negative results at both the eye and the patient level is presented in Table 2. Individual ultrawide field images of all eyes with false-negative results for referable DR, with annotation of the specific DR lesions, are presented in Fig. 1. The lesions resulting in false-negative referable DR findings were primarily subtle early IRMA less than ETDRS standard photograph 8A (five eyes) and hard exudates less than ETDRS standard photograph 3 (two eyes).
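
As a back-of-envelope check (ours, not a calculation from the paper), the per-eye false-negative rate follows directly from the counts quoted above:

```python
# Back-of-envelope check of the per-eye false-negative rate for DR, using
# the counts reported above: 885 eyes with more than minimal DR at the
# reading center, of which 43 were missed by the imagers.
false_negatives = 43
eyes_with_dr = 885
fnr = false_negatives / eyes_with_dr          # FN / (TP + FN) = 1 - sensitivity
print(f"FNR = {fnr:.2f}, sensitivity = {1 - fnr:.2f}")  # ~0.05 and ~0.95
```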

Table 2

Summary statistics per eye and per patient for point-of-care image evaluation for identification of DR and referable DR

                         ATA category 1 evaluation (no or minimal        Referable DR*
                         DR vs. more than minimal DR)
                         Per eye            Per patient†         Per eye             Per patient†
                         (n = 3,978)        (n = 1,989)          (n = 3,978)         (n = 1,989)
Total gradable images‡   3,799 (95.5)§      1,944 (97.7)∥        3,758 (94.5)        1,926 (96.4)
Sensitivity              0.95 (0.94–0.97)   0.96 (0.94–0.98)     0.99 (0.97–0.99)    0.99 (0.97–1.00)
Specificity              0.84 (0.82–0.85)   0.77 (0.75–0.80)     0.76 (0.75–0.78)    0.69 (0.66–0.71)
PPV                      0.66 (0.63–0.68)   0.63 (0.60–0.66)     0.43 (0.33–0.37)    0.40 (0.37–0.44)
NPV                      0.98 (0.98–0.99)   0.98 (0.97–0.99)     1.00 (0.99–1.00)    1.00 (0.99–1.00)
FDR                      0.34 (0.32–0.37)   0.37 (0.34–0.40)     0.57 (0.54–0.60)    0.59 (0.55–0.63)
FOR                      0.02 (0.01–0.02)   0.02 (0.01–0.03)     0.002 (0.00–0.01)   0.002 (0.00–0.01)
FPR                      0.16 (0.15–0.18)   0.22 (0.20–0.24)     0.24 (0.22–0.25)    0.31 (0.29–0.33)
FNR                      0.05 (0.04–0.07)   0.04 (0.03–0.06)     0.01 (0.01–0.03)    0.01 (0.00–0.03)

Data are n (%) or proportion (95% CI). FDR, false-discovery rate; FNR, false-negative rate; FOR, false-omission rate; FPR, false-positive rate; NPV, negative predictive value; PPV, positive predictive value.

*Referable DR is defined as moderate NPDR or worse, PDR, or presence of DME.

†When DR severity was evaluated per patient rather than per eye, the more severe level of DR and DME present in either eye was used as the severity present in the patient. If one eye was ungradable, the level of DR and DME present in the gradable eye was considered the level of DR and DME present in the patient.

‡Images gradable at both the point-of-care evaluation by retina imagers and the reading center.

§In 41 eyes, the presence of DR was gradable, but the severity of DR could not be determined (e.g., definite signs of DR such as H/MA, but the disc and/or macula were obscured or the image quality in one or more quadrants did not allow for assessment of retinal lesions).

∥In eight patients, the presence of DR was gradable, but the severity of DR could not be determined (e.g., definite signs of DR such as H/MA, but the disc and/or macula were obscured or image quality in one or more quadrants did not allow for assessment of retinal lesions).

Figure 1

Images from all seven eyes with false-negative point-of-care results for referable DR (defined as reading center evidence of moderate NPDR or worse or any level of DME) compared with standard masked reading center evaluation. Each ultrawide field image is inlaid with a corresponding magnified section of the boxed area showing the missed lesion. Solid arrowheads indicate IRMA and open arrowheads indicate hard exudates within 1,500 μm of the foveal center. A–E: Eyes with moderate NPDR due to IRMA less than ETDRS standard photograph 8A. F and G: Eyes with hard exudates less than ETDRS standard photograph 3.


Identification of Referable Retinopathy

Based on masked standardized reading center evaluation, referable DR was present in 372 eyes (15.9%). Based on imager evaluation, the sensitivity for determining the presence of referable DR was 0.99 (95% CI 0.97–1.00) and specificity 0.76 (0.75–0.78). Seven (1.9%) eyes were identified as having false-negative results by imager evaluation for referable DR. Moderate NPDR was present in five eyes, and two eyes had macular edema. Considering the presence of referable DR on a per-patient level, sensitivity was 0.99 (0.97–1.00) and specificity 0.69 (0.66–0.71). Three (1.6%) patients were identified as having false-negative results by imager evaluation for referable DR, one with moderate NPDR and two with macular edema. A summary of these results at both the eye and the patient levels is presented in Table 2.

Comparison With Existing Automated Algorithms

Table 3 relates the results of point-of-care imager grading for the identification of DR (13,14) and referable DR (15) to prior publications using commercially available automated algorithms. The imager point-of-care evaluation compared favorably in all respects, even though the current study population of 3,978 eyes was smaller than the number tested by the other modalities. Sensitivity, specificity, and negative and positive predictive values were similar to or better than those reported for the automated algorithms (13–15). However, different methods of manual and automated grading as well as different image sets were used by the various studies, so these comparisons are presented as a reference and generally do not represent comparative performance.

Table 3

Results of point-of-care imager grading for the identification of DR and referable DR compared with prior publications of commercially available automated algorithms

                      Point-of-care imager evaluation   iGradingM*         IDx-DR†            RetmarkerDR‡
Sensitivity           0.99 (0.97–0.99)                  0.98 (0.97–0.99)   0.97 (0.94–0.99)   0.96 (0.94–0.98)
Specificity           0.76 (0.75–0.78)                  0.41               0.59 (0.56–0.63)   0.52 (0.50–0.53)
PPV                   0.43 (0.33–0.37)                  —                  0.49 (0.35–0.44)   —
NPV                   1.00 (0.99–1.00)                  —                  0.99 (0.97–1.00)   —
No. eyes evaluated    3,978                             33,535             16,670             5,386

Each DR grading program referenced was evaluated using different image sets and grading methods. The differences in the performance measures may be due to differences in the image sets, grading methods, algorithm thresholds, or a combination of these factors. These results are presented as a reference and generally do not represent comparative performance. NPV, negative predictive value; PPV, positive predictive value.

*Fleming et al. (13).

†Abràmoff et al. (15).

‡Oliveira et al. (14).

Image Evaluation Time at the Time of Imaging

The research study was conducted in an active telemedicine program that evaluates nearly 4,000 eyes each year (8), where routine, precise measurement of image evaluation time by the imagers is not feasible. However, in a subset of 118 consecutive patients, imager evaluation was specifically timed without any clinical interruptions. Imagers were aware that they were being timed and were instructed to complete the image evaluation without interruption immediately following the conclusion of the patient encounter. Image evaluation time was recorded electronically for the following steps: image retrieval, image display, image evaluation, and recording of findings on the specifically designed template. At least eight images from each patient, comprising 100° and 200° image pairs of each eye, were evaluated (6,8,16). The mean point-of-care image evaluation time by retinal imagers was 77.7 ± 39.6 s. Of note, the imagers evaluated images only for the presence or absence of DR; individual DR lesions and the ETDRS severity of DR were neither assessed nor recorded by the imagers. Furthermore, the environment where the images were evaluated was optimized for reading ultrawide field images as described in RESEARCH DESIGN AND METHODS.

Point-of-care DR evaluation at the time of UWFI by trained and certified nonphysician retinal imagers can reliably differentiate no to mild disease from more severe disease and can reliably identify referable DR. This approach may permit effective triage of patients with minimal retinopathy to less urgent follow-up and of those with referable disease to prompt ophthalmic care. Eyes with no or minimal DR may not require further evaluation by a centralized reading center, thereby potentially reducing reading center load by ∼60%. In addition, at least for patients with no or minimal DR, results might be provided at the conclusion of the imaging session, speeding patient feedback. Such a process might substantially increase efficiency of DR telemedicine programs to meet the huge ongoing medical need until real-time automated image grading algorithms are effectively incorporated into DR care.

It is important that telemedicine programs for DR conform to the recommended minimum 80% sensitivity and 95% specificity for detecting mild retinopathy. The methods described here do not replace a reading center evaluation but rather optimize the current workflow by using imagers to provide a real-time initial assessment of the presence or absence of DR, thereby reducing the overall number of images that the reading center needs to evaluate. In this model, it is essential for patient safety that sensitivity approach 100% to ensure that all patients with disease are accurately identified. In most diabetes populations, >50% of patients will have no or minimal disease (ETDRS level ≤20). Imager grading at the point of care can potentially identify these cases with a sensitivity that approaches 100%. Eyes identified by the imagers as having no or minimal DR would not require further reading center evaluation. Eyes identified by the imagers with ETDRS level ≥20 would be further evaluated. This model optimizes the workflow by reducing the number of images evaluated while maintaining a very high sensitivity and allowing the reading center to fully evaluate all cases identified as having DR. The reading center sensitivity and specificity for grading nonmydriatic ultrawide field images compared with mydriatic 7-field ETDRS photography for DR have been reported and exceed the current recommendations (ETDRS level 20 sensitivity 0.99 and specificity 1.00, ETDRS level ≥43 sensitivity 0.95 and specificity 0.94) (7). The agreement of nonmydriatic UWFI and gold standard mydriatic 7-field ETDRS photography has been published by independent groups (6,7). There is near-perfect agreement in determining DR severity (weighted κ 0.85) and good agreement in DME severity (weighted κ 0.66). Agreement of UWFI for DME severity is comparatively lower, but it meets or exceeds published agreement rates obtained with multiple imaging devices and methodologies.
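
The triage logic described above can be summarized in a short sketch (our naming and structure, not the JVN software): only eyes the imager calls no or minimal DR bypass the reading center, and everything else is fully graded.

```python
# Sketch of the triage workflow described above (our naming, not the JVN
# software). Only imager calls of "no or minimal DR" bypass reading center
# grading; all other calls, including ungradable images, are fully graded.

def triage(imager_call: str) -> str:
    """imager_call: 'no_or_minimal_dr', 'dr_present', 'referable_dr',
    or 'ungradable'."""
    if imager_call == "no_or_minimal_dr":
        return "immediate report; no reading center grading needed"
    if imager_call == "referable_dr":
        return "reading center grading and prompt ophthalmic referral"
    return "reading center grading"  # DR present or ungradable

print(triage("no_or_minimal_dr"))
```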

Imagers did not evaluate specific DR severity or individual DR lesions but provided an assessment only of the presence or absence of DR, a less demanding assessment of gradability that may account for their lower ungradable rate. The ungradable rate per patient (2.9% vs. 2.9%) and per eye (5.4% vs. 5.3%) remained essentially constant between this study, which evaluated all imaging encounters from 1 October 2013 to 30 September 2014, and a previous study that evaluated all imaging encounters from 1 April 2012 to 1 November 2012 (6).

In the current study, imagers did not assess the severity of macular edema or the presence or severity of individual DR lesions. Given the importance of assessment speed, they assessed only whether images were potentially gradable, whether DR was present, and whether referable DR was present. However, we have previously reported individual lesion sensitivity and specificity comparing nonmydriatic ultrawide field and ETDRS photography, finding that sensitivity is lowest for small IRMA and new vessels (7).

Key issues for DR telemedicine programs include lowering the rates of false-positive and false-negative image evaluations. Lowering the false-positive rate reduces the burden on formal reading resources and, when the suspected severity warrants referral, on specialist care. Lowering the false-negative rate ensures that patients who truly need care are appropriately referred. Of the two, the false-negative rate is the more important from the patient care standpoint. In this study, the false-negative rate of point-of-care ultrawide field image evaluation was 0.01 for the presence of referable DR both per eye and per patient. These low rates suggest that if point-of-care grading were performed and used for referral, for every 500 eyes evaluated in this manner, <1 eye with referable disease would not be identified and triaged to prompt care. No cases of severe NPDR, PDR, clinically significant macular edema, or center-involved DME were missed by this manner of evaluation at the point of care.
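
The "<1 eye in 500" figure follows from the small number of missed referable eyes relative to the eyes evaluated (a back-of-envelope calculation of ours using the counts reported here):

```python
# Back-of-envelope support for the "<1 referable eye missed per 500 evaluated"
# statement: 7 eyes with referable DR were missed among the eyes gradable for
# DR at the reading center (3,769); using all 3,978 imaged eyes as the
# denominator gives a similar result.
missed_referable_eyes = 7
gradable_eyes = 3769
per_500 = missed_referable_eyes / gradable_eyes * 500
print(f"~{per_500:.1f} missed referable eyes per 500 evaluated")  # ~0.9, i.e., <1
```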

Although the assessment of the retinal imagers had lower specificity and lower positive predictive value than formal grading, this tradeoff was incorporated into the design of the methodology to ensure that sensitivity values would approach 1.00 and thus that no patient with retinopathy was missed. All positively identified eyes were subsequently fully evaluated by the reading center. Despite the relatively lower specificity and positive predictive value, in this cohort 57.6% of patients and 63.6% of eyes had no DR, whereas only 17.8% of patients and 15.3% of eyes had referable DR. As a result, reading center burden could be reduced by ∼60% and 15% if images with no DR or with referable DR, respectively, are excluded from the overall reading queue. Extrapolated to the entire U.S. diabetic population, this would reduce the reading center grading burden by ∼7.8–31 million eyes annually.
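
These workload figures can be checked roughly against Table 1; the sketch below is our approximation, and the ~52 million U.S. eyes figure is an assumption back-calculated from the 7.8–31 million range quoted above, not a number stated in the study.

```python
# Rough check (ours) of the workload-reduction figures against Table 1.
# The 52 million U.S. eyes figure is an assumption inferred from the
# 7.8-31 million range quoted in the text, not a value from the study.
no_dr_eyes, referable_eyes, imaged_eyes = 2393, 580, 3978
skip_no_dr = no_dr_eyes / imaged_eyes          # ~0.60: no DR, no grading needed
skip_referable = referable_eyes / imaged_eyes  # ~0.15: referable DR, direct referral
us_eyes = 52_000_000
print(f"no DR: {skip_no_dr:.0%}, referable DR: {skip_referable:.0%}")
print(f"~{skip_referable * us_eyes / 1e6:.0f} to ~{skip_no_dr * us_eyes / 1e6:.0f} million eyes")
# prints roughly 60% / 15% and ~8 to ~31 million eyes, consistent with the text
```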

This study was not designed for cost-effectiveness analysis; however, the main cost differential between UWFI and standard retinal photography is the cost of the imaging device. In large populations, the efficiency, operational, and patient care benefits may outweigh the increased costs of the UWFI device. Furthermore, the costs of imaging devices are likely to decrease over time with further technological innovations and market competition. Standard nonmydriatic fundus photography requires trained retinal photographers to ensure high-quality retinal images adequate for retinal evaluation. However, even with dedicated retinal photographers, a comparatively higher ungradable rate, which has been reported to be between 13% and 54%, continues to represent a significant barrier to the appropriate identification of disease. In primary care and nonophthalmic settings, the relative ease of UWFI compared with nonmydriatic fundus photography provides a significant advantage. At the Joslin Diabetes Center, ∼20,000 patients are imaged over a 5-year period. A comparison of the ∼$100,000 cost of the ultrawide field device with a $25,000 traditional fundus camera spread over the 5 years results in an additional cost per patient of $3.75. Given the >70% reduction in the ungradable rate, >20% reduction in grading time, improved ease of use, and increased DR identification, this increased cost per patient may be outweighed by the significant benefits provided by UWFI.
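
The $3.75 figure is the device price difference amortized over the imaging volume; the short calculation below simply restates the arithmetic given in the text.

```python
# Restating the per-patient cost arithmetic given in the text.
uwf_device_cost = 100_000       # approximate cost of the ultrawide field device (USD)
fundus_camera_cost = 25_000     # cost of a traditional fundus camera (USD)
patients_in_5_years = 20_000    # approximate Joslin Diabetes Center imaging volume
extra_cost_per_patient = (uwf_device_cost - fundus_camera_cost) / patients_in_5_years
print(f"${extra_cost_per_patient:.2f} additional cost per patient")  # $3.75
```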

Multiple benefits may result from immediate point-of-care imager grading. We previously demonstrated that ungradable images can be readily identified by the imager, allowing immediate recapture of images to improve quality before the patient telemedicine encounter ends (5). The current study demonstrates that the presence of more than minimal DR as well as referable DR can be reliably identified. Thus, urgent referrals to appropriate care could be initiated rapidly. Patients without DR are also identified, and a report could be generated at the end of the telemedicine session, eliminating the need for further image evaluation of patients without disease. In 2011, the Centers for Medicare & Medicaid Services introduced telemedicine Current Procedural Terminology codes (92227, remote diagnostic retinal imaging), now standard in the 2013 Current Procedural Terminology, Fourth Edition (www.cms.gov/apps/physician-fee-schedule/search/search-results.aspx?Y=0&T=0&HT=0&CT=3&H1=92227&M=5); these codes allow a patient report to be generated under physician supervision only if no disease is present, thus potentially supporting this approach (17). In addition, a preliminary report regarding the presence or absence of DR could be provided to the patients at the conclusion of the point-of-care imaging session. This approach would allow for immediate, highly structured, patient-specific education, potentially enhancing the value of the patient encounter from both an ophthalmic and a medical perspective.

There are limitations to this study regarding generalizability of the results. Retinal imagers were under the direct supervision of a retina specialist and received a standardized method of certification and training. Whether additional training might have improved outcomes or how differences in the training approach would alter results is unknown. Furthermore, the investment in training and ongoing oversight must be considered. In this study, imager point-of-care ultrawide field evaluation compared favorably in all respects to published outcomes of automated grading algorithms. However, different methods of manual and automated grading as well as different image sets were used by the various studies. Whether the small performance differences between the methodologies may be attributed to the accuracy of each method or related to the different image sets or approaches studied is unknown. Although implementation of highly sensitive and specific automated grading algorithms would be ideal to address the burden of DR evaluation, until such ideals are realized, the approach detailed in the current study represents an accurate and viable alternative.

In summary, this study demonstrates that appropriately trained and certified imagers following a defined imaging and grading protocol can accurately evaluate images for the presence of either DR or referable DR at the time of UWFI. With a sensitivity and negative predictive value that approach 1.00, this methodology could result in a substantial reduction of centralized reading center burden and speed delivery of information and education to the patient. Furthermore, the accurate identification of referable DR allows prompt eye care referral, reducing the burden caused by false-positive results. The cost of current UWFI devices remains restrictive in resource-poor settings or in programs that care for only a limited number of patients with diabetes. However, if the findings of the current study are replicated across large and diverse populations and if innovations such as those discussed here are rigorously validated and adopted, the impact on delivery of diabetes eye care could be substantial.

Funding. The study was supported by grant funding from the Center of Integration of Medicine and Innovative Technology to P.S.S. JVN technology was developed at the Joslin Diabetes Center. All the authors were employees of the Joslin Diabetes Center at the time the study was conducted.

Duality of Interest. One of the two Optos P200 instruments used in this study was provided by Optos plc (Dunfermline, Fife, U.K.) to the Joslin Diabetes Center on temporary loan. No other potential conflicts of interest relevant to this article were reported.

Author Contributions. P.S.S. and J.D.C. researched data and wrote the manuscript. A.M.T., J.R., S.R., R.A., D.T., B.P., M.S., and K.T. researched data and reviewed and edited the manuscript. J.K.S. researched data, contributed to the discussion, and reviewed and edited the manuscript. L.P.A. contributed to the discussion and reviewed and edited the manuscript. P.S.S. and J.D.C. are the guarantors of this work and, as such, had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Prior Presentation. Parts of this study were presented in abstract form at ATA 2014, Baltimore, MD, 17–19 May 2014, and the Association for Research in Vision and Ophthalmology Annual Meeting, Denver, CO, 3–7 May 2015.

1. American Diabetes Association. Microvascular complications and foot care. Sec. 9. In Standards of Medical Care in Diabetes-2015. Diabetes Care 2015;38(Suppl. 1):S58–S66
2. Silva PS, Cavallerano JD, Aiello LM, Aiello LP. Telemedicine and diabetic retinopathy: moving beyond retinal screening. Arch Ophthalmol 2011;129:236–242
3. Schoenfeld ER, Greene JM, Wu SY, Leske MC. Patterns of adherence to diabetes vision care guidelines: baseline findings from the Diabetic Retinopathy Awareness Program. Ophthalmology 2001;108:563–571
4. Salti H, Cavallerano JD, Salti N, et al. Nonmydriatic retinal image review at time of endocrinology visit results in short-term HbA1c reduction in poorly controlled patients with diabetic retinopathy. Telemed J E Health 2011;17:415–419
5. Cavallerano JD, Silva PS, Tolson AM, et al. Imager evaluation of diabetic retinopathy at the time of imaging in a telemedicine program. Diabetes Care 2012;35:482–484
6. Silva PS, Cavallerano JD, Sun JK, Noble J, Aiello LM, Aiello LP. Nonmydriatic ultrawide field retinal imaging compared with dilated standard 7-field 35-mm photography and retinal specialist examination for evaluation of diabetic retinopathy. Am J Ophthalmol 2012;154:549–559
7. Kernt M, Hadi I, Pinter F, et al. Assessment of diabetic retinopathy using nonmydriatic ultra-widefield scanning laser ophthalmoscopy (Optomap) compared with ETDRS 7-field stereo photography. Diabetes Care 2012;35:2459–2463
8. Silva PS, Cavallerano JD, Tolls D, et al. Potential efficiency benefits of nonmydriatic ultrawide field retinal imaging in an ocular telehealth diabetic retinopathy program. Diabetes Care 2014;37:50–55
9. Wessel MM, Aaker GD, Parlitsis G, Cho M, D’Amico DJ, Kiss S. Ultra-wide-field angiography improves the detection and classification of diabetic retinopathy. Retina 2012;32:785–791
10. Aiello LM, Bursell SE, Cavallerano J, Gardner WK, Strong J. Joslin Vision Network Validation Study: pilot image stabilization phase. J Am Optom Assoc 1998;69:699–710
11. Sanchez CR, Silva PS, Cavallerano JD, Aiello LP, Aiello LM. Ocular telemedicine for diabetic retinopathy and the Joslin Vision Network. Semin Ophthalmol 2010;25:218–224
12. Li HK, Horton M, Bursell SE, et al. Telehealth practice recommendations for diabetic retinopathy, second edition. Telemed J E Health 2011;17:814–837
13. Fleming AD, Goatman KA, Philip S, Prescott GJ, Sharp PF, Olson JA. Automated grading for diabetic retinopathy: a large-scale audit using arbitration by clinical experts. Br J Ophthalmol 2010;94:1606–1610
14. Oliveira CM, Cristóvão LM, Ribeiro ML, Abreu JR. Improved automated screening of diabetic retinopathy. Ophthalmologica 2011;226:191–197
15. Abràmoff MD, Folk JC, Han DP, et al. Automated analysis of retinal images for detection of referable diabetic retinopathy. JAMA Ophthalmol 2013;131:351–357
16. Silva PS, Cavallerano JD, Sun JK, Soliman AZ, Aiello LM, Aiello LP. Peripheral lesions identified by mydriatic ultrawide field imaging: distribution and potential impact on diabetic retinopathy severity. Ophthalmology 2013;120:2587–2595
17. Centers for Medicare & Medicaid Services. Medicare program; payment policies under the physician fee schedule and other revisions to part B for CY 2011 [article online], 2011. Available from http://www.regulations.gov/#!documentDetail:D=CMS-2010-0205-1982. Accessed 21 January 2015
