The risk of hypoglycemia is a major barrier to intensifying medical therapy in patients with diabetes who use insulin secretagogues or exogenous insulin.1–3 By promoting the transport of blood glucose into insulin-sensitive tissues, these medications can cause hypoglycemia if their action is not counterbalanced by carbohydrate intake. Conversely, excessive carbohydrate intake can override the effect of hypoglycemic agents, resulting in postprandial hyperglycemia.
Daily patterns of glycemic control are profoundly influenced by the interaction of diabetes medications with carbohydrate intake. Patients who do not recognize carbohydrate foods, and those who cannot accurately estimate the number of carbohydrate grams in foods, may vary their carbohydrate intake unknowingly. In addition, many patients are under the impression that the overall size of a meal, rather than the amount of carbohydrate consumed, is the greatest determinant of the magnitude of the postprandial blood glucose excursion. For patients trying to regulate daily blood glucose levels, this ignorance of carbohydrate intake exposes them to unpredictable, potentially dangerous glycemic variability. For clinicians trying to prescribe appropriate medication regimens, interpreting blood glucose patterns is problematic when patients cannot report (or cannot report correctly) the missing variable of carbohydrate intake.
Carbohydrate counting is a meal-planning method that can be used in diabetes self-management to estimate carbohydrate intake. Much of the published experience with this methodology arose from the practicalities of managing patients with type 1 diabetes who participated in the landmark Diabetes Control and Complications Trial.1,4 The American Diabetes Association has acknowledged that monitoring the pattern and amount of carbohydrate in patients' diets is key to achieving glycemic control.5 However, studies of patients with either type 1 or type 2 diabetes show that knowledge deficits regarding diet and hypoglycemia are substantial.6 Although monitoring and controlling carbohydrate intake are behaviors that may be influenced by factors other than knowledge, knowledge is a prerequisite for informed behavior, and for this reason, current diabetes standards of care include patient education about basic carbohydrate counting.2,3,7
To identify patients who are appropriate candidates for intensification of medication therapy, clinicians need to be able to assess patients' knowledge of carbohydrate counting. To improve the efficiency and effectiveness of the carbohydrate-counting meal-planning approach, registered dietitians (RDs) and other diabetes educators need to be able to assess gaps in patients' understanding of carbohydrate counting. Currently, there is no validated instrument specifically focused on testing carbohydrate knowledge among adult patients with diabetes.
Study Objectives
The purpose of this study was to develop and validate a test of carbohydrate-counting knowledge that would be useful in typical clinic settings and would enable clinicians to determine whether their patients need carbohydrate education. A secondary aim was to create a tool to assist RDs and other diabetes educators in determining specific gaps in patients' knowledge. The interaction between specific hypoglycemic medications and carbohydrate intake was purposely left out of this instrument. Instead, the test focuses on knowledge of dietary carbohydrate and its role in raising blood glucose or preventing hypoglycemia.
Methods and Design
Development of the AdultCarbQuiz instrument
To develop a carbohydrate counting quiz, the researchers identified six domains of knowledge as important in teaching patients how to self-manage their carbohydrate intake. The domains included 1) recognition of carbohydrate in commonly eaten foods, 2) ability to count the carbohydrate content in typical portions of simple foods, 3) ability to interpret a nutrition label for carbohydrate content, 4) knowledge of glycemic targets, 5) knowledge about preventing and treating hypoglycemia using carbohydrate foods, and 6) ability to sum up the carbohydrate content of a meal. The researchers created quiz items for each of these domains, which were then field-tested by patients within the diabetes practice and by non-diabetologist physicians to ensure clarity and simplicity of format.
The final AdultCarbQuiz consisted of 43 items divided among the six domains as follows: carbohydrate food recognition (19 items), carbohydrate food content (7 items), nutrition label reading (4 items), glycemic targets (4 items), hypoglycemia prevention and treatment (5 items), and carbohydrate content of meals (4 items). Because some patients have been taught to count carbohydrate content as grams and others use the common 15-g carbohydrate servings (sometimes called “carb choices”), the domain of carbohydrate food content (items 20–26 on the quiz) was offered in both formats, and participants were instructed to complete one or the other.
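For readers unfamiliar with the “carb choices” convention, the two formats are related by simple arithmetic (one choice equals 15 g of carbohydrate). The brief sketch below is illustrative only and is not part of the instrument:

```python
# Grams-to-"carb choices" conversion sketch (1 carb choice = 15 g of carbohydrate).
def carb_choices(grams: float) -> float:
    """Convert grams of carbohydrate to 15-g carb choices."""
    return grams / 15

print(carb_choices(45))   # -> 3.0 carb choices
```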
The quiz is paper-based and includes true-or-false and multiple-choice questions. Food items were chosen to reflect common patient knowledge gaps seen in practice. For example, we have found that patients often misidentify high-calorie, non-carbohydrate foods (such as butter, sausage, and cheese) as containing carbohydrate, while incorrectly assuming that “healthy” foods (such as fruit) contain no carbohydrate at all.
The choice of “Don't Know” was included for all items. Although guessing can be accounted for in scoring and validating knowledge tests designed for group comparisons, the AdultCarbQuiz was designed for individual patient assessment. Therefore, identifying specific uncertainties in specific domains of carbohydrate counting was deemed important for the test's utility. In addition to the item answers, a choice of “Never Eat” was added to items with food examples to survey food-item relevance to the target population of Veterans Affairs (VA) patients.
Study design
This was a cross-sectional study. The AdultCarbQuiz was self-administered at a single session by patient-participants and RD-participants after instructions from study personnel. For items containing food examples, participants were instructed to first answer the question and then circle the “Never Eat” indicator only if it applied. The test was collected by study staff immediately after completion. Patient-participants also completed a questionnaire that included demographic information, highest level of education, type of diabetes, a list of diabetes medications, and the most recent A1C value. Permission was obtained to review the participants' medical charts at the Louis Stokes Cleveland Department of Veterans Affairs Medical Center (LSCDVAMC) to verify A1C results, diabetes medications, and attendance at nutrition education sessions within the past 3 years. The study was approved by the LSCDVAMC institutional review board.
Study setting
The study was conducted in the outpatient clinics of the LSCDVAMC Wade Park and Brecksville, Ohio, facilities. The LSCDVAMC had an active enrollment of ∼20,000 patients with diabetes at the time of the study. Of these, ∼5,000 were actively seen as outpatients at the Wade Park and Brecksville facilities. Fewer than 5% of these patients had type 1 diabetes.
Participant recruitment and selection
Patient-participants were recruited from primary care clinics, specialty diabetes clinics, and diabetes education classes. The inclusion criterion was a diagnosis of type 1 or type 2 diabetes. Exclusion criteria were inability to read or understand English and cognitive deficits severe enough to hinder self-administration of the written instrument. RD-participants were recruited from the LSCDVAMC. Most had experience teaching carbohydrate counting to patients with diabetes but did not work exclusively with diabetes patients.
Statistical analysis
Test answers were transcribed from the paper forms to spreadsheets. All answers were scored as correct or incorrect. Missing answers or answers of “Don't Know” were scored as incorrect. The survey of “Never Eat” was likewise transcribed. Only food items with a positive indication of “Never Eat” were counted as such.
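As a rough illustration of these scoring rules (not the study's analysis code, which was written in SAS), the following sketch with hypothetical item keys marks an item correct only when the recorded answer matches the key, so blank and “Don't Know” responses score zero:

```python
# Scoring sketch with hypothetical item keys; the study's actual analyses used SAS.
ANSWER_KEY = {1: "True", 2: "False", 3: "B"}    # item number -> correct answer (illustrative)

def score_quiz(responses):
    """Return the total score; missing or "Don't Know" answers count as incorrect."""
    total = 0
    for item, key in ANSWER_KEY.items():
        answer = responses.get(item)             # None if the item was left blank
        if answer == key:                        # "Don't Know" or a blank never matches the key
            total += 1
    return total

print(score_quiz({1: "True", 2: "Don't Know"}))  # -> 1 (item 2 and blank item 3 score 0)
```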
Reliability of the AdultCarbQuiz was assessed by split-half reliability, using Pearson's correlation coefficient to correlate the scores for odd-numbered items with those for even-numbered items. The Spearman-Brown prediction formula was used to correct the reliability coefficient for the full, 43-item quiz. Internal consistency within each knowledge domain and for the whole instrument was assessed with the Kuder-Richardson 20 formula. Test-retest assessment was not deemed appropriate for evaluating reliability because the instrument is a test of knowledge, and new knowledge could easily be gained by review of the test answers. Indeed, to encourage participation, study personnel frequently offered to review test items and give the correct answers after completion of the quiz.
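For readers unfamiliar with these statistics, the sketch below (using simulated item responses; the study's analyses used SAS) shows the underlying calculations: Pearson's r between odd- and even-item half scores, the Spearman-Brown correction 2r/(1 + r) to full test length, and the Kuder-Richardson 20 coefficient.

```python
# Reliability sketch with simulated 0/1 item scores (rows = respondents, columns = items).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
ability = rng.normal(size=(30, 1))              # latent respondent knowledge (illustrative)
difficulty = rng.normal(size=(1, 10))           # latent item difficulty (illustrative)
items = (ability - difficulty + rng.normal(size=(30, 10)) > 0).astype(int)

# Split-half reliability: correlate odd-item and even-item half scores, then apply
# the Spearman-Brown prediction formula to estimate full-length reliability.
odd_half = items[:, 0::2].sum(axis=1)
even_half = items[:, 1::2].sum(axis=1)
r_half, _ = pearsonr(odd_half, even_half)
r_full = 2 * r_half / (1 + r_half)

# Kuder-Richardson 20: (k / (k - 1)) * (1 - sum(p * q) / variance of total scores).
k = items.shape[1]
p = items.mean(axis=0)
kr20 = (k / (k - 1)) * (1 - np.sum(p * (1 - p)) / items.sum(axis=1).var(ddof=1))

print(f"split-half r = {r_half:.2f}, Spearman-Brown = {r_full:.2f}, KR-20 = {kr20:.2f}")
```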
Construct validity was tested in three ways. First, the Kruskal-Wallis test was used to compare quiz scores between a group with expected very high knowledge (RDs) and a group with expected lower knowledge (patients). Second, the Kruskal-Wallis test was used to compare the quiz scores of a participant group with expected higher knowledge (those who had had at least one VA nutrition education session within the previous 3 years) to those of a group without such nutrition education. Third, quiz scores were compared with an overall knowledge rating assigned by expert raters. A convenience subset of 54 patient-participants, referred by their clinical providers for individual sessions to learn carbohydrate counting, was rated by one of three RD certified diabetes educators (CDEs) in practice at the VA. Before beginning the educational session, participants self-administered the quiz, and the RD CDEs subsequently collected the quizzes while remaining blinded to the responses.
The RD CDEs then rated each participant's knowledge before the educational session using Likert-type scales ranging from 0 (lowest knowledge) to 5 (highest knowledge). Five such scales were completed for each participant: 1) recognition of carbohydrate-containing foods, 2) carbohydrate counting for individual foods or meals typically eaten by the participant, 3) interpretation of a nutrition label for carbohydrate content, 4) insight into target blood glucose values before and after eating, and 5) prevention and treatment of hypoglycemia. The five Likert scores were summed, and the sum was correlated with the total AdultCarbQuiz score using Pearson's correlation coefficient.
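The construct-validity comparisons amount to a rank-based two-group test and a simple correlation; a minimal sketch with invented scores (the study itself used SAS) illustrates the form of these calculations:

```python
# Construct-validity sketch with invented scores; actual analyses used SAS V9.1.
from scipy.stats import kruskal, pearsonr

rd_scores = [42, 41, 40, 43, 39]                    # hypothetical RD-participant quiz totals
patient_scores = [18, 25, 30, 22, 27, 15, 33]       # hypothetical patient-participant totals
h_stat, p_value = kruskal(rd_scores, patient_scores)  # rank-based two-group comparison
print(f"Kruskal-Wallis H = {h_stat:.2f}, P = {p_value:.3f}")

# Expert-rating comparison: summed Likert ratings vs. the same patients' quiz totals.
likert_sums = [12, 20, 9, 17, 23, 7, 15]            # hypothetical summed RD CDE ratings
r, p = pearsonr(likert_sums, patient_scores)
print(f"Pearson r = {r:.2f}, P = {p:.4f}")
```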
Criterion validity was tested by linear regression relating the total quiz score to the most recent A1C value among participants taking insulin and/or insulin secretagogues. The rationale for limiting this test of validity to patients using these agents is that glycemic control among patients who retain endogenous insulin production may not be affected by dietary carbohydrate intake, especially if insulin-sensitizing agents are also used. Thus, knowledge of carbohydrate counting and the attendant dietary intake of carbohydrate may not influence A1C values among such patients.
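In sketch form (invented values; the study used SAS), this amounts to a simple least-squares fit; note that the adjusted r2 reported in the Results differs slightly from the plain squared correlation shown here:

```python
# Regression sketch with invented values: total quiz score vs. most recent A1C (%).
from scipy.stats import linregress

quiz_scores = [10, 15, 22, 28, 33, 38, 41]           # hypothetical totals for insulin users
a1c_values = [10.2, 9.4, 8.8, 8.1, 7.6, 7.0, 6.8]    # hypothetical most recent A1C values (%)

fit = linregress(quiz_scores, a1c_values)            # slope < 0 indicates an inverse association
print(f"beta = {fit.slope:.3f}, r-squared = {fit.rvalue**2:.2f}, P = {fit.pvalue:.4f}")
```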
The mean quiz score and its variance were unknown at the start of this observational study. The researchers therefore aimed for a minimum sample size of 120 test-takers to provide acceptable precision for the estimate of the population mean score, with confidence limits based on the Student's t distribution.
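For context, the precision of a mean estimate can be expressed as the half-width of its 95% confidence interval, t(0.975, n−1) × s/√n. The sketch below uses an assumed score standard deviation purely for illustration, since the true value was unknown before the study:

```python
# Precision sketch: 95% CI half-width for the mean quiz score at n = 120,
# using an assumed standard deviation (the true SD was unknown before the study).
import math
from scipy.stats import t

n, assumed_sd = 120, 8.0                              # assumed SD of total scores (illustrative)
half_width = t.ppf(0.975, df=n - 1) * assumed_sd / math.sqrt(n)
print(f"95% CI half-width = {half_width:.2f} points") # about 1.45 points around the mean
```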
SAS statistical software, version 9.1 (SAS Institute, Cary, N.C.), was used for all analyses. Statistical significance was defined as P < 0.05, two-tailed.
Study Results
The final 43-item quiz, with formatting and correct answers, is shown in Figure 1. Most participants completed the quiz in < 15 minutes.
Participant characteristics
Demographic characteristics and glycemic control methods of the 132 patient-participants are shown in Table 1. The patient-participant sample was predominantly older (mean age 60 years), white (59%), and male (98%), reflecting the demographics of the U.S. VA health care system population. Glycemic control was widely variable, with a mean A1C of 8.3% (standard deviation 2.1%). Ninety percent had graduated from high school, and 59% had attended at least one VA nutrition education session within the past 3 years. Seventy-nine percent were taking insulin and/or an insulin secretagogue, whereas 21% were taking an insulin sensitizer only or no diabetes medication.
Food-item commonality
Of the 26 food items tested for carbohydrate recognition or counting, two (maple syrup and blackberry jam) were rated as “Never Eat” by 10% and 11% of respondents, respectively (Table 2). All other food items were rated as “Never Eat” by < 10% of respondents.
Instrument reliability
The split-half reliability coefficient, comparing odd-numbered items to even-numbered items, was 0.92 with the Spearman-Brown prediction formula correction (Table 3). The Kuder-Richardson 20 coefficient for the entire 43-item AdultCarbQuiz was 0.90, whereas the Kuder-Richardson 20 coefficients for each of the six domains of knowledge ranged from a low of 0.75 for the domain of “Counting carbohydrate in a meal” to a high of 0.88 for the domains of “Recognition of carbohydrate foods” and “Interpreting a nutrition label for carbohydrate content.”
Instrument validity
The distribution of total AdultCarbQuiz scores for the 132 patient-participants and the RD-participants is shown in Figure 2. The mean score for all patient-participants was 23.9 (SD 8.3) of a maximum possible score of 43, whereas the mean score for the 15 RD-participants was 41.4 (SD 1.5), a difference that was highly significant (P < 0.0001). The patient-participants who had nutrition education within the past 3 years scored higher (mean score 24.6, SD 7.9) on average than patient-participants without nutrition education (mean score 21.6, SD 8.0) (P < 0.05).
Patient-participants' knowledge of carbohydrate counting, as rated by one of three RD CDE experts, correlated significantly with total AdultCarbQuiz scores (Pearson's r coefficient 0.65, P < 0.0001).
Among patient-participants using insulin, the most recent A1C value was inversely related to the total AdultCarbQuiz score (adjusted r2 0.25, β –0.12, P < 0.0001) (Figure 3). If patient-participants using insulin secretagogues were also included, the association remained significant and inverse (β –0.10, P = 0.0005), but the association was weakened (adjusted r2 0.11).
Discussion
This study demonstrates that a relatively short (43-item) paper-based test of knowledge of carbohydrate counting (the AdultCarbQuiz) has good reliability and validity to assess carbohydrate counting knowledge among older male adults with diabetes. We believe this is the first such instrument specifically designed to test carbohydrate counting knowledge among adults with diabetes, although other instruments measuring general knowledge of diabetes self-management have been validated,8 and a test of carbohydrate knowledge focused on children with type 1 diabetes was recently validated.9
In this study, reliability of the AdultCarbQuiz was demonstrated by excellent split-half reliability and good to excellent internal consistency within the six knowledge domains judged to be important a priori by a team of diabetes educators and specialists. Therefore, respondents' answers appear to have an acceptably small degree of random variation across test items of similar content.
Obviously, it is impossible to test knowledge for the entire universe of foods eaten by a population of patients with diabetes. A representative subset of foods must be chosen for inclusion among the test items. The results of this study show that the food items chosen for the AdultCarbQuiz have an extremely high rate of commonality among the target patient population of mostly older, non-minority men; only two of 26 food items were rated as “Never Eat” by ≥ 10% of patient-participants.
The validity of an instrument depends heavily on the use for which it is designed. We designed the AdultCarbQuiz to inform clinicians of meaningful gaps in patients' knowledge of carbohydrate counting. For example, a patient who assumes that cheese contains carbohydrate (because milk contains carbohydrate) would not only be incorrect but could also be harmed by counting cheese toward the carbohydrate content of a meal plan. Such knowledge gaps need to be remedied before committing a patient to an intensified prandial insulin regimen. The AdultCarbQuiz is a tool that can help health care providers who are not experts in diabetes medical nutrition therapy rapidly assess patients' carbohydrate-counting knowledge.
We infer that the AdultCarbQuiz score has clinical validity in diabetes self-management from our finding that respondent groups expected to differ in knowledge of carbohydrate counting scored divergently. We found not only a wide difference in scores between patient-participants and RD-participants (an example of comparing extreme groups), but also a significant difference in scores between patient-participants who had had recent nutrition education and those who had not. Thus, AdultCarbQuiz scores appear to reflect patients' degree of carbohydrate-counting knowledge and fulfill our expectation that teaching patients about carbohydrate counting in diabetes results in knowledge gained.
Our finding of a significant positive correlation between assessment of carbohydrate knowledge by expert RD CDEs and total quiz score also supports the inference that the AdultCarbQuiz score measures carbohydrate-counting knowledge. Assessment of carbohydrate-counting knowledge by an RD CDE trained and experienced in teaching carbohydrate counting is the current gold standard for assessing patients' knowledge.
Finally, our finding of an association between AdultCarbQuiz score and recent glycemic control as represented by A1C values highlights the clinical importance of dietary self-management in metabolic control of diabetes and is consistent with other studies.7,10,11 Our study results provide further support for the validity of the AdultCarbQuiz as a measure not only of knowledge of carbohydrate counting, but also of effective self-management behavior.
This study did not test the ability of the AdultCarbQuiz to show knowledge gained by an individual patient after a nutrition education program. Therefore, we are unable to infer that change scores on the AdultCarbQuiz have validity in assessing the quality of education across time. Our conclusions are also limited to the patient sample and may not generalize to younger or female patients. However, we were able to include in our study sample patients with widely varying knowledge of diabetes self-management, ranging from those not yet requiring any medications to those using insulin pumps. We also included patients with either type 1 or type 2 diabetes. Therefore, the AdultCarbQuiz may be suitable for assessing carbohydrate knowledge among older patients at varying stages of type 2 diabetes and varying degrees of diabetes education, as found in many primary care practices.
Some studies have questioned whether carbohydrate counting is relevant to the management of patients with type 2 diabetes because insulin doses can be successfully titrated using preprandial blood glucose values.11 However, as noted by Davis and Wylie-Rosett, incorporating carbohydrate counting into self-management may offer the additional benefits of less weight gain and fewer hypoglycemic events.12
In conclusion, the 43-item, paper-based AdultCarbQuiz is a novel tool for assessing patients' knowledge of carbohydrate counting for diabetes management in < 15 minutes. It is suitable for use in office settings or patient education programs. By increasing awareness of knowledge gaps, the quiz may help providers and patients make better decisions regarding intensification of medical therapy, as well as the need to refer patients for medical nutrition therapy or nutrition education.
Article Information
The authors acknowledge the help of master's-level graduate students in the Nutrition Department of Case Western Reserve University (CWRU) in Cleveland, Ohio: Jamie Day, Tiffany Fobes, and Erin Camp, as well as medical student Amy Ricke. The authors also thank Dr. Allison Steiber of the CWRU Department of Nutrition and Dr. Anne Raguso, education director and dietetic internship director for the Louis Stokes Cleveland VA and an assistant professor in the Nutrition Department of CWRU, for their support of the research project. They also thank Debbie Hayes, RD, CDE, of University Hospitals Case Medical Center and Mary Beth Skala, RD, CDE, and Annette Petersen, RN, RD, CDE, of the Louis Stokes Cleveland VA. Dr. Kern was supported for this work, in part, by the National VA Quality Scholars Fellowship Program. The work was also supported by an Emerging Technologies grant from Veterans Integrated Service Network 10. In addition, all authors gratefully acknowledge the support of the administration of the LSCDVAMC.