A recent editorial regarding diabetes education discussed the difficulty of doing quality research in this area (1). Behavioral research is inherently difficult (behavioral researchers like to call it “hard science” as opposed to the “easy science” of other fields). However, I will not discuss methods per se, focusing instead on the role of theory. Below I address the ways that theory can inform research, and I offer some thoughts about the role of extra-scientific considerations in research. What I say is particularly addressed to behavioral researchers, but much of it is relevant to other researchers as well.
Theory testing.
The role of theory in behavioral diabetes research is poorly understood. Many see theory as a grandiose conceptual scheme with little relevance to research. But it has been noted that there is nothing so practical as a good theory. The problem lies in how theory is used. For example, consider the use of theory in recent research on behavioral interventions in diabetes. A recent study tested a behavioral intervention based on the Transtheoretical Model (TTM), currently a popular theory, against standard care (i.e., standard care alone versus standard care plus the behavioral intervention) (2). To the surprise of no one familiar with behavioral interventions, adding the behavioral intervention produced better outcomes. But did this “prove” TTM, or did it simply demonstrate that behavioral interventions work? And when Anderson et al. (3) found benefit from a behavioral intervention based on empowerment concepts, did this “prove” a theory, or only that another behavioral intervention worked? Clearly these are not tests of a theory; they merely demonstrate that, behaviorally speaking, something is better than nothing. This research approach “assumes” the theory by building it into the intervention. Similarly, I have seen several unpublished studies that proposed elaborate formulations of “stages of change” that were reducible empirically to whether someone was performing a behavior or not. The concept of “stages” added nothing to explanatory power, yet the results were reported as confirmation of a stage-based theory.
How might one really test such theories? Theory can be definitively tested only by identifying the principle underlying the theory and conducting an experiment (i.e., a clinical trial) that manipulates that principle. Aside from its humanitarian principles (i.e., people have the right to choose what sorts of medical/behavioral interventions they will receive), the empowerment approach suggests that people will be more likely to benefit from interventions that they want. That is, people will be more likely to change behaviors that they want to change than ones that they do not want to change, so behavioral interventions should focus on the former behaviors. Proponents of this approach might suggest that it be tested by randomizing people to receive either interventions for what they want to change or interventions that have been selected according to the priorities of health care providers.
But behaviorally oriented readers may be thinking, “Wait, that is often the point of studies that purport to test TTM” (simply replace “want to change” above with “ready to change”). Such studies regularly find that people who are ready to change a behavior show more change than those not ready to change (or that people who start at a later stage are more likely to be at a later stage at a subsequent point in time than those who started at an earlier stage). And there is the rub. The principle being tested is not unique to either empowerment or TTM; it has long been a principle of behavioral theory (4) and psychological interventions (5). What, then, is unique about the empowerment view in terms of change in health behavior? One could say (and I hasten to state that I am not speaking for advocates of the empowerment approach here) that the question is whether more change in a patient-chosen domain that is defined as less important by a health care professional (the empowerment perspective) is more beneficial for physical (or mental) health than less change in a domain defined as more important by the health care professional (the traditional perspective). In other words, is there a trade-off in terms of the amount of change relative to the importance of change, and, if so, which factor weighs more heavily? One could also test for order effects: Is it better to work with the patients’ goals first and then move to the health care professionals’ goals (an approach informed by the empowerment perspective), or vice versa (the traditional approach)? Presumably, the former approach would have the benefit of establishing trust in the patient-provider relationship and might lead to change even when the patients did not want it for themselves—a use of referent power—whereas the latter approach, which does not rely on relationship development, would be based on expert power (6).
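To make this concrete, the following is a minimal sketch (in Python, using simulated data and hypothetical variable names of my own choosing) of how the trade-off question might be examined: participants are randomized to work on a patient-chosen or a provider-chosen behavioral target, and both the amount of behavior change and a clinical outcome are compared across arms. It illustrates the logic of the comparison, not a complete analysis plan.

```python
import numpy as np

rng = np.random.default_rng(0)
n_per_arm = 100  # hypothetical sample size per arm

# Simulated data: the "patient-chosen" arm shows more behavior change in a
# domain the provider rates as less important; the "provider-chosen" arm
# shows less change in a domain the provider rates as more important.
behavior_change = {
    "patient_chosen": rng.normal(0.8, 1.0, n_per_arm),   # standardized change
    "provider_chosen": rng.normal(0.4, 1.0, n_per_arm),
}
hba1c_change = {
    "patient_chosen": rng.normal(-0.3, 0.8, n_per_arm),  # change in HbA1c (%)
    "provider_chosen": rng.normal(-0.4, 0.8, n_per_arm),
}

for arm in ("patient_chosen", "provider_chosen"):
    print(f"{arm}: mean behavior change = {behavior_change[arm].mean():.2f}, "
          f"mean HbA1c change = {hba1c_change[arm].mean():.2f}")

# The substantive question: does the arm with more behavior change also show
# the larger clinical benefit, or is there a trade-off between the amount of
# change and the importance of the domain in which the change occurs?
```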
If the above could be considered a test of empowerment predictions, readers may ask, what would be a test of TTM, since readiness to change has already been accounted for? One principle that undergirds TTM is that it is more effective and/or efficient to provide stage-specific interventions than comprehensive interventions. That is, rather than giving an intervention that includes elements appropriate for all stages, one identifies the current stage, provides an intervention customized to that stage, then observes whether the recipient has moved to the next stage and applies the intervention specific to that stage, repeating the cycle until the recipient has reached the final or target stage. Clearly, the staged intervention model is more labor-intensive in terms of assessment, but is this greater cost offset by reductions in unnecessary/useless treatment (improved efficiency) and/or by improvements in treatment outcomes (increased effectiveness)? Answering these questions requires a sophisticated research design, not simply a comparison of staged and unstaged interventions; a more technical description of the required methodology is presented elsewhere (7,8). And, we should note, the answer to the question will likely depend on the nature of the problem to be addressed and the type of intervention to be used.
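As a rough illustration of the efficiency/effectiveness question, here is a minimal sketch (Python, with entirely hypothetical numbers) comparing a stage-matched arm with a comprehensive arm on both outcome improvement and contact time; an actual trial would require the more sophisticated designs described in the cited methodological work (7,8).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 150  # hypothetical participants per arm

# Hypothetical trial data: "contact_hours" stands in for assessment plus
# intervention burden (cost); "outcome" is improvement on a target measure
# (effectiveness).
staged = {
    "contact_hours": rng.normal(6.0, 1.5, n),
    "outcome": rng.normal(0.55, 1.0, n),
}
comprehensive = {
    "contact_hours": rng.normal(9.0, 1.5, n),
    "outcome": rng.normal(0.50, 1.0, n),
}

d_outcome = staged["outcome"].mean() - comprehensive["outcome"].mean()
d_cost = staged["contact_hours"].mean() - comprehensive["contact_hours"].mean()

print(f"Difference in outcome (staged - comprehensive): {d_outcome:.2f}")
print(f"Difference in contact hours (staged - comprehensive): {d_cost:.2f}")
# If the staged arm achieves comparable or better outcomes with fewer contact
# hours, the extra assessment burden may be offset by improved efficiency.
```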
Theory testing requires a shift in orientation from a focus on intervention techniques per se to a focus on theoretical principles. From a theory-testing perspective, intervention techniques are simply ways to manipulate a hypothesized causal factor; if the hypothesis is that changing beliefs will change behavior, then we must employ a technique for changing beliefs and test whether doing so produces behavior change. Although several different techniques might be used to change behavior, they may all embody the same theoretical principle linking beliefs and behavior. An intervention is a technique like other research techniques (experimental designs, measurement tools, etc.), and differences in techniques do not in themselves operationalize different theoretical principles. Thus, we should avoid interventions that throw in everything but the proverbial kitchen sink in favor of carefully chosen variations on a very limited number of principles, preferably crossed in a factorial design representing all possible combinations of the relevant variations.
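As an illustration of such a factorial layout, the following sketch (Python; the factor names are hypothetical, not a recommended design) enumerates all combinations of a small number of principle-based manipulations and allocates participants evenly across the resulting cells.

```python
import itertools
import random

# Two theoretical principles, each manipulated at two levels; the full
# factorial crosses every combination so that each principle's effect (and
# the interaction between them) can be estimated separately.
factors = {
    "belief_change_module": ("absent", "present"),
    "target_selection": ("provider_selected", "patient_selected"),
}

cells = list(itertools.product(*factors.values()))
print("Design cells:", cells)

# Evenly allocate hypothetical participants across the four cells.
random.seed(0)
participants = [f"P{i:03d}" for i in range(1, 21)]
random.shuffle(participants)
allocation = {p: cells[i % len(cells)] for i, p in enumerate(participants)}
for p, cell in sorted(allocation.items())[:5]:
    print(p, dict(zip(factors.keys(), cell)))
```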
Causal modeling and statistical analysis.
Using theory is not necessarily limited to formal theory testing. Theory can also take the form of study-specific conceptual models that inform and guide the analysis. For example, when a behavioral intervention produces an effect, we should try to determine why (9). The Diabetes Control and Complications Trial (DCCT) demonstrated that intensive treatment (a behavioral strategy) was associated with reduced complications (10). But the real hypothesis was that improved glycemia was the mechanism by which this effect was produced, so the researchers conducted an analysis to determine whether improved glycemic control (an effect of intensive treatment) was associated with reduced risk of complications.
Although randomized controlled trials (RCTs) are the gold standard for testing causal hypotheses, it is also possible to perform preliminary tests without them. For example, if we want to know how a behavioral intervention produces changes in glycemic control, we can conduct analyses to identify mediators of the observed outcomes by asking: Are improvements in glycemia a result of healthier eating, more exercise, or better medication self-management? Results of analyses of this outcome as well as other health outcomes have been published (11,12), and the statistical techniques for answering this type of question have been discussed elsewhere (13). Other related questions include: Are improvements in factors such as self-efficacy, knowledge, intentions, social support, and psychological functioning important mechanisms for producing behavioral change? At present, we know that most interventions work, but we do not know the keys to designing effective interventions, and we will not know them until we identify the critical mechanisms.
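For readers who want to see the logic of a mediator analysis in miniature, here is a hedged sketch (Python with statsmodels, simulated data, hypothetical variable names) of a regression-based mediation analysis: the product of the intervention-to-mediator and mediator-to-outcome coefficients estimates the indirect effect. A real analysis would follow the techniques cited above (13) and add, at minimum, confidence intervals for the indirect effect.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300  # hypothetical sample size

# Simulated data under a simple mediation structure: the intervention
# improves diet, and diet (rather than the intervention directly) improves
# glycemic control.
treat = rng.integers(0, 2, n)                     # 1 = behavioral intervention
diet = 0.5 * treat + rng.normal(0, 1, n)          # hypothetical mediator
hba1c_change = -0.6 * diet + rng.normal(0, 1, n)  # hypothetical outcome

# a-path: intervention -> mediator
a = sm.OLS(diet, sm.add_constant(treat)).fit().params[1]

# b-path and direct effect: mediator -> outcome, adjusting for intervention
X = sm.add_constant(np.column_stack([treat, diet]))
fit = sm.OLS(hba1c_change, X).fit()
direct, b = fit.params[1], fit.params[2]

print(f"indirect (mediated) effect = a*b = {a * b:.3f}")
print(f"direct effect of intervention = {direct:.3f}")
```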
In general, researchers should formulate hypotheses regarding the linkages between cause and effect (e.g., intervention and outcome). When the relationship between two factors is hypothesized to be the result of a causal chain, researchers should measure the factors that represent these linkages and conduct analyses to determine which are involved. It is not enough to say that the results might have been due to this or that factor; it is the researcher’s responsibility to determine how the data answer the question of which linkages produced the relationship. Although the analytical techniques are more sophisticated than simple pre-post group comparisons, they are well within the range of standard statistical applications.
Implications.
Assumptions about what constitutes “good” theory, measurement, or practice share much in common. A theory, measure, or technique becomes popular, and its adoption is driven by social processes that have little to do with the accumulation of scientific knowledge (14); a theory sounds good, a method seems impressive, a practice feels right. But we need to celebrate our own Columbuses and Galileos, people who challenge our scientific beliefs. For example, I personally believe that we should provide culturally appropriate/competent/sensitive interventions, but do they actually produce better outcomes? That is a scientific question, and we should not be afraid to ask it simply because we may prefer such interventions on humanitarian grounds (e.g., respect for human dignity) regardless of what science says. So I would advocate testing a standard approach against one that is identical in intensity and topics and varies only in its cultural content, rather than stacking the deck against the standard approach by adding coverage and intensity to the culturally modified program, which biases the results in its favor. None of us should feel pressured to stretch our interpretations to justify humanitarian action, nor should we feel that we cannot question the scientific merit of a humanitarian proposal without becoming a professional outcast. More generally, as behavioral scientists we should recognize that, like all human beings, we want to find support for our cherished beliefs so that they can be accepted and we can be praised and admired. We must be willing to question our own favorite theories, methods, and practices (including those in which we have a personal stake as originator or developer) if we are to do the best science possible. In other words, we should take seriously the idea of trying to prove our own theories wrong, and accept a theory only when we cannot disprove it.
Like the author of the earlier Diabetes Care editorial cited above, I believe that much has yet to be done. It should be clear, however, that I believe it can and should be done. These are some of the most important challenges facing behavioral diabetes researchers. We need to rise to the occasion if we are to maximize our contributions to scientific knowledge in this field and our impact on the health and well-being of people with diabetes and their significant others.
References
Address correspondence to Mark F. Peyrot, Loyola College, Beatty Hall, Room 311, 4501 N. Charles St., Baltimore, MD 21210. E-mail: [email protected].