Research Funding: When Is the Money Dirty?

Perhaps the most riveting moment in the documentary Fed Up is a moment of silence. Katie Couric peppers Dr. David Allison of the University of Alabama with a series of less-than-friendly questions about his industry-funded research related to nutrition and obesity. The usually articulate Dr. Allison fumfers his way through a few truncated responses, and then -- backed into a rhetorical corner by one of the world's most experienced interviewers -- pauses to think. The camera fixes on him in his silence, and then cuts away before he provides an answer.

The effect is obviously just as intended in a film that (for the most part, justifiably) implicates Big Food in our weight-related woes: The impression is conveyed that there is no good answer, and certainly Dr. Allison has none. But actually, neither of these is true.

As for Dr. Allison, whose credentials speak pretty eloquently for themselves and whose peer-reviewed publications run into the rarefied and enviable neighborhood of 500, he has addressed the issue very directly in some of those very publications. The case has been made by others that industry-funded research tends to bias the findings reported in favor of the funder. Dr. Allison, an expert in biostatistics among other things, has formally analyzed the methodologic details and reporting of comparable studies in comparable journals with differing funding sources and found no difference. He has also shown very important sources of bias and distortion in the scientific literature unrelated to funding, including the general tendency to hyperbolize, and to let the cart of prior conviction run ahead of the horse of hypothesis testing.

But Dr. Allison's various rebuttals notwithstanding, Ms. Couric was posing legitimate questions. Don't private funders want studies to produce a particular outcome? Doesn't that introduce bias?

The answers to both of these are unequivocally yes and yes. But what is easy to overlook is that the same is true of all funders, and all researchers. I am happy to use myself as an example.

The core funds for my research lab come from the CDC. Funding for a number of our studies over the years has come from various federal agencies, including the NIH, CDC, HRSA, AHRQ, and USDA -- to provide a representative list. We have also run studies funded by private foundations; some funded principally by no-strings-attached philanthropy; and yes, some funded by industry -- including pharmaceutical companies, nutraceutical companies, and food companies.

The confession here is that I have been comparably biased every time. Bias simply implies an a priori preference -- the hope for, and perhaps expectation of, a particular outcome. So here's the thing: Why would any researcher waste time doing research (which is generally quite tedious and taxing) if he or she did NOT hope for a particular outcome? I always do.

So, too, do all funders. While the NIH does not generally manufacture and sell the interventions it studies, it certainly does care about the outcomes. NIH, too, must justify its existence and its budget -- just not to shareholders. NIH and all federal agencies are accountable to Congress, and by extension to us, in our tax-paying multitudes. NIH competes in the federal budget with other societal priorities (and, no doubt, pork-barrel boondoggles); and perhaps more intensely, the various institutes compete with one another for slices of the common pie. Too many negative study results suggest that an institute is not spending its money well and wisely -- and can affect the outcome of that competition. Even NIH program officers are biased about study outcomes.

Perhaps industry funders are more biased; I'm not sure. But either way, the difference is one of degree, not kind. All research starts with biased funders and researchers -- because in the absence of such bias, it would be research no one would bother doing. I don't think anyone runs studies in the absence of hopes and preferences pertaining to the outcomes.

But this line of reasoning is, of course, anything but reassuring. Instead of refuting Katie Couric's apparent concerns, it suggests they pertain more universally. If all research starts with bias, can we trust any of it?

Yes, of course -- especially over time and in the aggregate, since time and daylight are great cultivators of truth, and the gradual accumulation of evidence pretty reliably tips toward the actual answer. But there is a case for getting past bias and trusting research even in the short term.

Since studies start with the biases of those conducting them, one of the crucial functions of good research methods is to defend against that bias influencing the outcomes. This is the very function of "double blinding," a method applied so that neither study participants nor researchers know who got what until after the data are analyzed. We may reasonably omit a weedy consideration of research methods here, but suffice it to say there are good reasons for randomization and placebo controls, too. Robust research methods defend quite well against biased outcomes.

There is still the opportunity for biased interpretation of those outcomes. There are several relevant defenses against this. The first and perhaps most important is peer review. All of the high-quality medical journals disseminate our submissions to an anonymous jury of our peers. Our publications, imperfect though they and the process may be, must run that gauntlet.

Second, we routinely beat up on one another's papers post-publication. I have certainly seen my own papers critiqued, and when done well I have only appreciated it, as part of the milling process that separates wheat from chaff. As a medical journalist, I consider critiquing the work of my colleagues an important part of my job.

For instance, the relevance of industry funding was quite clear to me in a recent study of diet soda and weight loss. The study methods were sound, but entry into the study was rigged in favor of the a priori bias: The study was limited to habitual diet soda drinkers. While there are good answers to Katie Couric's questions, they are good questions just the same -- and this study is one of the many to show why.

Finally, there is transparency. All researchers are generally obligated to report both funding sources and any potential conflicts of interest. Here, too, daylight is the best disinfectant. But just as antibiotics kill not only bad germs but good, an undue focus on funding source can falsely castigate meaningful data. Carefully conducted, methodologically rigorous research funded by industry can lead to reliable conclusions. Distortions resulting from NIH-funded research can do the opposite, at times with lethal ramifications, as colleagues and I addressed in a recent analysis of hormone replacement therapies.

Money circulates, passing through many hands. Hands are not reliably washed as often as they should be. Maybe all money is dirty to one degree or another, just as all research is biased.

On the other hand, we should consider what avoidance of industry-funded research would yield. I have noted how want of money, and of a patent, militated against the recognition of coenzyme Q-10's utility in congestive heart failure for more than a decade. Avoidance of pharmaceutical industry funding would have done nothing to speed up this process; it would merely have brought the studies of carvedilol down to the same snail's pace. Lack of industry-funded research means less research, and slower progress. That's not the prize we are after.

We just can't be naïve. The same drug companies that develop products and fund research that saves lives also have a history of veiling the data they don't want us to see, and selling drugs long after they should stop. Caveat emptor clearly pertains.

We may hope to get to reliable truths in spite of such obstacles if we impose the proper set of high standards. Here's my short list:

1) funding sources and conflicts of interest (real or potential) should be reported and entirely transparent

2) study methods should be robust, which often but not always means: randomized, double-blind, and placebo-controlled

3) publications should be peer-reviewed

4) scientists should, as they do, critique one another's work post-publication, with the public looking on

5) we should rely preferentially on the overall weight of cumulative evidence -- ideally derived from diverse labs, diverse funding sources, and diverse methods -- rather than any given study, on any particular topic

I should note that Dr. Allison and I do draw the line in different places. He has funding from some companies I consider particularly implicated in the obesity epidemic, which I would be unwilling to accept. But just where to draw a line on any continuum is challenging, and subjective. We both agree that the lines should be on public display.

After watching Fed Up and wincing at Dr. Allison's interview, I asked him about it. He told me he spoke to Ms. Couric for the better part of 90 minutes, so those several seconds of silence were a small fragment of the discussion they had. Ms. Couric was posing legitimate questions, but there are better answers than a moment of silence.

-fin

Dr. David L. Katz is the founding director of Yale University's Prevention Research Center at Griffin Hospital in Derby, CT. He has been conducting and publishing clinical research for roughly 20 years. He has co-authored several textbooks on research methods.
