
Can We Fix the Problem of Publication Bias?


Ben Goldacre's lucid and impassioned analysis of selective publication in clinical drug trials concludes by declaring that publication bias "is not a difficult problem to fix... We need to force people to publish all trials conducted in humans... for all drugs in current use."

I was in near full agreement with Goldacre's analysis until he asserted that the fix was "not difficult." He based his optimism on what he saw as recent developments in understanding publication bias -- developments that would allow the scientific community to control the problem through rational and fair-minded action.

But here's the rub. The problem of publication bias was well understood more than 40 years ago. And many cures, proposed and tried long ago, have not fixed it. What Goldacre does not take into account is the many ways in which, like most other humans, even the most rational scientists operate outside the constraints of their rational analysis. (Mahzarin Banaji and I have presented the extensive support for this view in our recent book, Blindspot: Hidden Biases of Good People.)

In his talk, Goldacre actually provided several key indications of scientists deviating from the dictates of rational operation. Ask researchers whether it is wise to register clinical trials in advance of conducting them. As Goldacre pointed out, there is general agreement on the wisdom of doing so, and there are even regulations requiring it for clinical drug trials. However, when such registries were made available, many clinical researchers "did not bother to use those registers." Ask journal editors whether they should decline to publish reports of unregistered trials. Again, the response will be general agreement. Yet the ICMJE editors who publicly declared their intent to publish only advance-registered trials nevertheless published many trials that had not been advance-registered. Ask researchers whether it is wise to publish all clinical trials regardless of their outcomes -- again, the response will be general agreement. Yet researchers whose results are "unflattering" to the tested drug continue to assume that readers and editors alike will be uninterested in those reports, and so they do not bother to submit them for publication, even though the journal Trials has declared a policy of publishing all trials, regardless of outcome.


My own recommendations for publication bias remedies, in 1976, had a similar fate. They were officially approved by the American Psychological Association (APA) and were widely supported by researchers who read about them when they were announced in an editorial in the Journal of Personality and Social Psychology. But within four years APA decided to terminate my program of remedies for publication bias, persuaded by researchers who wished to publish in the journal that the remedies were (a) not needed and (b) interfering with the publication process. I often heard in subsequent years that my 1976 remedies were "wise before their time" and could deal with publication bias problems that are once again coming to wide attention. But I know that my previously failed remedies will succeed no better now than they did in 1976.

Why is publication bias so remarkably difficult to fix? It's because all actors in the research-publication system have reasons (not necessarily commendable ones) for preferring the present flawed system. Researchers do not like the extra work of formulating and registering predictions in advance, and they continue to resist publishing their unpredicted ("negative") results. (No matter that doing so might make them appear noble; the concern that it will make them look either foolish or incompetent dominates.) Likewise, editors don't want to publish failures to find predicted effects, which they will often dismiss as inconclusive, unimportant, or both. And journal publishers, too, are unhappy with publishing "failures," which they know will not enhance their journal's "impact factor" -- a measure that depends on scientists frequently citing the articles published in the journal.

Some good news: A possible solution is now under active development. The insight driving the new method is that the statistical tests reported in any publication provide an evidence trail that can be analyzed to reveal publication bias. The leading pioneer of this approach is currently Uri Simonsohn, a multidisciplinary psychologist at the University of Pennsylvania. Simonsohn calls the method "p-curve analysis." The method requires only the assumption that, in publishing, researchers will not suppress their "statistically significant" results. The remaining challenge is to develop analysis methods that will most effectively read the trail left by published "significant" results.
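To make the logic concrete, here is a minimal sketch in Python of a p-curve-style check. It is an illustration of the idea, not Simonsohn's published procedure, and the function name and example p-values are hypothetical. The premise is the one stated above: if researchers never suppress significant results, then under a true null effect the significant p-values should be spread evenly between 0 and .05, whereas a genuine effect piles them up near 0.

```python
# A minimal p-curve-style sketch (an illustration, not Simonsohn's exact
# procedure). Premise, taken from the text: researchers do not suppress
# statistically significant results. Under a true null, each significant
# p-value is then uniformly distributed between 0 and .05.

import math
from scipy import stats

def p_curve_test(p_values, alpha=0.05):
    """Test whether significant p-values pile up near 0 (right skew).

    Each significant p is rescaled to pp = p / alpha, which is uniform on
    (0, 1) under the null. Fisher's method then combines the pp-values:
    -2 * sum(ln(pp)) follows a chi-square distribution with 2k degrees of
    freedom. A small combined p-value indicates right skew, i.e. the
    published results carry evidential value; a large one is consistent
    with selective publication of chance findings.
    """
    sig = [p for p in p_values if 0 < p < alpha]  # keep only "significant" results
    if not sig:
        raise ValueError("no significant p-values to analyze")
    chi2 = -2.0 * sum(math.log(p / alpha) for p in sig)
    combined_p = stats.chi2.sf(chi2, df=2 * len(sig))
    return chi2, combined_p

# Hypothetical p-values harvested from a set of published reports:
published = [0.041, 0.048, 0.032, 0.049, 0.044, 0.12, 0.038]
chi2, p = p_curve_test(published)
print(f"chi2 = {chi2:.2f}, combined p = {p:.3f}")
```

In this made-up example, the significant p-values cluster just below .05, so the combined p-value comes out large: the flat or left-leaning curve that suggests selective publication or p-hacking rather than a real effect.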

A key virtue of the new approach: It does not depend on cooperative, collective action by authors, editors, and publishers. It can be carried out by small groups or even individual researchers -- functioning, in effect, as scientific posses or vigilantes. The new approach will vindicate some researchers (I'd love to be one of those) and embarrass others. Like all statistical methods, it will be imperfect and will need to be used with care and caution. I expect that it will be far better at detecting untruth than the polygraph, even if it won't reach the confidence provided by competent uses of DNA evidence. Publication bias may have met its match.
