Is the Placebo Effect Dangerous?



Physician and medical gadfly Ben Goldacre is well known for his relentless crusade to keep medical researchers and drug makers honest -- and to improve healing in the process. His recent and popular TEDTalk focuses on a particular form of research misconduct that strikes at the core of all evidence-based treatment -- the failure to publish negative findings. In most cases this publication "bias" is neither subtle nor inadvertent; quite the opposite. The deliberate non-reporting of results unfavorable to a drug's reputation is often motivated by greed, and it can be lethal to patients.

As Goldacre and others have described elsewhere, other clinical research biases are less blatant and criminal, but they nevertheless undermine consumers' trust in science and clinical evidence. I'd like to discuss one of those less obvious biases here today -- this one from psychological science. It's the result of a fundamental misunderstanding of placebo effects and control groups -- a misunderstanding that, scientists are now arguing, invalidates any claims of effectiveness for almost all psychological interventions.

The scientists making this claim are Walter Boot, Cary Stothart and Cassie Stutts of Florida State University, and Daniel Simons of the University of Illinois. They describe their argument -- and supporting evidence -- in a paper to be published this year in the journal Perspectives on Psychological Science. Here's the gist of what they're saying:

Psychological interventions -- ranging from mental training to psychotherapy -- must be proven effective, and this is typically done by comparing people getting the intervention to others who are not, called controls. The purpose of the control group is to account for improvements that might have happened even without the treatment -- the well-known placebo effect. When drugs are the treatment in question, the control subjects receive a sugar pill that is identical to the drug being tested, so that the subjects cannot tell which group they are in. The subjects are said to be "blind" to their condition, and as a result they all have the same expectations for improvement. This design is considered the gold standard for clinical study: any differences in outcomes can rightly be attributed to the treatment itself.
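To make that logic concrete, here is a minimal sketch -- my own illustration, not anything from the studies discussed -- of how a blinded, placebo-controlled comparison isolates a treatment effect. The group sizes, effect sizes and the simple additive model are all invented for the example.

```python
# A minimal sketch (illustrative assumptions only) of the logic behind a
# blinded, placebo-controlled comparison: simulate outcomes for a treatment
# group and a control group, then estimate the treatment effect as the
# difference in mean improvement.
import random

random.seed(42)

def simulate_group(n, true_effect, placebo_effect, noise=1.0):
    """Improvement = real effect of what the group received
    + placebo response (driven by expectations) + random noise."""
    return [true_effect + placebo_effect + random.gauss(0, noise)
            for _ in range(n)]

# Blinded trial: both groups expect improvement equally, so the placebo
# component is the same in each and cancels out of the comparison.
treatment = simulate_group(100, true_effect=0.5, placebo_effect=0.8)
control   = simulate_group(100, true_effect=0.0, placebo_effect=0.8)

estimated_effect = sum(treatment) / len(treatment) - sum(control) / len(control)
print(f"Estimated treatment effect: {estimated_effect:.2f}")  # close to the true 0.5
```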

Expectations are the key here, and according to these scientists, psychological interventions face a much more difficult challenge in properly accounting for placebo effects. Volunteers typically know which treatment they have received. If they are receiving psychotherapy for anxiety, for instance, they know they are being treated -- and probably expect to get better as a result. Comparing these experimental subjects to controls who receive no treatment is therefore inadequate: their differing expectations alone could skew the results -- and make any claims about cause and effect invalid.

So everything I have said so far is textbook psychological science, and I apologize if it's basic. But here's where it gets interesting. To deal with this problem of expectations, most scientists take another step. They create an "active" control group -- that is, a group of volunteers who receive a similar therapy, but not the one that specifically targets their anxiety. In this way, they believe, the design controls for placebo effects, allowing scientists to make claims about an intervention's effectiveness. And indeed, published papers routinely make such claims.


But what if the active control group is also inadequate? That's what these scientists are claiming: Even active controls, they argue, do not adequately account for crucial differences in expectations. Their language is strong, so I'll quote directly: "This failure to control for the confounding effect of differential expectations is not a minor omission -- it is a fundamental design flaw that potentially undermines any causal inference." Not to put too fine a point on it, but this means that any conclusions about psychological interventions -- in mental health, education, cognitive improvement -- are suspect.
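To see why this matters, here is a second illustrative sketch -- again my own, with invented numbers and a deliberately simplified assumption that the placebo response scales with expected improvement. When the intervention group expects more improvement than the active control group, the expectation gap by itself can masquerade as a treatment effect.

```python
# A sketch (illustrative assumptions only) of the confound the authors
# describe: if the "active" control group expects less improvement than the
# intervention group, the expectation gap alone can look like a treatment
# effect, even when the intervention itself does nothing.
import random

random.seed(0)

def simulate_group(n, true_effect, expected_gain, noise=1.0):
    # Assume the placebo response scales with how much improvement
    # participants expect (a simplifying assumption, not a measured fact).
    placebo_response = 0.6 * expected_gain
    return [true_effect + placebo_response + random.gauss(0, noise)
            for _ in range(n)]

# The intervention has zero real effect, but its recipients expect more.
intervention   = simulate_group(100, true_effect=0.0, expected_gain=1.5)
active_control = simulate_group(100, true_effect=0.0, expected_gain=0.5)

spurious_effect = (sum(intervention) / len(intervention)
                   - sum(active_control) / len(active_control))
print(f"Apparent 'treatment' effect with no real effect: {spurious_effect:.2f}")
```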

The scientists offer evidence to back up this sweeping indictment. They studied in detail the claim that action video game training enhances perceptual and cognitive abilities. They picked this kind of intervention not because it is a particularly egregious example of poor design, but because it is better than most. Even so, they say, it is not good enough. Here is a brief description of their case study:

Previous studies have shown that volunteers who train for 10 to 50 hours on a fast-paced, visually demanding action video game show improvement on measures of visual processing, attention and task-switching. The active controls in these studies usually played a slower-paced, non-action game -- Tetris, for example -- for the same amount of time. The assumption is that this control condition is close enough to the intervention in question that all the volunteers have comparable expectations. But here's the problem: None of these studies has actually tested whether participants playing Tetris expect to improve, and if so how much, and on what skills.

So these scientists did measure expectations, in two surveys. Participants first watched a short video of either an action game -- Unreal Tournament -- or a control game like Tetris. Then they learned about the cognitive and perceptual tests used to measure performance in these studies. For each skill, they indicated whether they believed their performance would improve as a result of training on the game they had seen. If Tetris were an adequate placebo control, then participants who viewed it should expect the same improvement on each measure as those who viewed Unreal Tournament.
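For readers who like to see the shape of the analysis, here is a sketch of the kind of per-measure comparison such surveys support. The ratings below are hypothetical placeholders, not data from the Florida State and Illinois study; the point is only that an adequate placebo control would show roughly equal expectations on every outcome measure.

```python
# A sketch of the survey comparison (hypothetical ratings, not the study's
# data): for each outcome measure, compare the average expected improvement
# reported after viewing the action game versus the control game.
from statistics import mean

# Hypothetical 1-5 ratings of expected improvement, grouped by measure.
expectations = {
    "attention": {
        "Unreal Tournament": [4, 5, 4, 4, 5],
        "Tetris":            [2, 3, 2, 2, 3],
    },
    "mental_rotation": {
        "Unreal Tournament": [3, 3, 4, 3, 3],
        "Tetris":            [4, 4, 3, 4, 4],
    },
}

for measure, groups in expectations.items():
    gap = mean(groups["Unreal Tournament"]) - mean(groups["Tetris"])
    print(f"{measure}: expectation gap = {gap:+.2f}")
```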

They did not, according to the surveys. Those who viewed the fast-paced game had much greater expectations for improvement on cognitive and perceptual skills than did those who viewed Tetris. Or to put it another way, the Tetris viewers had no illusions that Tetris would hone the same skills as Unreal Tournament, although they did expect (correctly) to improve on mental rotation.

This goes far beyond video games, the scientists insist. The placebo problem is "pernicious and pervasive," they write, and it calls into question all sorts of psychological interventions, from "brain fitness programs" to mental health interventions, including journaling and Internet-based psychotherapy.

