Huffpost Business


Edward W. Felten

Facebook's Emotional Manipulation Study: When Ethical Worlds Collide


The research community is buzzing about the ethics of Facebook's now-famous experiment in which it manipulated the emotional content of users' news feeds to see how that would affect users' activity on the site. (The paper, by Adam Kramer of Facebook, Jamie Guillory of UCSF, and Jeffrey Hancock of Cornell, appeared in Proceedings of the National Academy of Sciences.)

The main dispute seems to be between people such as James Grimmelmann and Zeynep Tufekci, who see this as a clear violation of research ethics, and people such as Tal Yarkoni, who see it as consistent with ordinary practice for a big online company like Facebook.

One explanation for the controversy is the large gap between the ethical standards of industry practice and the research community's ethical standards for human-subjects studies.

Industry practice allows pretty much any use of data within a company, and infers consent from a brief mention of "research" in a huge terms of use document whose existence is known to the user. (UPDATE (8:30pm EDT, June 30, 2014): Kashmir Hill noticed that Facebook's terms of use did not say anything about "research" at the time the study was done, although they do today.) Users voluntarily give their data to Facebook, and Facebook is free to design and operate its service any way it likes, unless it violates its privacy policy or terms of use.

The research community's ethics rules are much more stringent, and got that way because of terrible abuses in the past. They put the human subject at the center of the ethical equation, requiring specific, fully informed consent from the subject, and giving the subject the right to opt out of the study at any point without consequence. If there is any risk of harm to the subject, it is the subject and not the researchers who gets to decide whether the risks are justified by the potential benefits to human knowledge. If there is a close call as to whether a risk is real or worth worrying about, that call is for the subject to make.

Facebook's actions were justified according to the industry ethical standards, but they were clearly inconsistent with the research community's ethical standards. For example, there was no specific consent for participation in the study, no specific opt-out, and subjects were not informed of the potential harms they might suffer. The study set out to see if certain actions would make subjects unhappy, thereby creating a risk of making subjects unhappy, which is a risk of harm -- a risk that is real enough to justify informing subjects about it.

The lead author of the study, Adam Kramer, who works for Facebook, wrote a statement (on Facebook, naturally, but quoted in Kashmir Hill's article) explaining his justification for the study.

The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product. We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out. At the same time, we were concerned that exposure to friends' negativity might lead people to avoid visiting Facebook. We didn't clearly state our motivations in the paper.
The goal of all of our research at Facebook is to learn how to provide a better service. Having written and designed this experiment myself, I can tell you that our goal was never to upset anyone. I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused. In hindsight, the research benefits of the paper may not have justified all of this anxiety.

This misses the point of the objections. He justifies the research by saying that the authors' intention was to help users and improve Facebook products, and he expresses regret for not explaining that clearly enough in the paper. But the core of the objection to the research is that the researchers should not have been the ones deciding whether those benefits justified exposing subjects to the experiment's possible side-effects.

The gap between industry and research ethics frameworks won't disappear, and it will continue to cause trouble. There are at least two problems it might cause. First, it could drive a wedge between company researchers and the outside research community, where company researchers have trouble finding collaborators or publishing their work because they fail to meet research-community ethics standards. Second, it could lead to "IRB laundering," where academic researchers evade formal ethics-review processes by collaborating with corporate researchers who do experiments and collect data within a company where ethics review processes are looser.

Will this lead to a useful conversation about how to draw ethical lines that make sense across the full spectrum from research to practice? Maybe. It might also breathe some life into the discussion about the implications of this kind of manipulation outside of the research setting. Both are conversations worth having.

UPDATE (3 p.m. EDT, June 30, 2014): Cornell issued a carefully worded statement (compare to earlier press release) that makes this look like a case of "IRB laundering": Cornell researchers had "initial discussions" with Facebook, Facebook manipulated feeds and gathered the data, then Cornell helped analyze the data. No need for Cornell to worry about the ethics of the manipulation/collection phase, because "the research was done independently by Facebook."

Edward W. Felten is the Robert E. Kahn Professor of Computer Science and Public Affairs at Princeton University, and the founding Director of Princeton's Center for Information Technology Policy (CITP).

This piece first appeared on Freedom to Tinker, a blog hosted by CITP.