Ever since word got out that Facebook had briefly manipulated some users' News Feeds to see how their feelings changed, a number of questions have popped up: just why did the company feel compelled to experiment in the first place? How noticeable was it? And was it worth the effort? As of today, we have some answers to those riddles. Study co-author Adam Kramer explains that Facebook was worried people would stop visiting the social network if they saw too many emotional updates -- a lot of negative posts could scare some people off, while a surge of positive vibes would leave others feeling left out. That's not what happened, however, and Kramer stresses that the company "never [meant] to upset anyone."
He also suggests that Facebook won't repeat history any time soon. The results of the circa-2012 field test may not have justified the "anxiety" that followed two years later, he says. Facebook has also been refining its "internal review practices" ever since, and it's taking the public's current response into account. Kramer doesn't say whether or not similar experiments will take place again, but it's clear that the company will tread more carefully if they do. As it stands, the data showed only just enough of a change to suggest that the altered News Feeds had an effect at all.
While those are reassuring tidbits, they aren't going to satisfy everyone. There are calls for Facebook to meet scientific ethics standards when conducting research like this; critics argue that it should at least tell users they were part of an experiment, and ideally offer a chance to opt out ahead of time. Facebook assumed that it had permission because of a broad "research" clause in its Data Use Policy, but that clause is meant to cover product improvement, not academic studies. While the project did little, if any, harm, there's a worry that this lack of explicit consent could lead to real psychological damage should future tests get any more aggressive.
[Image credit: AP Photo/Jeff Chiu]