6 Disturbing Things About the Facebook “Feelings” Study

By now, you’ve probably heard of the infamous Facebook “feelings study,” in which researchers manipulated the news feeds of site users to determine whether the content displayed affected their emotions. In a study involving almost 700,000 people during a single week in January 2012, the researchers used a text-scanning algorithm to curate feeds, providing some users with “neutral” feeds containing an even mix of posts, some with “negative” feeds featuring more negative posts, and others with “positive” feeds containing more positive posts. They determined that those seeing neutral material tended to post less overall, while those seeing negative feeds posted more negative things, and those seeing positive feeds increased the positivity of their postings.

When news of the study broke in June, not everyone was happy about it: privacy advocates, the online scientific community, and ethicists all raised objections. So…what exactly are the problems with the Facebook study?

1. Informed Consent

In order to perform scientific research involving human beings, research institutions and labs receiving federal funding have to obtain what’s known as “informed consent.” Participants need to be told about the potential risks and benefits of a study, offered a chance to opt out and given information about what kind of help will be available during the study, should they need it. There are cases in which informed consent could skew the results of a study, in which case such requirements may be waived or a team may come up with a very carefully structured informed consent form, but participants do, in fact, need to be informed.

When news about the study broke, it initially appeared to be subject to these guidelines, due to erroneous reporting that it had received federal funding. This is not actually the case — it was based entirely on data from within Facebook, generated using Facebook funds. However, both Cornell and UCSF were involved in the study. Given that they routinely receive public funds, they might have preferred to err on the side of caution and require informed consent, but they didn’t.

The researchers claim that participants agreed to become research subjects when they signed up, courtesy of a clause in the Facebook terms of service. This would not meet the standards of informed consent by most definitions, and besides, there was one small problem: the clause in question wasn’t added until four months after the study took place.

2. IRB Approval

You may have heard the term “IRB” thrown around, too. Institutional Review Boards act as ethics committees that are required to review proposals for publicly funded research involving human subjects. Their job is to take a look at the proposed study and methodology. They have to determine whether the study offers a clear benefit, whether it poses a risk to participants, and whether the designers have taken reasonable steps to reduce risk. In studies like this one, where revealing the full nature of the study to participants could have biased the results, an IRB could have discussed ways of obtaining informed consent and protecting participants.

In this case, the study was initially reported as having been approved by an IRB, then as having received no review at all, and finally as having been vetted by an “internal ethics board” at Facebook. While it might not have needed formal IRB approval, seeking it would have been a good idea.

3. The Study Might Have Harmed People

Researchers tend to get particularly concerned when they’re working with what are known as “vulnerable populations,” including children, disabled people, prisoners and other marginalized groups, depending on the context. It’s unclear whether users in these groups were involved in the study, as the methodology doesn’t discuss this, and this is a cause for concern. Such groups require special consideration because of a history of poor research ethics, and because of increased risks.

For example, depressed teens seeing negative feeds could experience more circumstantial depression, which could exacerbate their mental illness. They might simply have a poor week, but they could also experience setbacks in recovery or more serious effects.

4. Did the Study Even Measure What the Researchers Say It Did?

In order to preserve confidentiality, the researchers note, they didn’t actually look at the updates they were manipulating. Sounds good, but wait: the text algorithm they used wasn’t designed for this kind of work, nor could it account for the nuances of casual communication. Think about these phrases: “I’m not having a great day.” “I’m not not having a party, I just want to keep it small.” “Good grief, that man!” In the first case, the algorithm would return a neutral result (-1 for “not” and +1 for “great”), even though the comment is actually negative. In the second, the double negative would outweigh the fact that having a party is usually a positive thing. “Good” in the third might make it sound positive, but the poster is probably mad at her boss, which isn’t so positive.
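To see how easily this kind of scoring goes wrong, here’s a minimal sketch of a dictionary-based word counter in the spirit of the text algorithm described above. The tiny word lists and +1/-1 scoring scheme are invented for illustration; the study’s actual tool used far larger dictionaries.

```python
# Minimal sketch of dictionary-based sentiment scoring: each word is
# checked against fixed positive/negative word lists and scored +1/-1.
# These word lists are illustrative only, not real sentiment dictionaries.
POSITIVE = {"great", "good", "party", "happy", "love"}
NEGATIVE = {"not", "bad", "sad", "hate"}

def naive_sentiment(text: str) -> int:
    """Sum +1 for each positive word and -1 for each negative word."""
    score = 0
    for token in text.lower().split():
        word = token.strip(".,!?\"'")  # drop surrounding punctuation
        if word in POSITIVE:
            score += 1
        elif word in NEGATIVE:
            score -= 1
    return score

# The phrases from the article, scored naively:
print(naive_sentiment("I'm not having a great day."))  # 0: reads as neutral, is negative
print(naive_sentiment("I'm not not having a party, I just want to keep it small."))  # -1: double negative misread
print(naive_sentiment("Good grief, that man!"))  # 1: reads as positive, isn't
```

A counter like this has no notion of negation scope, sarcasm, or idiom; it simply tallies word hits, which is exactly why the three examples come out wrong.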

Because the researchers didn’t look at the data or control it well, they don’t actually know whether they measured what they thought they were measuring. Some critics have described the outcome of the research as a “statistical blip,” arguing that the results aren’t meaningful enough to draw any strong conclusions.

5. How Are Corporations Using Data About You?

As you might have gathered from the discussion of informed consent and IRBs above, if research isn’t receiving public funding, there’s a lot more ethical leeway. Many researchers still follow basic ethics, because it’s become an important part of the scientific community after considerable public pressure to reform the way research is conducted.

Private companies like Facebook, however, profit from the data you contribute to them, and they have a vested interest in researching it very heavily on an internal level. When that research favors Facebook — in this case, the “results” suggesting that manipulation of feeds affects user feelings — the company can leverage this to get more money from advertisers. In that setting, ethics is less important than getting results, and corporations love partnering with outside researchers and pushing for publication to legitimize their results.

6. This Study Isn’t Repeatable

One of the core principles of science is that you test, write it up, and test again. While not every study is repeated, because that would be wasteful, studies do build on each other, and researchers want to make sure that they can repeat research if they need to. That’s why researchers provide detailed discussions of their methodology in their work. However, because of the black-box nature of the data handling, the researchers can’t repeat this study. In fact, no one can. That’s bad science.

What this study, and the ensuing public outcry, illustrate is that it’s critical to reform approaches to research involving human subjects in the United States. IRB or third party ethics board approval and informed consent need to apply to all research, not just that funded by the government. Until then…people have no idea which corporations are experimenting on them, when, and how.

Photo credit: Dimitris Kalogeropoylos.


Catrin Schuetz, 3 years ago

Not on Facebook.

Alan Lambert, 3 years ago

I do not have a Facebook account.

Even though about 20 of my friends and family have tried to convince me to sign up, I do not intend to until the absolute last possible moment, because of an issue I had in the early days of the company.

Natasha Salgado (Past Member), 3 years ago

I suggest getting off fb -- you know you can cancel your membership anytime you want.

'Great White' Earth-Being, 3 years ago

The people of the world need to stop just trusting, without critically examining, and thinking that Internet-based organizations are not really bad simply because the organizations are on the Internet!!!!!

I mean what does Facebook have to do, to get people to stop using it?!?!?!?!

June Jarka, 3 years ago

The people conducting this research need to be legally compelled to explain WHY they want certain kinds of data from targeted groups in society and EXACTLY how the results of the research are going to be used.
Also, what are the parameters of their research? What are their goals? What outcome are they being asked to produce? These are the kinds of questions which need to be answered, and aren't, by the people who prepare surveys for corporations. Why is that the case? Why are the corporations keeping masses of data secret? Why not make their raw research results public, i.e., publish them on Facebook or whatever publicly accessible websites they are obtaining the information from? The people giving the information requested have a moral and a legal right to know how the information is being used, or going to be used, for or against them.

Fi T. (Past Member), 3 years ago

Be responsible for what's been written

Shalvah Landy (Past Member), 3 years ago

I don't know if "WhatsApp" is big in the states, here it's nuts. Much worse than facebook.
I looked up the spelling on wikipedia because the way it's pronounced sounds like what's up....

Shalvah Landy (Past Member), 3 years ago

I don't and won't have facebook!
It would also be nice if care2 removed the "Share my comment on Facebook" or at least remove the v like the other 2, it's bothersome remembering to unclick it!

Nikolas K, 3 years ago

George Orwell's predictions are becoming truer each day, and the sheeple continue to graze in the pastures of Facebook, Google, etc., giving them more of themselves instead of looking up to see the tags in their ears.