
Facebook is trying to explain how it defines nudity, violence and hate speech

Facebook updated its content policy so people can see how it decides what to take down or leave up.

The thumbs-up Facebook icon on the sign outside its headquarters on Willow Road
Stephen Lam / Getty

Facebook is responsible for writing and enforcing content rules that all users have to adhere to — basically a code of conduct for what is and isn’t allowed on the service.

Writing these rules can be tricky. Some stuff is obviously inappropriate, like terrorist content or child pornography. But other stuff is tougher to categorize and enforce across a global user base. What one group of people considers hate speech, another considers free speech.

That’s why deciding what’s allowed and what isn’t makes Facebook CEO Mark Zuckerberg “fundamentally uncomfortable.”

Uncomfortable or not, it’s Facebook’s job. And on Tuesday, the company made an interesting move: It published the exact set of rules that Facebook employees and contractors use to decide what is allowed and what isn’t.

The idea, according to Facebook executives, is to give people a better understanding of why stuff is taken down so that there is less confusion (or anger or frustration) when some people disagree with whatever decision the company makes.

“This document mirrors the guidelines that are given to reviewers internally,” said Mary DeBree, Facebook’s head of content policy. “It is as much as possible that we can put out externally.”

Facebook is also rolling out a new appeals process so that anyone can appeal the removal of their post or photo.

Facebook’s global head of policy Monika Bickert explained that these new guidelines may look different — they are way longer and more detailed, for example — but that Facebook isn’t enforcing anything differently than it has in the past.

Here’s one example of how the new policy will look different for users. Facebook’s old Community Standards described a “direct threat” in four sentences:

We carefully review reports of threatening language to identify serious threats of harm to public and personal safety. We remove credible threats of physical harm to individuals. We also remove specific threats of theft, vandalism, or other financial harm.

We may consider things like a person’s public visibility or the likelihood of real world violence in determining whether a threat is credible.

The new set of standards outlining what Facebook considers a “threat” runs an entire page and a half.

Facebook’s role in policing content has become a big story over the past 18 months. After the 2016 U.S. presidential election, in which Russian trolls used the service to spread so-called fake news and divisive content, it became clear that Facebook’s policies and moderation were letting too many things slip through the cracks.

So Facebook decided to beef up its content-review operation, pledging to have 20,000 employees working on safety- and security-related projects by the end of 2018. Bickert says Facebook already has 7,500 content reviewers worldwide, a mix of full-time employees and contractors.

The company is also making plans to fight this stuff with technology. When asked about these topics earlier this month during his congressional testimony, Zuckerberg routinely pointed to the company’s use of artificial intelligence as a way it hopes to better police user content.

That may be the case down the line, but Facebook still uses human moderators for the vast majority of its content decisions. AI works for removing known child pornography or terrorist beheading videos, for example, but isn’t used to determine what might be considered hate speech.

“There are some limited cases where the technology itself can remove the content without a person looking at it,” Bickert said. “By and large, most types of content policy violations — hate speech, bullying, harassment, threats of harm — most of that has to be reviewed by people at this point because it is just so contextual.”

The new standards will roll out Tuesday to all Facebook users.

This article originally appeared on Recode.net.
