
Facebook is hiring 3,000 people to stop users from broadcasting murder and rape

Mark Zuckerberg Delivers Keynote Address At Facebook F8 Conference
Photo by Justin Sullivan/Getty Images

In recent weeks, Facebook has faced a string of incidents where users have filmed shocking events — like rape and murder — and uploaded them to the site. Critics argued the company wasn’t doing enough to address the problem.

Today, Facebook CEO Mark Zuckerberg took action to address those complaints, announcing that the company would hire 3,000 people — on top of the 4,500 staff it already had — to help it respond more quickly to reports of abusive behavior on the platform.

It’s a laudable move. Facebook is betting that having thousands of additional bodies policing its platform will allow it to more quickly and effectively remove offensive content. If it works, it will illustrate something important about how big internet companies can deal with problems on their platforms.

For years, Twitter has faced criticism for the rampant abuse some of its users inflict on others. More recently, Facebook faced criticism for promoting fake news stories on its platform.

A common response has been that it’s too difficult to control what gets posted to a vast electronic platform. And that’s true if a company insists on taking an automated approach. But when a company really cares about addressing a problem like this, executives don’t restrict themselves to writing algorithms. If necessary, they hire thousands of human beings to apply human judgment to the problem.

Google reportedly has “a 10,000-strong army of independent contractors to flag offensive or upsetting content” in search results. One expert estimated that across the global internet, “well over 100,000” people, many of them low-paid workers in countries like the Philippines, are paid to police online content.

To be fair, Twitter said in 2015 that it was tripling the size of its staff handling abuse complaints. But Twitter didn’t say how many people that was. And judging from continued complaints about online abuse over the past 18 months, it evidently wasn’t enough.

As for Facebook’s fake news problem, it’s true that the situations aren’t strictly comparable. Determining whether a video contains graphic nudity or violence is easier than determining whether a news story is accurate. If Facebook wanted to improve the quality of news in the News Feed, it would probably have to hire more experienced and educated staffers — perhaps professional journalists — and think carefully about how to do it.

But the point here is that Facebook could be devoting vastly more resources to the problem if it cared about solving it. Until last year, for example, Facebook had a “trending news” section on the site that was edited by a team of 15 to 18 moderators — moderators Facebook laid off in the face of controversy about alleged left-wing bias.

One possible lesson from the incident could be that it just isn’t possible for human moderators to curate news stories on Facebook without sparking controversy from one or both ends of the political spectrum.

But another interpretation is that Facebook has drastically underinvested in the quality of the news articles promoted on the platform. An overworked and underpaid team was inevitably going to make mistakes that came back to haunt the company. If Facebook were as concerned about shoddy journalism as it is about offensive images, it would be devoting a lot more human resources to the problem.
