
Facebook has disabled almost 1.3 billion fake accounts over the past six months

Facebook will begin publishing more data about how many posts it takes down.

Facebook CEO Mark Zuckerberg, left, jokes with comedian Andy Samberg, who is dressed as and imitating Zuckerberg.
Justin Sullivan / Getty

Facebook disabled nearly 1.3 billion “fake” accounts over the past two quarters, many of them bots “with the intent of spreading spam or conducting illicit activities such as scams,” the company said on Tuesday.

Facebook disabled 583 million accounts in Q1 2018, down from 694 million accounts in Q4 of last year, a decrease the company attributes to the “variability of our detection technology’s ability to find and flag them.”

Most of the accounts “were disabled within minutes of registration,” Facebook said in a blog post, but the company doesn’t catch all fake accounts. It estimates that 3 percent to 4 percent of its monthly active users are “fake,” up from 2 percent to 3 percent in Q3 of 2017, according to its company filings.

Those are big numbers, and a reminder of what Facebook is up against just 18 months after revelations that a Russian troll farm used Facebook to try to influence the 2016 U.S. presidential election.

Facebook says it finds most of the accounts on its own using software algorithms, but a small percentage — about 1.5 percent of the disabled accounts — were discovered after they were flagged by Facebook users.

Facebook published the numbers for the first time on Tuesday, along with another set of numbers outlining the other kinds of content the company takes down on a regular basis.

Publishing the data is a way for Facebook to hold itself accountable, but it’s also a chance for Facebook to show users that it’s actually working on these problems in the background, something that’s not always obvious to the average user scrolling through her News Feed.

“This is the start,” said Guy Rosen, a Facebook product VP working on safety and security. “People can report a lot more types of bad things [than we are updating here.] So we want to have more numbers to share [next time].”

The numbers Facebook is sharing this time focus on major content categories. The company removed 21 million “pieces of adult nudity or porn,” for example, the vast majority of which was discovered using software programs. It also removed 2.5 million pieces of “hate speech,” 56 percent more content than the 1.6 million pieces it removed in Q4.

Unlike nudity or terrorism-related content, though, hate speech is still primarily discovered by humans, not software programs. Only 38 percent of the hate speech Facebook removed in Q1 was first identified by algorithms. That’s an improvement over 23.6 percent in Q4, but still far lower than for other content categories Facebook looks for.

That makes sense, as “hate speech” is much more subjective than nudity. What one person might describe as hate speech, another might describe as free speech. The fact that Facebook still has trouble detecting it without human help shows that the problem won’t go away anytime soon.

“Hate speech is really hard,” said Alex Schultz, Facebook’s VP of analytics, in a briefing with reporters. “There’s nuance, there’s context. The technology just isn’t there to really understand all of that, let alone in a long long list of languages.”

Facebook has been working to win back the trust of its users ever since the 2016 election — and the more recent Cambridge Analytica privacy scandal in which user data was collected by an outside research firm without users’ consent.

Over the past few months, Facebook has rewritten its data policies and published the rulebook it uses to make content policy decisions. Going forward, it plans to publish data about the types of posts it removes every six months or so.

“We hope we get better, but there is the interesting balance around what happens in the real world versus what happens on our site,” Schultz said. “It would be good for the world if wars ended, and I’m sure that would be good for the graphic violence number on Facebook. Also there could be another war breakout, and that would be terrible, and that would be bad for those numbers.”

“I think we should measure them well, and we should be good at explaining to you why they have moved,” he added.

This article originally appeared on Recode.net.
