
YouTube says computers helped it pull down millions of objectionable videos in three months

Software helped YouTube flag 4.5 million videos before anyone ever saw them. But another 1.5 million got through, at least briefly.

YouTube CEO Susan Wojcicki
Asa Mathat
Peter Kafka
Peter Kafka covered media and technology, and their intersection, at Vox. Many of his stories can be found in his Kafka on Media newsletter, and he also hosts the Recode Media podcast.

Bad news for YouTube: Last quarter, users uploaded millions of objectionable videos to the world’s largest video site.

Good news for YouTube: Last quarter, the site pulled down millions of objectionable videos before any users saw them.

The news YouTube wants you (and investors and advertisers) to focus on: It is successfully training computers to flag objectionable videos, because that’s the only way it will be able to sort out the bad stuff from the site’s enormous crush of clips.

All of that comes from a brief aside in Google CEO Sundar Pichai’s scripted remarks during parent company Alphabet’s earnings call today. He said YouTube had pulled down more than six million videos in the last quarter of 2017, after first being flagged by its “machine systems,” and that 75 percent of those videos “were removed before receiving a single view.”

(Update: YouTube has published a blog post with more complete numbers. It says it pulled down more than eight million videos in the quarter — “mostly spam or people attempting to upload adult content” — and that 6.7 million were flagged by computers first. It also says it is getting better at finding videos with “violent extremism”: At the beginning of 2017, 8 percent of the videos with that content were taken down before they got to 10 views. “Now more than half of the videos we remove for violent extremism have fewer than 10 views.”)

Context: YouTube has spent the past year responding to complaints about problematic (or worse) videos on the site. Sometimes it has argued that offensive videos — or, at least, offensive videos running alongside advertising — are a “tiny, tiny” problem; at other times, it has said it takes the problem so seriously that it will have more than 10,000 people working on it this year.

But, like Facebook’s leadership, Google and YouTube execs believe that the company’s scale means it will ultimately need to rely on computers and artificial intelligence to solve its content problem. Hence Pichai’s earnings call commentary, which is (obviously) meant to be encouraging.

The glass-half-empty argument would go something like this: YouTube’s massive scale, combined with its platform philosophy — let users upload whatever they want, then review it when necessary — means that it will never be able to get a complete handle on problematic clips.

Another way of putting it: YouTube says it intercepted some 4.5 million bad videos before anyone saw them. But some of its billion-plus users saw another 1.5 million clips — in just three months — that ultimately needed to be pulled off the site. And even if most of those clips only generated a few views, any one of them has the potential to upset a user or an advertiser — or a government official who thinks the site needs more oversight.

This article originally appeared on Recode.net.
