
Facebook is banning deepfake videos

Facebook’s new rules will still allow controversial fake videos like the one of Nancy Pelosi that made her appear to be drunk.

A comparison between an original and deepfake video of Facebook CEO Mark Zuckerberg was uploaded to the social media platform last year.
Elyse Samuels/The Washington Post via Getty Images
Shirin Ghaffary

Facebook announced late Monday that it would ban “deepfakes,” which are AI-manipulated videos that distort reality, often simulating real people in fake situations.

The social media giant announced the changes in a company executive blog post, saying it will remove deepfakes and other types of heavily manipulated media from its platform.

Specifically, the company laid out two main criteria for removing content under the new rules. The first is that the company will remove content posted on Facebook if it has been edited in ways that would “likely mislead someone into thinking a subject of the video said words that they did not actually say,” according to the post written by Monika Bickert, Facebook’s vice president of global policy management. Second, the platform will ban media if it’s the product of AI or machine learning that “merges, replaces, or superimposes content onto a video, making it appear to be authentic.”

Facebook came under fire last year for allowing a manipulated video of Speaker Nancy Pelosi, which altered her speech to slur her words and made her appear drunk. At the time, Facebook said the video went through its fact-checking process, which does not require content to be true to be allowed on the platform. The company said it displayed a note with additional context about the video, telling users that it was false.

Under its new rules, Facebook told Recode it still would not take down the Pelosi video, saying that it does not meet the standards of the new policy. “Only videos generated by artificial intelligence to depict people saying fictional things will be taken down. Edited or clipped videos will continue to be subject to our fact-checking program. In the case of the Pelosi video, once it was rated false, we reduced its distribution,” the spokesperson told Recode.

Whether videos are deepfakes or not, they’re all subject to Facebook’s fact-checking system. If content is proven to be false, it can be flagged with a note labeling the content as such, and Facebook will deprioritize it in its News Feed.

In an email, Omer Ben-Ami, the co-founder of Canny AI (the Israeli advertising startup that last year helped artists produce a viral deepfake of Zuckerberg on Instagram, which Facebook opted to keep up), said Facebook’s new policy seemed “reasonable.” However, he cautioned that his company and others “use this technology for legitimate reasons, mainly for personalization and localization of content.”

He said it was unclear why the policy only applies to content manipulated by artificial intelligence.

Overall, there are some exceptions to Facebook’s new rules: They don’t apply to videos that are parody or satire, nor do they ban videos edited “solely to omit or change the order of words” someone is saying.

The change builds on previous efforts Facebook has made in combating deepfakes. Last fall, the company helped launch a “Deepfake Detection Challenge” that’s meant to accelerate global research into technology that can identify misleading AI-manipulated videos. The company also began an initiative with Reuters that’s meant to train journalists to better spot manipulated media, including deepfakes.

“As the tech develops, so do policies, and we hope Facebook’s policymakers will make sure to make the distinction between content that was legitimately manipulated and malicious content,” added Ben-Ami.

Additional reporting by Rebecca Heilweil.


Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.
