
Here’s how Facebook plans to fix its fake-news problem

It’s a rough plan, but a plan nonetheless.

New Samsung S7 Worldwide Unveiling | David Ramos / Getty

Facebook CEO Mark Zuckerberg doesn’t think fake news influenced last week’s presidential election. But it turns out he does think fake news is a problem on Facebook, and late Friday night he laid out a few details about a number of “projects we already have under way” to stop the spread of fake news on Facebook in the future.

The general takeaway from the lengthy Facebook post is that the social network plans to be more proactive in identifying and removing fake news articles from users’ feeds moving forward. Until now, it has primarily relied on users to report and flag inaccurate stories.

That will still be possible, of course, but Zuckerberg outlined a number of other updates that are apparently in the works. A few of the potential changes:

  • Adding a warning label to stories that users have flagged as inaccurate.
  • Working with more third-party fact-checking organizations.
  • Improving the accuracy of “related articles” that it suggests for users to read.
  • Blocking fake news distributors from paying to promote their content. (Facebook started that process this week.)
  • Building better algorithms to automatically detect fake news. “This means better technical systems to detect what people will flag as false before they do it themselves,” Zuckerberg wrote.

Zuckerberg did not say when these updates will roll out, but he did stress that there won’t be a simple fix. “Some of these ideas will work well, and some will not,” he wrote.

Facebook has been under fire this week after reports suggested that fake news stories may have played a larger role in last week’s election than previously thought. Zuckerberg has repeatedly denied that fake news played a meaningful role in determining the election’s outcome. But his post Friday shows just how big a problem he thinks misinformation really is.

Despite the looming changes, Zuckerberg emphasized that Facebook will have to walk a fine line between policing its feed for fake news and not infringing upon personal opinions and free speech.

“The problems here are complex, both technically and philosophically,” he wrote. “We believe in giving people a voice, which means erring on the side of letting people share what they want whenever possible. We need to be careful not to discourage sharing of opinions or mistakenly restricting accurate content. We do not want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties.”

Facebook has long argued that it is not a media company but a technology platform that simply carries information. In reality, though, Facebook and its algorithms determine which news articles hundreds of millions of people around the world see each day. That brings ethical responsibilities with it.

“The bottom line is: we take misinformation seriously. Our goal is to connect people with the stories they find most meaningful, and we know people want accurate information,” Zuckerberg wrote. “We understand how important the issue is for our community and we are committed to getting this right.”

This article originally appeared on Recode.net.
