
Facebook exec: “We resisted having standards” on fake news. “That was wrong.”

Yet the company still thinks it would be “dangerous” for it to determine what is newsworthy.

Andrew Prokop

CAMBRIDGE, Massachusetts — Four days after an election in which Facebook was increasingly criticized for helping “fake news” proliferate, company founder Mark Zuckerberg put up a defensive post in which he proclaimed, without citing a source, that “99% of what people see” on Facebook “is authentic.”

But when faced with a room full of journalists and political professionals at Harvard’s Campaign Managers Conference Wednesday night, Facebook executive Elliot Schrage had a very different message: Fake news is a problem, and we know we have to do something about it — though we aren’t yet sure what.

“For so long, we had resisted having standards about whether something’s newsworthy because we did not consider ourselves a service that was predominantly for the distribution of news. And that was wrong!” Schrage said during a panel on the media’s role in the election.

He added: “We have a responsibility here. I think we recognize that. This has been a learning for us.”

This would be a major shift for Facebook, which has insisted that it is a “technology company” and not a “media company.”

But so far, it’s unclear whether this shift will be more than a rhetorical one. Because Schrage — who’s Facebook’s vice president of global communications, marketing, and public policy — also signaled that the company still had very serious misgivings about what it can do.

“Until this election, our focus was on helping people share,” Schrage said. “This election forced us to question whether we have a role in assessing the validity of content people share. And I have to tell you all, and one of the reasons I came here — that’s a pretty damn scary role to play.”

“I think we need a ‘think before you share’ program”

When it came to specifics, though, Schrage expressed deep skepticism about two potential paths for the company.

First, he said Facebook was uninterested in hiring editors who would choose certain types of content to elevate in the newsfeed. “It is not clear to me that with 1.8 billion people around the world, lots of different users and lots of different languages, the smart strategy is to start hiring editors,” he said. “That’s just not what we do.”

Second, he said that a company taking it upon itself to determine what’s “newsworthy” is going down “a very dangerous road.” And that’s fair enough — a great deal of content shared on Facebook, of course, isn’t intended to be “newsworthy” at all, and any suppression of certain topics would bear a disturbing resemblance to censorship.

Schrage did say that Facebook already has tools with which its users can report fake news, but he acknowledged that they were “not well-done” and had to be improved.

Furthermore, he argued that even if Facebook did incorporate “signals” that certain brands have “higher quality” or might have more reliable factual information, it wouldn’t “solve the problem.”

Instead, Schrage seemed to prefer potential solutions that would nudge users to act differently without playing favorites among different sites or blacklisting them. “We’re in the business of giving users the power to share. Part of that is helping them share thoughtfully and responsibly, and consume thoughtfully and responsibly.”

“I think we need a ‘think before you share’ program so that people don’t share stuff that’s stupid,” he added. “On the left or on the right.”

It’s not really clear what this would entail — or whether it would work

Yet there are downsides to any user-centric approach. For instance, any tool that improves users’ ability to flag certain articles as “fake” could (and almost certainly would) be weaponized by committed users who merely dislike a certain article or media outlet.

Can Facebook encourage users to, for instance, click through before sharing an inflammatory headline on a story they haven’t read? Perhaps. But it’s unclear whether that would change much. As Schrage suggested, fake news spreads on Facebook because users enjoy it, and Facebook’s newsfeed algorithm is designed to show users what they enjoy.

“Facebook’s algorithm prioritizes ‘engagement’ — and a reliable way to get readers to engage is by making up outrageous nonsense about politicians they don’t like,” Vox’s Tim Lee has written.

So when Schrage says he thinks the problem lies primarily with user behavior, he’s reiterating that Facebook does not want to be in the business of deeming particular sites or outlets “fake” and punishing them somehow.

Still, the reality is that many websites have been created for the sole purpose of spreading entirely made-up news on Facebook, and they have benefited greatly from Facebook’s algorithm.

If Facebook’s response to this is merely passing the ball to users, then, it risks becoming something akin to Twitter’s response to its harassment problem — a constant chorus of “we hear you” that never results in much substantive change.
