
Facebook’s chief security officer let loose at critics on Twitter over the company’s algorithms

Stamos is a key player in Facebook’s effort to understand Russian election meddling.

Facebook executives don’t usually say much publicly, and when they do, it’s usually measured and approved by the company’s public relations team.

Today was a little different. Facebook’s chief security officer, Alex Stamos, took to Twitter to deliver an unusually raw tweetstorm defending the company’s software algorithms against critics who believe Facebook needs more oversight.

Facebook uses algorithms for everything from determining what you see and don't see in News Feed to finding and removing content like hate speech and violent threats. The company has been criticized in the past for using these algorithms — and not humans — to monitor its service for abuse, violent threats, and misinformation.

The algorithms can be fooled or gamed, and part of the criticism is that Facebook and other tech companies don’t always seem to appreciate that algorithms have biases, too.

Stamos says it’s hard to understand from the outside.

“Nobody of substance at the big companies thinks of algorithms as neutral. Nobody is not aware of the risks,” Stamos tweeted. “My suggestion for journalists is to try to talk to people who have actually had to solve these problems and live with the consequences.”

Stamos’s thread is all the more interesting given his current role inside the company. As chief security officer, he’s spearheading the company’s investigation into how Kremlin-tied Facebook accounts may have used the service to spread misinformation during last year’s U.S. presidential campaign.

The irony in Stamos’s suggestion, of course, is that most Silicon Valley tech companies are notorious for controlling their own message. This means individual employees rarely speak to the press, and when they do, it’s usually to deliver a bunch of prepared statements. Companies sometimes fire employees who speak to journalists without permission, and Facebook executives are particularly tight-lipped.

This makes Stamos’s thread, and his candor, very intriguing. Here it is in its entirety.

  1. I appreciate Quinta’s work (especially on Rational Security) but this thread demonstrates a real gap between academics/journalists and SV.
  2. I am seeing a ton of coverage of our recent issues driven by stereotypes of our employees and attacks against fantasy, strawman tech cos.
  3. Nobody of substance at the big companies thinks of algorithms as neutral. Nobody is not aware of the risks.
  4. In fact, an understanding of the risks of machine learning (ML) drives small-c conservatism in solving some issues.
  5. For example, lots of journalists have celebrated academics who have made wild claims of how easy it is to spot fake news and propaganda.
  6. Without considering the downside of training ML systems to classify something as fake based upon ideologically biased training data.
  7. A bunch of the public research really comes down to the feedback loop of “we believe this viewpoint is being pushed by bots” -> ML
  8. So if you don’t worry about becoming the Ministry of Truth with ML systems trained on your personal biases, then it’s easy!
  9. Likewise all the stories about “The Algorithm”. In any situation where millions/billions/tens of Bs of items need to be sorted, need algos
  10. My suggestion for journalists is to try to talk to people who have actually had to solve these problems and live with the consequences.
  11. And to be careful of their own biases when making leaps of judgment between facts.
  12. If your piece ties together bad guys abusing platforms, algorithms and the Manifestbro into one grand theory of SV, then you might be biased
  13. If your piece assumes that a problem hasn’t been addressed because everybody at these companies is a nerd, you are incorrect.
  14. If you call for less speech by the people you dislike but also complain when the people you like are censored, be careful. Really common.
  15. If you call for some type of speech to be controlled, then think long and hard of how those rules/systems can be abused both here and abroad
  16. Likewise if your call for data to be protected from governments is based upon who the person being protected is.
  17. A lot of people aren’t thinking hard about the world they are asking SV to build. When the gods wish to punish us they answer our prayers.
  18. Anyway, just a Saturday morning thought on how we can better discuss this. Off to Home Depot. FIN

This article originally appeared on Recode.net.
