
Trump wants social media to detect mass shooters before they commit crimes

What’s more likely is that all sorts of speech — and people — would get swept up in the technology dragnets Trump seems to be proposing.

US President Donald Trump seen through the viewfinder of a television camera.
Trump wants social media companies to work “in partnership” with law enforcement to find mass shooters ahead of time. That’s easier said than done.
SAUL LOEB/AFP/Getty Images
Rani Molla
Rani Molla was a senior correspondent at Vox and has been focusing her reporting on the future of work. She has covered business and technology for more than a decade — often in charts — including at Bloomberg and the Wall Street Journal.

After mass shootings in both Dayton, Ohio, and El Paso, Texas, this weekend, President Donald Trump called on government organizations as well as social media companies to “develop tools that can detect mass shooters before they strike.”

Social media platforms like Facebook, YouTube, and Twitter are already detecting and deleting terrorist content. What’s new is that Trump’s statement specifically called for them to work “in partnership” with the Department of Justice and law enforcement agencies. The president’s comments have prompted questions about how this partnership would work, whether it would be effective, and what impact it could have on Americans’ civil liberties.

It’s also not clear whether social media companies will start trying to identify the warning signs of a potential mass shooter, before anyone makes a direct threat, in order to alert authorities.

Recode contacted the White House to clarify whether Trump’s statement means he’s asking social media companies to proactively report potential domestic terrorists but did not receive a response.

Facebook pointed us to its Community Standards Enforcement Report, which says the company already notifies law enforcement in cases of a “specific, imminent and credible threat to human life.” YouTube and Twitter also pointed us to their community guidelines but didn’t comment on whether they would proactively work with law enforcement to alert them to potential mass shooters.

“I think trying to use automated tools to predict [mass shootings] is a bad idea and wouldn’t work nearly as well as people who think tech is magic would like you to think it would,” Electronic Frontier Foundation Technology Projects Director Jeremy Gillula told Recode. “Tech is not a magic solution to society’s problems. You have to fix society at large.”

Nonetheless, the FBI appears to be developing just such a tool. Earlier this month, the agency put out a request for proposals for a “social media early alerting tool in order to mitigate multifaceted threats.” The proposal explains that the tool would “proactively identify and reactively monitor” social media to “enable the Bureau to detect, disrupt, and investigate an ever growing diverse range of threats to U.S. National interests.”

If such a tool is developed and used as Trump has suggested — to try to predict mass shooters before they act — it’s unlikely that it would work.

As Vox’s Brian Resnick and Javier Zarracina demonstrated, however commonplace mass shootings in the US may seem, mass shooters are still very rare, and even a hypothetical model with 99 percent accuracy would not be enough to effectively pinpoint them in a population of 320 million people.

As Ben Wizner, a lawyer for the ACLU, put it, “The problem with that is we don’t yet have the tech to determine pre-crime, Minority Report notwithstanding. We need to understand that even if all mass shooters have said X, the vast majority of people who have said X don’t become mass shooters.”
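The base-rate problem Resnick, Zarracina, and Wizner are describing can be made concrete with a quick calculation. The sketch below is illustrative only: the number of would-be shooters is an assumption chosen for the example, not a figure from the article, and "99 percent accuracy" is applied to both catching real threats and clearing innocent people.

```python
# Base-rate arithmetic: why even a very accurate detector fails
# when the behavior it looks for is extremely rare.
# All numbers below except the 320 million population are assumptions.

population = 320_000_000   # US population cited in the article
true_shooters = 150        # hypothetical count of actual would-be shooters
accuracy = 0.99            # detector is right 99% of the time

# A 99%-accurate tool still mislabels 1% of innocent people...
false_positives = (population - true_shooters) * (1 - accuracy)
# ...while correctly flagging 99% of the (tiny) group of real threats.
true_positives = true_shooters * accuracy

# Of everyone the tool flags, what fraction is actually a threat?
precision = true_positives / (true_positives + false_positives)

print(f"People wrongly flagged: {false_positives:,.0f}")
print(f"Chance a flagged person is a real threat: {precision:.4%}")
```

Even with these generous assumptions, the tool flags millions of innocent people, and a flagged person is almost certainly not a threat, which is exactly Wizner's point about "the vast majority of people who have said X."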

What’s more likely to happen is that all sorts of speech — and people — would get swept up in the technology dragnets Trump seems to be proposing.

“It is possible that there are certain signals that let you know if an attack is happening,” Heidi Beirich, director of the Intelligence Project at the Southern Poverty Law Center, told Recode. “The question is, Can you look for those things and still guarantee civil and constitutional rights?”

She added, “Obviously the FBI does need to do some sort of scouring of extremist sites, but this is going to have to be very carefully conducted if we want to protect civil rights and civil liberties. When I think of Trump, I don’t think of him as the kind of person who’s going to do that.”

While the government is required to protect free speech — unless it directly incites violence — tech companies are under no such obligation. Social media platforms employ a mix of artificial intelligence and human moderators to identify terrorist content and remove it, with mixed results.

Facebook uses artificial intelligence, machine learning, and computer vision to find and delete terrorist content before it’s reported by users — though after it’s been posted to Facebook. The company also uses technology to identify potentially suicidal people based on what they post on Facebook and how their family and friends react to those posts. Human moderators then review posts flagged by this method and decide whether they should send the poster support options. In serious cases, Facebook says it contacts authorities to do wellness checks.
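The general shape of the pipeline described above — automated scoring first, human review for flagged posts, escalation only in serious cases — can be sketched in a few lines. This is an illustrative toy, not Facebook's actual system; the threshold values and the `risk_score` field (assumed to come from an upstream ML classifier) are invented for the example.

```python
# Illustrative flag-then-review moderation pipeline (not a real system).
# An upstream classifier is assumed to have scored each post; this
# stage only routes posts based on that score.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    risk_score: float  # 0.0-1.0, produced by an assumed ML classifier

FLAG_THRESHOLD = 0.7       # hypothetical: send to a human moderator
ESCALATE_THRESHOLD = 0.95  # hypothetical: treat as a credible, imminent threat

def triage(post: Post) -> str:
    """Route a scored post to the next stage of the pipeline."""
    if post.risk_score >= ESCALATE_THRESHOLD:
        return "escalate"      # e.g., wellness check or authority referral
    if post.risk_score >= FLAG_THRESHOLD:
        return "human_review"  # moderator decides on support options
    return "no_action"
```

The design point such systems rely on is that automation only narrows the queue; the consequential decisions — offering support options, contacting authorities — are left to human reviewers.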

Similarly, YouTube relies on a combination of reported and automated mechanisms to flag terrorist content, which is then reviewed and deleted by humans. The company also takes steps to make sure such content doesn’t spread.

Twitter used to rely on user reports to take down extremist content, but it is increasingly using software to do so. In its latest reporting period, Twitter suspended 166,513 unique accounts for violations related to the promotion of terrorism — more than 90 percent of which were surfaced by “internal, proprietary tools.”

Facebook, YouTube, and Twitter, among other tech companies, founded the Global Internet Forum to Counter Terrorism in 2017, with the mission to “substantially disrupt terrorists’ ability to promote terrorism, disseminate violent extremist propaganda, and exploit or glorify real-world acts of violence using our platforms.”

The path from stopping would-be shooters’ posts to stopping their actions, however, is far from clear.
