
As big tech struggles to curb hate speech, civil rights groups have some recommendations

A new report from a coalition of six groups calls on tech to put more effort into coming down on hate activity online.

Vigil Held In Boston Ahead Of Large Rally Against Hate On Saturday
A group of protesters marching against hate groups in Boston
Photo by Spencer Platt/Getty Images
Shirin Ghaffary
Shirin Ghaffary was a senior Vox correspondent covering the social media industry. Previously, Ghaffary worked at BuzzFeed News, the San Francisco Chronicle, and TechCrunch.

Twitter, Facebook and YouTube have struggled for years to police hate speech on their networks. Now a coalition of civil rights and advocacy groups is weighing in with advice on how big tech platforms can better identify and remove hateful activity.

In a new report, six organizations — including the Center for American Progress, Color of Change and the Southern Poverty Law Center — have drafted a series of policy recommendations, such as:

  • Make it clear to users that engaging in hateful activities is grounds for termination.
  • Use both technology and human employees to help remove hateful activities.
  • Routinely test any technology that’s used to screen content for bias.
  • Allow users — but not government — a way to flag hateful content, and create a trusted flagger program for vetted human-rights organizations.
  • Provide the public with a regularly updated report summarizing hateful actions taken on its platforms and the company’s response.
  • Assign a board committee and a member of the executive team to oversee efforts to stop hateful activities.

The goal, of course, is to reduce the volume of hateful content, but also to make sure companies are more transparent about their efforts and their results. “This is the first time we’ve formally put pen to paper, saying, ‘This is the kind of world we’d like to see,’” says Heidi Beirich, a spokesperson for the Southern Poverty Law Center, which contributed to the report.

These groups have no jurisdiction over the companies, but they are respected organizations, and their recommendations seem mostly solid. It’s worth noting that Google, Twitter and Facebook already do some of these things or include them in their terms of service. It’s also worth noting that they remain far from maintaining hate-free communities.

The report’s authors met with representatives from those companies two weeks ago to share an early copy of their recommendations and solicit feedback. Google declined to comment on the report, and the other two companies did not immediately respond to a request for comment.

The coalition’s plan is to create a follow-up report next year that will assess tech companies’ performance against these recommendations.

This article originally appeared on Recode.net.
