
Facebook is temporarily removing the ad targeting option that let marketers reach ‘Jew haters’

Facebook plans to fix the feature, but it’ll shut it down until it figures out how.

Facebook CEO Mark Zuckerberg
David Ramos / Getty

Facebook is temporarily removing some of its ad targeting options in the wake of reports that the site was being used to target people based on categories like “Ku-Klux-Klan” and “Jew hater.”

Reports surfaced Thursday that Facebook’s ad targeting algorithm was surfacing these inappropriate categories to advertisers. The problem is that Facebook allows advertisers to target people based on self-reported categories like field of study, school, job title, or company. When users put “Jew hater” as their field of study, for example, that label then appeared to advertisers as a legitimate way to target people with ads.

Facebook says it wants to fix that targeting process, but in the meantime, it will eliminate the ability to target users based on these categories while it figures out what to do next.

“To help ensure that targeting is not used for discriminatory purposes, we are removing these self-reported targeting fields until we have the right processes in place to help prevent this issue,” Facebook wrote in a blog post late Thursday night. “We want Facebook to be a safe place for people and businesses, and we’ll continue to do everything we can to keep hate off Facebook.”

This is not the first time in the past year that one of Facebook’s software algorithms has been abused. Its News Feed algorithm, which determines what people see and don’t see in their feeds, was gamed to help spread misinformation ahead of last fall’s U.S. election. More recently, it was discovered that “inauthentic accounts” from Russia bought $100,000 worth of political ads during the same election cycle.


This article originally appeared on Recode.net.
