
Facebook’s reliance on software algorithms keeps getting the company into trouble

The company is allowing advertisers to reach ‘Jew haters.’

Mark Zuckerberg Delivers Keynote Address At Facebook F8 Conference
Justin Sullivan / Getty

Update: Facebook announced late Thursday that it would temporarily stop letting advertisers target users with information like “field of study” or “college” until it figures out how to prevent inappropriate tags from surfacing.

“To help ensure that targeting is not used for discriminatory purposes, we are removing these self-reported targeting fields until we have the right processes in place to help prevent this issue,” the company wrote in a blog post.


One of Facebook’s biggest selling points to advertisers is that you can use the company’s vast data trove to target users based on almost any personal characteristic.

But the wide scope of Facebook’s targeting capabilities was revealed on Thursday when ProPublica discovered that you could target people using anti-Semitic phrases, including “Jew hater” and “How to burn jews.”

Slate did a quick follow-up and found that Facebook also enabled targeting for other hateful groups, like the “Ku-Klux-Klan.”

Here’s how it works: advertisers using Facebook’s automated ad-buying software can target users based on information those users have added to their own profiles. Users can enter whatever they want under categories like field of study, school, job title, or company, and Facebook’s algorithm then surfaces those labels when ad buyers (or journalists) go looking for them.

In this case, users were entering things like “Jew hater” under “field of study,” which meant it showed up in the targeting search results and was an actual option for ad buyers.
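The pipeline described above can be sketched in a few lines. This is a hypothetical simplification, not Facebook’s actual system: profiles carry a free-text field, the entries are aggregated into targeting labels with no moderation step in between, and an ad buyer’s search surfaces whatever users typed.

```python
# Hypothetical sketch of self-reported profile fields becoming ad-targeting
# categories. Not Facebook's actual code; names and thresholds are invented.

from collections import Counter

def build_targeting_index(profiles, min_audience=1):
    """Aggregate free-text 'field of study' entries into targeting labels."""
    counts = Counter(
        p["field_of_study"].strip().lower()
        for p in profiles
        if p.get("field_of_study")
    )
    # Note what's missing: no moderation filter, so any phrase users type
    # becomes a selectable category once enough people enter it.
    return {label: n for label, n in counts.items() if n >= min_audience}

def search_targeting(index, query):
    """What an ad buyer's category search would return for a query string."""
    q = query.lower()
    return sorted(label for label in index if q in label)

profiles = [
    {"field_of_study": "Physics"},
    {"field_of_study": "physics"},
    {"field_of_study": "Jew hater"},  # the kind of write-in ProPublica found
]
index = build_targeting_index(profiles)
print(search_targeting(index, "jew"))
```

The point of the sketch is the gap, not the code: nothing between the user’s write-in and the buyer’s search screens the label, which is the hole Facebook says its new guardrails are meant to close.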

Facebook issued a statement saying that it would remove the inappropriate categories, adding that the company “[has] more work to do” in preventing this kind of targeting on the site. [You can read the full statement below.]

But the issue was yet another example of what can happen when the algorithms that drive Facebook’s business, and determine what you do and don’t see in News Feed, aren’t properly managed.

It’s been a bad year for Facebook algorithms, starting with the realization this spring that the company’s News Feed algorithm was abused to help spread misinformation during last year’s U.S. presidential election.

More recently, Facebook admitted that “inauthentic accounts” from Russia bought $100,000 worth of political advertising during the same U.S. election. The accounts were able to make the purchases because algorithms, and not humans, were approving and facilitating the transactions. (It’s still unknown if there are more, similar ads that are unaccounted for.)

Facebook may even have to testify before Congress because it unknowingly sold those ads.

This is all to say that Facebook’s algorithms are routinely abused, which keeps getting the company into trouble and risks costing Facebook the trust of its users and advertisers altogether.

As Recode’s co-founder Walt Mossberg tweeted on Thursday: “This is what happens when you turn your duty to uphold your own ‘standards’ over to crude algorithms. The standards obviously mean nothing.”

Facebook has not yet provided a solution to this problem. The company believes that these “write-in” categories can be important for advertisers trying to find niche audiences, assuming they aren’t abused or used to promote racism or sexism. But the internet has a habit of abusing things, and Facebook clearly hasn’t figured out a way to prevent it.

Here’s the company’s full statement, attributable to Rob Leathern, a product management director for Facebook:

“We don’t allow hate speech on Facebook. Our community standards strictly prohibit attacking people based on their protected characteristics, including religion, and we prohibit advertisers from discriminating against people based on religion and other attributes. However, there are times where content is surfaced on our platform that violates our standards. In this case, we’ve removed the associated targeting fields in question. We know we have more work to do, so we’re also building new guardrails in our product and review processes to prevent other issues like this from happening in the future.”


This article originally appeared on Recode.net.
