Twitter says it’s fixed a ‘bug’ that allowed ad campaigns to target users with derogatory terms

In a statement, the company says it will “continue to strongly enforce our policies.”

Twitter CEO Jack Dorsey onstage. | The Verge

Twitter said today it had fixed a “bug” in its platform that could have allowed advertisers to target users with racial epithets and terms like “Nazi.”

The change follows a report by the Daily Beast — which found that potential ad campaigns using those derogatory terms could have reached millions on the site — and a broader controversy this week about inappropriate algorithmic ad targeting on big internet platforms.

“We determined these few campaigns were able to go through because of a bug that we have now fixed,” a spokeswoman said in a statement. “Twitter prohibits and prevents ad campaigns involving offensive or inappropriate content, and we will continue to strongly enforce our policies.”

Earlier this week, Facebook faced its own barrage of criticism after ProPublica discovered the social giant allowed advertisers to target users based on categories like “Ku-Klux-Klan” and “Jew hater.” It has since similarly implemented changes to its targeting platform.

And Google for a time also appeared to allow ad campaigns based on racist or otherwise hate-inspired terms, BuzzFeed found, prompting the search giant to do its own fine-tuning last week.

This has spurred a new round of debate over whether these giant internet companies — which many feel already have too much control — should add more proactive human oversight to their algorithms, especially to filter inappropriate and hateful speech.

On one hand, more human involvement, earlier, could prevent embarrassing discoveries like these, and potentially worse outcomes. On the other, trying to form — and police — a line of what’s appropriate is highly imperfect, and puts even more control in the hands of the platform companies, which tend to prefer to hide behind their algorithms.


This article originally appeared on Recode.net.
