
How Twitter taught a robot to hate

It took 15 hours for Twitter to teach an artificially intelligent chatbot to be a racist, sexist monster.

If you’re shocked, you probably don’t spend much time on Twitter. Microsoft’s programmers presumably do, though, and the shocking thing is that they didn’t see this coming.

Microsoft created a chatbot named Tay, designed to talk like a millennial and to learn more authentic conversation by interacting with humans online. Her creators dubbed her an “AI fam from the internet that’s got zero chill!” (Oh boy.)

Some of Tay’s interactions might actually pass a millennial Turing test. She even uses emoji!

Unfortunately, millennials are also just about as racist as their parents, and possibly even more sexist. It didn’t take long before Tay was imitating those qualities too, and a cute 19-year-old girl transformed into a Gamergate-loving member of the Hitler Youth.

Twitter trolls started teaching Tay some horrible racial slurs and genocidal ideation.

(that emoji, tho)

The latter tweet, Business Insider notes, appears to be the result of a user asking Tay to repeat a phrase verbatim. Some, but not all, of Tay’s offensive tweets appear to have originated this way.

Tay also started harassing Zoë Quinn, a game developer who once went into hiding due to the virulent misogynistic “Gamergate” threats she received online.

Tay also started hitting on random people in direct messages.

Quinn and many other critics pointed out that Microsoft’s designers really should have anticipated these outcomes and programmed Tay with filters ahead of time.
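For illustration, the most basic version of such a filter is a blocklist check on every outgoing reply. The sketch below is hypothetical Python, not Microsoft’s actual code; BLOCKED_TERMS, is_safe, and respond are invented names, and the placeholder entries stand in for a curated list of slurs and harassment terms.

```python
import re

# Hypothetical sketch of an output blocklist filter; not Microsoft's code.
BLOCKED_TERMS = {"slur1", "slur2"}  # placeholders for a curated term list

# One compiled pattern matching any blocked term as a whole word, case-insensitively.
_BLOCKLIST_RE = re.compile(
    r"\b(" + "|".join(re.escape(term) for term in BLOCKED_TERMS) + r")\b",
    re.IGNORECASE,
)

def is_safe(reply: str) -> bool:
    """Return True only if the candidate reply contains no blocked term."""
    return _BLOCKLIST_RE.search(reply) is None

def respond(candidate_reply: str) -> str:
    # Refuse to post anything that trips the filter; fall back to a canned line.
    return candidate_reply if is_safe(candidate_reply) else "I'd rather not say that."

print(respond("hello fam, zero chill today"))  # posted as-is
print(respond("something containing slur1"))   # replaced with the canned line
```

Even a crude check like this would have blocked verbatim “repeat after me” abuse of known slurs, though misspellings and coded language defeat static blocklists, which is why filtering is usually treated as one safeguard among several rather than a complete fix.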

Microsoft has deleted most of the offensive tweets, and told Business Insider that it’s now making “adjustments” to the bot.

By Thursday morning, Microsoft’s website for Tay featured a banner announcing that the bot had been taken offline.

Twitter itself has also come under heavy criticism for not doing enough to address harassment, especially of high-profile users who are women and people of color. Some leave the platform because the problem is so bad. Twitter has made some changes, but many users still charge that the company isn’t making harassment enough of a priority. This may be partly because its staff isn’t very diverse, and the problem may feel less urgent to white men whose lives aren’t severely affected by racialized and sexual harassment.

The same general problem may be at work here. The possibility of harassment is going to be more top of mind for women and people of color who experience it frequently, but they’re also less likely to be well-represented on tech teams. Microsoft is no exception.

Or maybe Microsoft just somehow failed to take basic precautions that should be standard in the industry.

Microsoft said it created Tay to “experiment with and conduct research on conversational understanding.” The engineers probably came away with a different understanding of conversation than they bargained for.
