
We Can All Learn Something From How Quickly Microsoft’s Chatbot Turned Into a Racist

Racism, sexism and xenophobia are all too easily learned.


There’s an important lesson for us humans in just how quickly Microsoft’s chatbot learned how to spew out racist, sexist and other hateful messages.

For those who missed it, yesterday Microsoft turned on Tay, a chatbot designed to converse with, and mimic the speech patterns of, millennials. But, in less than a day, Microsoft was forced to take Tay offline as the bot started sending offensive messages.

Though Tay was apparently fed intentional hate speech, the same is true of humans, starting from an early age. Racism, sexism and xenophobia are all learned behaviors, and they are difficult to unlearn.

It is hard to blame Tay for quickly picking up on hate when we live in a world where Donald Trump can spew anti-Muslim rhetoric and still be a major party’s front-runner. Meanwhile, on Tuesday, North Carolina managed to propose, pass and enact legislation stripping civil rights from an entire group of people.

Microsoft isn’t the first to struggle in this area. IBM taught Watson the entire Urban Dictionary but quickly decided its computer would be better off not knowing everything.

So, yes, Microsoft was right to take Tay offline.

“Phew. Busy day. Going offline for a while to absorb it all. Chat soon,” Tay says in a message on its website.

Instead of teaching Tay to mimic humanity, Microsoft is going to have to teach it to be better than humanity, to filter out our worst inclinations and focus on our better selves. Luckily for Microsoft, it is likely a matter of tweaking a few algorithms and adding more filters.
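To make the idea of "adding more filters" concrete, here is a minimal sketch of one naive approach: screening a bot's reply against a blocklist before sending it. The `BLOCKLIST` terms and the `moderate_reply` function are purely illustrative assumptions, not Microsoft's actual system, and real moderation pipelines are far more sophisticated than keyword matching.

```python
# A minimal, hypothetical sketch of output filtering for a chatbot.
# The blocklist terms below are placeholders; a real deployment would
# use a curated list plus statistical or model-based classifiers.

BLOCKLIST = {"slur1", "slur2"}  # placeholder terms, not a real list

def moderate_reply(reply: str) -> str:
    """Return the reply unchanged, or a safe fallback if it contains blocked terms."""
    words = {w.strip(".,!?").lower() for w in reply.split()}
    if words & BLOCKLIST:
        return "Let's talk about something else."
    return reply
```

Even this toy version shows why filtering alone is fragile: it only catches exact words, while misspellings, coded language and context-dependent abuse slip through, which is why "tweaking a few algorithms" understates the challenge.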

If only changing humans were that easy.

This article originally appeared on Recode.net.
