
These AI bots created their own language to talk to each other

A next step in the development of artificial intelligence.

It is now table stakes for artificial intelligence algorithms to “learn” about the world around them. The next level: for AI bots to learn how to talk to each other — and develop their own shared language.

New research released last week by OpenAI, the artificial intelligence nonprofit lab founded by Elon Musk and Y Combinator president Sam Altman, details how its researchers are training AI bots to create their own language through trial and error as the bots move around a set environment.

This is different from how artificial intelligence algorithms typically learn — from large sets of data, for example learning to recognize a dog by taking in thousands of pictures of dogs.

The world the researchers created for the AI bots to learn in is a computer simulation of a simple, two-dimensional white square. There, the AIs, which took the shape of green, red and blue circles, were tasked with achieving certain goals, like moving to other colored dots within the white square.

But to get the task done, the AIs were encouraged to communicate in their own language. The terms the bots created were “grounded” — each one corresponded directly to an object in the environment, to another bot, or to an action, like “go to” or “look at.” The language the bots created wasn’t made of words in the way humans think of them, though; rather, the bots generated sets of numbers, which the researchers labeled with English words.
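The idea of labeling numeric symbols after the fact can be illustrated with a minimal sketch (this is not OpenAI’s code; the symbol values and labels are hypothetical):

```python
# A bot's "utterance" is just a sequence of abstract symbols (numbers).
utterance = [3, 1, 7]

# Researchers observe which symbol consistently co-occurs with which
# object or action, and attach human-readable labels. These labels
# are illustrative, not taken from the actual experiments.
labels = {1: "GOTO", 3: "RED-AGENT", 7: "BLUE-LANDMARK"}

# Translating the numeric utterance into the researchers' labels.
translated = [labels.get(symbol, "?") for symbol in utterance]
print(" ".join(translated))  # RED-AGENT GOTO BLUE-LANDMARK
```

The key point is that meaning lives in the consistent pairing between symbol and referent, not in the symbols themselves.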

A demonstration video released with the research shows the bots in action.

The researchers taught the AIs how to communicate using reinforcement learning: Through trial and error, the bots remembered what worked and what didn’t for the next time they were asked to complete a task. Igor Mordatch, one of the authors of the paper, will join the faculty at Carnegie Mellon in September. And Pieter Abbeel, the other author, is a research scientist at OpenAI and a professor at the University of California, Berkeley.

There are already AI assistants that can understand language, like Siri or Alexa, or help with translation, but they mostly work by being fed language data, rather than by learning language through experience.

“We think that if we slowly increase the complexity of their environment, and the range of actions the agents themselves are allowed to take, it’s possible they’ll create an expressive language which contains concepts beyond the basic verbs and nouns that evolved here,” the researchers wrote in a blog post.

Why does this matter?

“Language understanding is super important to make progress on before AI reaches its full potential,” said Miles Brundage, an AI policy fellow at Oxford University, who noted that OpenAI’s work represents a potentially important direction for the field of AI research.

“It’s not clear how good we can get at AI language understanding without grounding words in experience,” Brundage said, “and most work still looks at words in isolation.”


This article originally appeared on Recode.net.
