

Elon Musk’s nonprofit can help AI systems get smarter — even if their developers have bad intentions

Universe is a new AI training center that is supposed to teach computers to think more like humans.

Obama Outlines Policy For Open And Free Internet
Michael Bocchieri / Getty

OpenAI, the nonprofit backed by Elon Musk and Peter Thiel to promote artificial intelligence that helps rather than harms humanity, opened a new virtual training center on Monday. It’s called Universe, and anyone building artificial intelligence programs can use it.

With Universe, developers can train artificial intelligence applications with games, websites, web browsers and other apps. The idea here is that the more an AI system practices using interfaces designed for human users, the more human-like AI can become.

But because Universe is open for anyone to use, it leaves the door open to developers who could train AI systems that cause harm, precisely the outcome Musk's nonprofit aims to prevent. Releasing open developer tools is standard practice in artificial intelligence research, however, so the initiative itself is not unusual.

Musk’s nonprofit is committed to open sourcing its tools and research as a way of hedging against the possibility of centralized, monopoly power over how artificial intelligence advances. Google opened its DeepMind artificial intelligence training tools on Monday, too. And earlier this year, OpenAI released another open platform for training algorithms in complex environments called Gym.

This openness, however, potentially runs counter to OpenAI's purpose: to prevent Skynet from becoming self-aware.

“As our algorithms grow more sophisticated and our environments grow, we will be carefully thinking how to ensure people train AIs to ensure they have a good understanding of ethics, responsibility and culpability,” a spokesperson from OpenAI told Recode. The team explored some of these issues in a recent paper and says they’re already working to build their own safe and secure systems.

Still, that doesn’t answer the question of how they plan to prevent developers from using OpenAI’s free tools to build potentially unsafe and unethical artificial intelligence programs.

Universe aims to push AI toward "general intelligence," a concept within the AI community describing a system that learns a broad array of tasks rather than being designed for one specific purpose.

Take Google’s AlphaGo, for example, the deep learning program that taught itself the ancient strategy game Go and defeated the best human player in the world earlier this year. AlphaGo’s win was considered a huge milestone in the development of artificial intelligence, but it wasn’t a demonstration of “general intelligence,” according to the team at OpenAI.

Since the purpose of Universe is to edge AI closer to human-level intelligence, its virtual training environments simulate how humans use computers: the AI navigates the training exercises with mouse clicks and keystrokes.
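The interaction loop described above, where an agent sees screen pixels and responds with the same keyboard and mouse events a human would produce, can be sketched as follows. This is a minimal illustration, not the real Universe API: `ScreenEnv` is a hypothetical stub environment invented for this example, and the event format is simplified.

```python
# Sketch of a Universe-style observe-act loop. "ScreenEnv" is a
# hypothetical stand-in: the agent observes raw screen pixels and
# acts by emitting keyboard/mouse events.

import random

class ScreenEnv:
    """Hypothetical stub: a 'screen' the agent interacts with."""

    def reset(self):
        # Return an initial observation: a dummy 64x64 grayscale frame.
        return [[0] * 64 for _ in range(64)]

    def step(self, action):
        # Accept a list of input events; return the next frame,
        # a reward, and whether the episode is finished.
        frame = [[random.randint(0, 255) for _ in range(64)]
                 for _ in range(64)]
        reward = 1.0 if ("KeyEvent", "ArrowUp", True) in action else 0.0
        done = False
        return frame, reward, done

env = ScreenEnv()
observation = env.reset()

total_reward = 0.0
for _ in range(10):
    # A real agent would choose events based on the pixels it sees;
    # this toy policy always presses the up-arrow key.
    action = [("KeyEvent", "ArrowUp", True)]
    observation, reward, done = env.step(action)
    total_reward += reward
    if done:
        break

print(total_reward)  # 10.0: every step in this stub is rewarded
```

The point of the design is that nothing in the loop is game-specific: because observations are pixels and actions are generic input events, the same agent code can in principle be pointed at any application a human could operate.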

Universe's style of training might produce better and smarter AI programs, but that doesn't mean it will necessarily produce more human-like AI.

OpenAI partnered with large game makers, like Microsoft and Valve, to provide about a thousand different video games for developers to train with. There are also environments where AI programs can practice using web browsers or spreadsheets, designing with CAD software, or editing photos.

This article originally appeared on Recode.net.
