
Elon Musk-Backed Group Attempts to Avert Judgment Day With AI Rules

A research group aims to build a practical and ethical framework for AI -- and wrest the story from the Terminator.


The Terminator is back, and with him, the running quips (usually tongue-halfway-in-cheek) that the machines are getting closer and closer to taking over.

A consortium of artificial intelligence researchers is trying to wrest that narrative away, while laying the foundation for technical and ethical ground rules in the field.

Earlier this week, the Future of Life Institute, a Boston-based group, doled out 37 grants to researchers with projects focused on “keeping AI robust and beneficial.” They range from the highly academic — building probabilistic models for AI software — to the vividly abstract — a philosophic framework for the “human control” of autonomous weapons. The bulk of the funding came from Elon Musk, the Tesla and SpaceX founder, who has pledged $10 million in support.

The grants happened to coincide with the release of “Terminator Genisys,” the newest entrant in the movie franchise created by James Cameron.

“The danger with the Terminator scenario isn’t that it will happen,” Max Tegmark, the FLI president and an MIT physicist, said in a statement, “but that it distracts from the real issues posed by future AI.”

Daniel Dewey, the program officer for grants at FLI, added more context. While the group welcomes the attention the blockbuster brings to AI, it isn’t so pleased with the obsession around Skynet. “I’m glad that science fiction exists so that people get interested in the future. But we are pushing back to a certain extent,” he told Re/code. “We’re interested in creating the research for the problems that do exist.”

Problems around super-intelligent or weaponized robots may arise and compound in the future, Dewey added, but their emergence is not among the near-term issues the group is tackling; we won’t soon have super-smart gun-toting robots.

Instead, the group is aiming to develop criteria for engineering best practices and ethical rules when universities, companies and individuals tool around with advanced machinery.

The foundation gave $136,000 to a University of Denver researcher, Heather Roff, to investigate the deployment of AI-enhanced weaponry (an issue the United Nations has on its radar). Another $200,000 went to an initiative around AI cyber security. A quarter million is set aside for a philosophical project with the audacious title “Aligning Superintelligence With Human Interests.”

“It explicitly motivated research to address problems of reliability and safety and beneficialness, for lack of a better word, before it gets powerful,” Dewey said. “And we think it’s best to do this ahead of time. We don’t want to be thinking about autonomous weapons only when we’re making them.”

Recently, the world’s Internet companies have poured considerable resources into AI and machine learning, as computing power and technical capabilities are starting to catch up with the ambitious futuristic visions of tech founders.

In January, the FLI unfurled its manifesto — along with Musk’s funding pledge — during a convention in Puerto Rico. The open letter was signed by most of the industry’s luminaries, including the research directors of Google and Microsoft; Yann LeCun, the head of Facebook’s AI lab; and Geoffrey Hinton, who leads an AI division within Google. Other co-signers included the three co-founders of DeepMind, the deep learning company Google bought last year.

As a condition of the acquisition, DeepMind reportedly insisted that Google set up an internal ethics board around AI.

This article originally appeared on Recode.net.
