
Bill Gates: AI is like “nuclear weapons and nuclear energy” in danger and promise

In a Stanford keynote, Gates argues AI can transform medicine and education but warns of risks as well.

Bill Gates at a forum in Shanghai, China, in 2018.
Lintao Zhang/Getty Images
Kelsey Piper
Kelsey Piper is a contributing editor at Future Perfect, Vox’s effective altruism-inspired section on the world’s biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter.

On Monday, Stanford unveiled its new Stanford Institute for Human-Centered Artificial Intelligence. Bill Gates was the keynote speaker, and he spoke in more depth than he has in the last several years about both his fears for AI and his hopes for it.

“The world hasn’t had that many technologies that are both promising and dangerous” the way AI is, he said, according to the Stanford Daily. “We had nuclear weapons and nuclear energy, and so far so good.”

The comparison of AI — which today is mostly employed to show you ads, write stories, play games, and generate photographs — to nuclear weapons might seem overwrought to some. But many experts agree with Gates that it’s warranted. There is a substantial risk that we’ll design powerful AI systems that have unintended behavior — and if they’re deployed carelessly, those experts think we might drive our own species extinct.

Despite those risks, researchers are enthusiastically proceeding with exploration of AI system capabilities, with several significant breakthroughs in just the last few months. Why’s that? There are probably lots of incentives — profit, fame, internal competition — but one motivation is certainly the belief that AI has the potential to have enormous benefits commensurate with its enormous risks. Gates spoke to those aspirations, too.

As for the ways AI has already benefited society, he said, “I won’t say there are that many.” But he sees potential — especially in the areas that he’s dedicated his post-Microsoft life to: health care, education, and global poverty.

He thinks AI can be used to identify promising drugs and speed up the drug-development process, transforming global health. In fact, he argues that AI is already doing that. “If you give kids in some countries an antibiotic once a year that costs two cents called azithromycin, it saves a hundred thousand lives,” Gates said. “I do not believe without machine learning techniques we [would have ever been] able to take the dimensionality of this problem to find the solution.”

He’s hopeful that AI can also transform the field of education, by making it easier for students to have personalized instructor time from AI assistant teachers. He’s hopeful there are insights about education that AI will help us uncover, too. “With everything we have learned about education, you could still say that the best teacher ever had lived 100 years ago,” Gates said. “You could not say that about doctors.” He thinks AI might change that.

The potential benefits mean that no one is going to stop working toward more advanced AI systems. The potential risks mean that it’s essential this be done carefully and responsibly, with a lot of thought put into international coordination, inter-organizational coordination, and policies aimed at ensuring AI is deployed safely and benefits all of humanity.

That fits well with the mission of Stanford’s new Institute for Human-Centered Artificial Intelligence, which aims to bring multidisciplinary expertise to bear on the challenges that AI poses. “It was obvious that not only would AI be foundational to the future — its development was suddenly, drastically accelerating,” co-directors Fei-Fei Li and John Etchemendy wrote in an announcement about the new institute. “We must study and forecast AI’s human impact, and guide its development in light of that impact.”

Stanford University and Gates are in interestingly similar positions here. Both drove the field of computing forward to where it stands today — Stanford with countless top researchers contributing to the development of AI, and Gates as a driver of personal computing at Microsoft. Both are now taking stock of what they’ve wrought — with some pride, but also some apprehension.

Both are now pivoting toward ensuring that technological progress does good rather than harm — nuclear energy, not nuclear weapons. As AI progress speeds along, it’s a more urgent priority than ever.

