

The physicist Max Tegmark works to ensure that life has a future

From the return of nuclear war to the danger of AI weapons, Tegmark is tackling humanity’s greatest threats.

An illustration of the head and shoulders of Max Tegmark.
Rebecca Clarke for Vox


Bryan Walsh
Bryan Walsh is a senior editorial director at Vox overseeing the climate teams and the Unexplainable and The Gray Area podcasts. He is also the editor of Vox’s Future Perfect section and writes the Good News newsletter. He worked at Time magazine for 15 years as a foreign correspondent in Asia, a climate writer, and an international editor, and he wrote a book on existential risk.

Most scientists, having delved as deeply into the cosmological mysteries of our universe as Max Tegmark has, would be satisfied. Where else is there to go once you’ve ranged from quantum coherence in neurons to the “Ultimate Ensemble Theory of Everything,” as the Swedish-born MIT physicist and cosmologist has over the course of his career?

But Tegmark isn’t just concerned with the physical and mathematical structure of our world. He’s worried about whether it has a future at all.

Increasingly concerned about the growing threat from advanced artificial intelligence — a subject he covered in his highly readable (for a physics professor) book Life 3.0: Being Human in the Age of Artificial Intelligence — Tegmark in 2014 co-founded the Future of Life Institute (FLI), a nonprofit organization in Cambridge, Massachusetts, that focuses on reducing catastrophic and existential threats from technology. As Tegmark wrote at the launch of FLI, “The coming decades promise dramatic progress in technologies from synthetic biology to artificial intelligence, with both great benefits and great risks.” FLI’s mission would be to ensure that the promise would outweigh the threat.

In creating FLI, Tegmark joined a host of other institutions — such as Oxford’s Future of Humanity Institute and Cambridge’s Centre for the Study of Existential Risk — that arose in recent years to put both a scholarly and an activist lens on the rising danger of existential catastrophes.

But FLI forged a somewhat different path. It was US-based in a field that had begun in the UK, and was adjacent to both MIT and Harvard, giving it a close view on cutting-edge research in both artificial intelligence and synthetic biology, the nascent science of writing and rewriting the code of life.

FLI has also not been shy about the danger of “autonomous weapons” — AI-controlled armaments that represented what some theorists have called the third revolution in warfare, after gunpowder and nuclear weapons. With a savvy sense for media, the group released a pair of faux-documentary, quite scary videos — one in 2017 and one in 2021 — about the coming dangers of “slaughterbots,” robot weapons programmed to kill. Tegmark identified the unique dangers of such weapons, which could lower the barrier to using force precisely because “they’re so small and cheap that they can proliferate,” he told me in 2021.

But if Tegmark often looks to the threats of the future, he also keeps one eye on the past. From its founding, FLI has been loud about the dangers of nuclear war, an attitude that now seems prophetic as the actual use of nuclear weapons seems more possible than it has in decades. Since 2017, FLI has handed out an annual Future of Life Award to a person who has done the most to safeguard the present by helping prevent a global catastrophe in the past. The first winner, in 2017, was Vasili Arkhipov, a Soviet naval officer who during the 1962 Cuban missile crisis refused to authorize the launch of a nuclear torpedo at blockading American ships, thus plausibly averting what could have become World War III.

At the ceremony for Arkhipov, who died in 1998 with little recognition for his heroism, Tegmark said that he was “arguably the most important person in modern history” for his actions preventing a nuclear holocaust. Whether Tegmark himself will make a similar deep mark on history remains to be seen, but he won’t stop ringing the alarm bell.
