

Who do you believe about the end of the world?

The Doomsday Clock is keeping time for a world that no longer exists.

The 2026 Doomsday Clock announcement.
Courtesy of the Bulletin of the Atomic Scientists
Bryan Walsh
Bryan Walsh is a senior editorial director at Vox overseeing the climate teams and the Unexplainable and The Gray Area podcasts. He is also the editor of Vox’s Future Perfect section and writes the Good News newsletter. He worked at Time magazine for 15 years as a foreign correspondent in Asia, a climate writer, and an international editor, and he wrote a book on existential risk.

Not everyone wants to rule the world, but lately it does seem as if everyone wants to warn that the world might be ending.

On Tuesday, the Bulletin of the Atomic Scientists unveiled its annual resetting of the Doomsday Clock, which is meant to visually represent how close the organization’s experts believe the world is to ending. Reflecting a cavalcade of existential risks ranging from worsening nuclear tensions to climate change to the rise of autocracy, the hands were set to 85 seconds to midnight, four seconds closer than in 2025 and the closest the clock has ever been to striking 12.

The day before, Anthropic CEO Dario Amodei — who may as well be the field of artificial intelligence’s philosopher-king — published a 19,000-word essay entitled “The Adolescence of Technology.” His takeaway: “Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political and technological systems possess the maturity to wield it.”

Should we fail this “serious civilizational challenge,” as Amodei put it, the world might well be headed for the pitch black of midnight. (Disclosure: Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic; the foundation has no editorial input into our content.)

As I’ve said before, it’s boom times for doom times. But examining these two very different attempts at communicating existential risk — one very much a product of the mid-20th century, the other of our own uncertain moment — presents a question. Who should we listen to? The prophets shouting outside the gates? Or the high priest who also runs the temple?

Tick, tock

The Doomsday Clock has been with us so long — it was created in 1947, just two years after the first nuclear weapon incinerated Hiroshima — that it’s easy to forget how radical it was. Not just the Clock itself, which may be one of the most iconic and effective symbols of the 20th century, but the people who made it.

The Bulletin of the Atomic Scientists was founded immediately after the war by scientists like J. Robert Oppenheimer — the very men and women who had created the bomb they now feared. That lent an unparalleled moral clarity to their warnings. At a moment of uniquely high levels of institutional trust, here were people who knew more about the workings of the bomb than anyone else, desperately telling the public that we were on a path to nuclear annihilation.

The Bulletin scientists had the benefit of reality on their side. No one, after Hiroshima and Nagasaki, could doubt the awful power of these bombs. As my colleague Josh Keating wrote earlier this week, by the late 1950s there were dozens of nuclear tests being conducted around the world each year. That nuclear weapons, especially at that moment, presented a clear and unprecedented existential risk was essentially inarguable, even by the politicians and generals building up those arsenals.

But the very thing that gave the Bulletin scientists their moral credibility — their willingness to break with the government they once served — cost them the one thing needed to end those risks: power.

As striking as the Doomsday Clock remains as a symbol, it is essentially a communication device wielded by people who have no say over the things they’re measuring. It’s prophetic speech without executive authority. When the Bulletin, as it did on Tuesday, warns that the New START treaty is expiring or that nuclear powers are modernizing their arsenals, it can’t actually do anything about it except hope policymakers — and the public — listen.

And the more diffuse those warnings become, the harder it is to be heard.

Since the end of the Cold War took nuclear war off the agenda — temporarily, at least — the calculations behind the Doomsday Clock have grown to encompass climate change, biosecurity, the degradation of US public health infrastructure, new technological risks like “mirror life,” artificial intelligence, and autocracy. All of these challenges are real, and each in their own way threatens to make life on this planet worse. But mixed together, they muddy the terrifying precision that the Clock promised. What once seemed like clockwork is revealed as guesswork, just one more warning among countless others.

The insider

More than perhaps any other AI leader, Amodei has frequently been compared to Oppenheimer.

Amodei was a scientist first: a physicist by training, he did important work on the “scaling laws” that helped unlock powerful artificial intelligence, just as Oppenheimer did critical research that helped blaze the trail to the bomb. And like Oppenheimer, whose real talent lay in the organizational abilities required to run the Manhattan Project, Amodei has proven to be a highly capable corporate leader.

And like Oppenheimer — after the war at least — Amodei hasn’t been shy about using his public position to warn in no uncertain terms about the technology he helped create. Had Oppenheimer had access to modern blogging tools, I guarantee you he would have produced something like “The Adolescence of Technology,” albeit with a bit more Sanskrit.

This story was first featured in the Future Perfect newsletter.

The difference between these figures is one of control. Oppenheimer and his fellow scientists lost control of their creation to the government and the military almost immediately, and by 1954 Oppenheimer himself had lost his security clearance. From then on, he and his colleagues would largely be voices on the outside.

Amodei, by contrast, speaks as the CEO of Anthropic, the AI company that at the moment is perhaps doing more than any other to push AI to its limits. When he spins transformative visions of AI as potentially “a country of geniuses in a datacenter,” or runs through scenarios of catastrophe ranging from AI-created bioweapons to technologically enabled mass unemployment and wealth concentration, he is speaking from within the temple of power.

It’s almost as if the strategists setting nuclear war plans were also fiddling with the hands on the Doomsday Clock. (I say “almost” because of a key distinction — while nuclear weapons promised only destruction, AI promises great benefits and terrible risks alike. Which is perhaps why you need 19,000 words to work out your thoughts about it.)

All of which leaves the question of whether the fact that Amodei has such power to influence the direction of AI gives his warnings more credibility than those on the outside, like the Bulletin scientists — or less.

What time is it?

The Bulletin’s model has integrity to spare, but increasingly limited relevance, especially to AI. The atomic scientists lost control of nuclear weapons the moment they worked. Amodei hasn’t lost control of AI — his company’s release decisions still matter enormously. That makes the Bulletin’s outsider position less applicable. You can’t effectively warn about AI risks from a position of pure independence because the people with the best technical insight are largely inside the companies building it.

But Amodei’s model has its own problem: The conflict of interest is structural and inescapable.

Every warning he issues comes packaged with “but we should definitely keep building.” His essay explicitly argues that stopping or substantially slowing AI development is “fundamentally untenable” — that if Anthropic doesn’t build powerful AI, someone worse will. That may be true. It may even be the best argument for why safety-conscious companies should stay in the race. But it’s also, conveniently, the argument that lets him keep doing what he’s doing, with all the immense benefits that may bring.

This is the trap Amodei himself describes: “There is so much money to be made with AI — literally trillions of dollars per year — that even the simplest measures are finding it difficult to overcome the political economy inherent in AI.”

The Doomsday Clock was designed for a world where scientists could step outside the institutions that created existential threats and speak with independent authority. We may no longer live in that world. The question is what we build to replace it — and how much time we have left to do so.
