
Jason Matheny is helping humanity prepare for the existential threats of the future

From AI to bioengineered risks, Jason Matheny studies what governments will face in the coming years.

An illustration of Jason Matheny.
Rebecca Clarke for Vox

Jason Matheny has been described as an “apocaloptimist” — which, according to Matheny, means he sees “that we’re on a really good trajectory, if we can just avoid any threats to our existence.” The blend of hope for a better future alongside an intense focus on potential threats and barriers to that future is a hallmark of his work for the last decade.

Matheny started out at the Future of Humanity Institute (FHI), a research center at Oxford University that studies existential threats to humanity, whether from artificial intelligence, bioengineered pandemics, or more exotic dangers. Studying the future has naturally put Matheny's work ahead of the curve, and it has shown serious staying power: his 2007 paper on reducing the risk of human extinction, in which he argues that investing in nearer-term problems like world hunger could indirectly reduce the risk of catastrophic global threats, is still being cited to this day.

Because of his unique understanding of existential risk, Matheny joined the US intelligence community to modernize its perspective on what risk could be. In 2009, Matheny left FHI for the Intelligence Advanced Research Projects Activity (IARPA), the US intelligence community’s version of DARPA. IARPA invests in a wide range of cutting-edge speculative research projects in areas like AI and synthetic biology, including a tournament on geopolitical forecasting for national intelligence, which Matheny helped run from 2010 to 2015.

In 2018, he moved to the National Security Commission on Artificial Intelligence, an independent US commission that advised Congress on how AI affects national security. Around the same time, he founded the Center for Security and Emerging Technology (CSET) at Georgetown University, an organization likewise aimed at providing data-driven recommendations to US policymakers on developments in artificial intelligence.

“There are a range of challenges related to AI, but national security is a critical area of focus,” Matheny said of his work at CSET, citing “cybersecurity, intelligence, and systems for analysis and collection, as well as AI that is embedded in weapon systems of competing nations” as particularly key issues.

In July, Matheny became CEO of the Rand Corporation, the venerable California-based policy think tank that funds research on technology, infrastructure, health care, energy, climate, and many other areas. He’s especially focused on preventing “truth decay” — the decreasing trust in facts and data within the American political debate — and how, across the board, this decay could hold back efforts to improve policy. He still prioritizes preventing technological catastrophe while remaining hopeful that technology can, if used cautiously, solve rather than cause more problems.

“We now have a moment where we need to think about what will define the next 75 years,” Matheny says. “If you could read a history book in the year 2098, what are going to be the key themes, the highlights?” He adds that he hopes the histories will include Rand “reducing the risk of human extinction by .00000001 percent or greater. Hopefully greater.”
