
Ultron’s roots: we’ve been worried about robot uprisings for 200 years

Ultron, the robot villain in Avengers: Age of Ultron. Computers may someday realize how desirable it is to sound like James Spader.
Marvel

In theaters around the world, a robot named Ultron is trying to destroy humanity. He’s the star of Avengers: Age of Ultron and the apocalyptic embodiment of the singularity — the moment when artificial intelligence exceeds human intelligence.

Today’s Ultron is undoubtedly influenced by the intellectual outlook of people like Ray Kurzweil, the concept’s biggest living popularizer (the term itself took hold after a 1993 Vernor Vinge lecture used it). Kurzweil and other contemporary thinkers likely shaped the Ultron we see on screen.

But the evil robot appeared in comics long before that: in 1968, when computers were as large as a room. Ultron’s essence played off science fiction — and scientific concepts — proposed since the 18th century. People were already worried about AI let loose, thanks to the work of some trailblazing thinkers.

Paranoia about evil smart machines has been around for 200 years

This is an 1834 version of Babbage's Difference Engine, one of the first computers.

Way back in 1794, mathematician Nicolas de Condorcet wrote about how machines might exceed the progress of the human mind. But it was the scenario put forth in Samuel Butler’s 1872 novel Erewhon that might be the most influential.

Erewhon was adapted from an article, “Darwin Among the Machines,” in which Butler espoused what is now a standard singularity fear, that the machines would take over:

Assume for the sake of argument that conscious beings have existed for some twenty million years: see what strides machines have made in the last thousand! May not the world last twenty million years longer? If so, what will they not in the end become? Is it not safer to nip the mischief in the bud and to forbid them further progress?

Butler lived in a post–Industrial Revolution era, when the prevalence of factories and railroads prompted a lot of examination of machines’ increasing influence. That made Erewhon a popular success and a template for how to think about the rise of the machines.

As computers advanced from simple adding machines to devices able to do more complex calculations, those fears about the singularity only worsened. Alan Turing himself later referenced Butler’s novel in his own singularity predictions, writing, “At some stage therefore we should have to expect the machines to take control in the way that is mentioned in Samuel Butler’s Erewhon.”

Alan Turing kicked off the 1950s and ’60s fears about machines taking over

A young Alan Turing.

In 1951, Turing presented a paper titled “Intelligent Machinery: A Heretical Theory.” Though Turing became famous for cracking codes in World War II, inventing early computers, and laying the groundwork for the Turing test (which an artificial intelligence passes by successfully seeming to be human), he also introduced the concept of the singularity to a large popular audience.

In “Intelligent Machinery,” Turing imagined a theoretical computer that experienced an exponential increase in intelligence: by learning from experience, the machine could go from simple to complex in a matter of moments. As he wrote in the paper’s conclusion, “It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers.”

Shortly after the paper’s publication, Turing discussed it on the BBC’s Third Programme, sparking a broader contemporary debate about the singularity (though much of the public focused on the unique abilities of humans rather than the threat of an AI takeover).

That theme was later taken up by Turing’s colleague Irving (I. J.) Good, who worked with him as a cryptologist during World War II (decoding the famous Enigma machine). Good’s papers “Speculations Concerning the First Ultraintelligent Machine” and “Logic of Man and Machine,” both published in 1965, furthered the conversation about the singularity and put a finer point on the potential threat of robots: smart machines could build smarter machines.

As Good wrote in “Speculations Concerning the First Ultraintelligent Machine”:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.

He then goes on to explain why computers will end up being better than humans at ... almost everything.

All that shaped 1968, when Ultron combined old and new fears about robots

Ultron's mask comes off in Avengers No. 55.

We know that these ideas quickly gained currency in the wider world, specifically in science fiction circles. Stanley Kubrick hired Good as a consultant on 1968’s 2001: A Space Odyssey. That movie featured its own malevolent AI, HAL, which fit into a science fiction landscape where robots were becoming increasingly popular. And that doubtless set the stage for a comic book robot, especially since 2001 came out three months before Ultron showed up.

Ultron was a mishmash of influences: pop psychology played a big part, as did the psychological issues of Ultron’s creator, Henry Pym, also known as Ant-Man (in the current movie, Ultron’s creator is Tony Stark). The writers have also said that Ultron’s physical appearance was inspired by a character called Mechano from Captain Video. But it was still an unmistakably paranoid vision of the singularity: in an early flashback, the robot exponentially progressed in intelligence, calling its creator “Da Da,” then “Dad,” and finally “Father” within a matter of seconds.

By using Henry Pym’s brain patterns as a starting point, Ultron followed the template of old stories in which inanimate objects might be possessed by human consciousness, like Pinocchio or Frankenstein. But Ultron also incorporated the modern fear in which machines developed their own consciousness: the singularity.

Ultron unplugs himself from the system.

The creation of Ultron was a collaborative project, and his success was collaborative as well. It would be absurd to suggest that Ultron’s real-life creators and his comic book readers had all plowed through Irving Good’s academic papers or been schooled in the classic vision of Erewhon. But those ideas had rippled through the public consciousness in many ways, from science fiction to politics, and Ultron succeeded because the concepts resonated so clearly.

And because of the quickly evolving ideas about the singularity, the vision of Ultron that took over was the one in which he was a representation of the growing presence — and potential threat — of artificial intelligence. It’s that unique philosophical grounding that makes him resonant, and frightening, today.

Update: A reader notes that any history of robotic singularity paranoia would be incomplete without a mention of R.U.R. (Rossum’s Universal Robots). The 1920 play is generally credited with inventing the word “robot” and as an early portrayal of robots taking over.
