
Uber’s self-driving software detected the pedestrian in the fatal Arizona crash but did not react in time

Both the company’s internal investigation and the federal investigation are ongoing.

Screengrab from video taken inside Uber’s self-driving car

As part of its ongoing preliminary internal investigation, Uber has determined that its self-driving software did detect the pedestrian who was killed in a recent crash in Arizona but did not react immediately, according to The Information.

The software detected Elaine Herzberg, a 47-year-old woman who was hit by a semi-autonomous Volvo operated by Uber, as she was crossing the street, but it did not brake right away. That’s in part because the technology was tuned to react more slowly to objects in its path that might be “false positives” — such as a plastic bag.

Both Uber and the National Transportation Safety Board launched investigations into the crash to determine whether the software was at fault. Both investigations are ongoing. But people who were briefed on some of the findings of the investigation told The Information that the software may have been the likely cause of the crash.

Self-driving companies are able to tune their technology to be more or less cautious when maneuvering around obstacles on public roads. Typically, when the tech — like the computer vision software that detects and classifies objects — is less sophisticated, companies will make the vehicle overly cautious.

Those rides can be clumsy and filled with hard brakes as the car stops for everything that may be in its path. According to The Information, Uber adjusted the system so it didn’t stop for potential false positives; as a result, it was unable to react immediately to Herzberg despite detecting her in its path.
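To illustrate the trade-off described above — this is a hypothetical sketch, not Uber’s actual software — braking logic that filters false positives often comes down to a confidence threshold: raise it and the car stops hard-braking for plastic bags, but it also waits longer before braking for a real pedestrian. The function name, confidence values, and thresholds below are invented for illustration.

```python
def should_brake(detection_confidence: float, threshold: float) -> bool:
    """Brake only when the detector's confidence that an object is a real
    obstacle meets or exceeds the tuning threshold."""
    return detection_confidence >= threshold

# A cautious (low) threshold brakes even for a low-confidence detection,
# such as a wind-blown plastic bag — producing the clumsy, brake-heavy rides
# described above.
print(should_brake(0.30, threshold=0.25))  # brakes for a likely false positive

# A less reactive (high) threshold ignores that same detection, but it also
# delays braking until confidence in a real obstacle has climbed.
print(should_brake(0.30, threshold=0.60))  # ignores the same detection
print(should_brake(0.85, threshold=0.60))  # brakes only once confidence is high
```

In a real perception stack the decision involves object tracking, trajectory prediction, and time-to-collision estimates rather than a single scalar, but the core tension — fewer phantom stops versus slower reaction to genuine hazards — is the same.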

Have more information or any tips? Johana Bhuiyan is the senior transportation editor at Recode and can be reached at johana@recode.net or on Signal, Confide, WeChat or Telegram at 516-233-8877. You can also find her on Twitter at @JmBooyah.

Uber has halted all its self-driving tests on public roads and has hired the former chair of the NTSB, Christopher Hart, to help assess the safety protocols of its self-driving technology.

The company also said it was unable to comment on the investigation, as it’s against NTSB policy to reveal any information unless it has been vetted by the agency.

“We have initiated a top-to-bottom safety review of our self-driving vehicles program, and we have brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture,” an Uber spokesperson said in a statement. “Our review is looking at everything from the safety of our system to our training processes for vehicle operators, and we hope to have more to say soon.”

Herzberg’s death has ushered in an important debate about Uber’s safety protocols, as well as a broader debate about the safety of testing semi-autonomous technology on public roads. For example, companies typically have two safety drivers — people trained to take back control of the car — until they are completely confident in the capability of the tech. Uber, however, had only one vehicle operator.

That’s in spite of the self-driving technology’s slow progress relative to that of other companies, like Waymo. As of February 2017, Uber’s vehicle operators had to take back control of the car an average of once every mile, Recode first reported. As of March 2018, the company was still struggling to meet its goal of driving an average of 13 miles without a driver having to take back control, according to the New York Times.

Alphabet’s self-driving company, Waymo, had a rate of 5,600 miles per intervention in California. (At the time, Uber pointed out this is not the only metric by which to measure self-driving progress.)

But even with multiple vehicle operators, it’s unclear how dependable humans can be as a backup to a technology that is not yet fully developed. As CityLab previously reported, some Uber safety drivers shared those concerns.

This article originally appeared on Recode.net.
