
In self-driving-car crashes, most people think automakers should be liable

A survey of 50,000 people showed that more than half want steering wheels in their self-driving cars and 79 percent think the automakers should be liable if the cars crash.

Geneva Motor Show 2016. Photo by Harold Cunningham/Getty Images

When it comes to self-driving cars, a lot has yet to be determined — much of it concerning consumer trust and safety. From whether robot-driven cars need steering wheels to who is liable when one crashes, these unanswered questions are at the center of ongoing debate in the transportation industry.

While the industry and regulators have yet to land on a definitive standard, the people have certainly spoken. In a Volvo survey of 50,000 people around the world, 79 percent said carmakers should assume liability in the case of crashes, and 55 percent said they wanted a steering wheel in their self-driving cars.


That’s good news for Volvo — and likely why the company highlighted these findings — because in 2015, Volvo became the first automaker to pledge to assume liability for self-driving accidents. Volvo’s U.S. CEO Lex Kerssemakers also told Recode the company not only favors rolling out semi-autonomous technology incrementally, but also thinks people should be able to switch between autonomous and manual driving. Both require steering wheels so that humans can take over.

There’s still debate over whether steering wheels and semi-autonomous technology actually make self-driving cars more dangerous, given that they reintroduce the potential for human error into the equation.

This article originally appeared on Recode.net.
