Autonomous Car Crashes: Who — or What — Is to Blame?


Listen to the podcast:

Wharton's John Paul MacDuffie and Carnegie Mellon's Constantine Samaras discuss the road ahead for autonomous vehicles following recent accidents.

The promise of driverless vehicle technology to reduce road fatalities hangs in the balance now as never before. Two recent deaths involving Uber and Tesla vehicles using driverless systems have raised the debate on safety to levels that threaten to significantly delay or derail adoption of the technology.  

Uber has temporarily halted tests of self-driving cars after the latest crash, and so have Toyota and graphics chip maker Nvidia, whose artificial intelligence technology helps power driverless cars. Arizona, where the Uber crash occurred, has barred the company from testing its driverless cars in the state. And even before the latest crashes, California had introduced a permit process for autonomous vehicles with elaborate requirements.

The publicly available information on the two accidents does not clearly place the blame on either human error or technology. On the night of March 18, as Elaine Herzberg crossed a six-lane road in Tempe, Arizona, pushing a bicycle, she was fatally struck by a Volvo SUV that had been modified to use driverless technology. In what is believed to be the first pedestrian fatality involving autonomous vehicle technology, the SUV's sensors failed to detect Herzberg in time to slow the vehicle from its 38 mph speed, and the safety driver in the car was apparently distracted, as police video showed.

Five days later, a Tesla SUV operating in Autopilot mode crashed into a road divider in Mountain View, Calif., killing its driver, Apple engineer Walter Huang. Huang had earlier complained to a Tesla dealership that the vehicle in Autopilot mode had veered toward the same barrier on multiple occasions, according to an ABC report. Tesla blamed the severity of the crash on a missing piece of the road divider; in a later update, the company seemed to blame human error as well.

Wharton management professor John Paul MacDuffie, who is also director of the Program on Vehicle and Mobility Innovation at the school’s Mack Institute for Innovation Management, put the accidents in the context of the evolution curve of driverless technology. “We’re early days yet, and there have been very few of these accidents,” he said. “[The Uber crash] may have been only the second fatal accident and the first of a pedestrian [involving driverless vehicles].”  

MacDuffie observed that while each such death is shocking and tragic, such incidents will become more common. “We’ve all been telling ourselves that inevitably in the testing and improvement of autonomous vehicles there are going to be people injured and killed, and we know that human drivers kill other humans all the time.”

“We’ve all been telling ourselves that inevitably in the testing and improvement of autonomous vehicles there are going to be people injured and killed, and we know that human drivers kill other humans all the time.” –John Paul MacDuffie

What Went Wrong?

“It seems like everything that could go wrong went wrong” in the Uber case, said Constantine (Costa) Samaras, assistant professor at Carnegie Mellon University and director of the university’s Center for Engineering and Resilience for Climate Adaptation. He noted that the sensors on the vehicle should have seen the pedestrian, that the backup or safety driver did not have his hands on the wheel, and that no brakes were applied by either the driver or the car. Both he and MacDuffie said the final government investigation report on the accident will clarify what precisely caused it.

In the Tesla crash, the autopilot had been engaged and it gave the driver warnings of a potential collision, but the driver failed to take control of the vehicle. Samaras said that drivers who use a robotic system or artificial intelligence to assist their driving may be able to prevent crashes. “But when humans are the backup systems, we’re pretty bad at doing that,” he said. “This is a challenge for this transition to automation, where there’s this muddled mixture of human responsibility and robot responsibility.”

MacDuffie and Samaras charted the path ahead for the development of autonomous vehicle technology on the Knowledge@Wharton show on SiriusXM channel 111. (Listen to the full podcast using the player at the top of this page.)

A Human Problem

Distracted driving is already showing up tellingly in the statistics. MacDuffie pointed out that deaths from vehicle accidents had consistently decreased from the post-war period until recently, but began increasing again around 2015. U.S. roads saw 37,461 fatalities in 2016, up 5.6% from 2015, according to a report from the National Highway Traffic Safety Administration. He said indications are that the number of such fatalities rose further in 2017, for which data is not yet available.

“This is a challenge for this transition to automation, where there’s this muddled mixture of human responsibility and robot responsibility.” –Constantine Samaras

Added Samaras: “The more than 37,000 road fatalities last year [are] the equivalent of a fully loaded 747 plane [crashing] every couple of days.”

Automated driving technology has been expected to help reduce the incidence of those accidents, MacDuffie said. However, in “any situation where you’re expecting the human and the computer algorithms to share control of the car, it is very tricky to hand that control back and forth.” He noted that Waymo, the Alphabet subsidiary pursuing driverless technology, has consistently argued against systems in which control of a vehicle is handed back and forth between the driver and the algorithms. The company has instead pushed for a perfected automation technology that eliminates the role of a human driver entirely.

Speeding Past Regulation

MacDuffie wondered if in the case of Uber, a flawed corporate culture was responsible in some part. “It probably could have happened to anyone, but some of the ways Uber has approached this fits other parts of their narrative recently, probably unfortunately for them,” he said. He noted that Uber began testing its driverless vehicles in San Francisco without the requisite permits in December 2016, before California halted them a week later. Uber then took its trials to next-door Arizona, which promised less regulation and a more business-friendly environment, he added.  

MacDuffie pointed to other changes Uber made that he found disconcerting: in tests, it cut the number of safety drivers per vehicle from two to one; it turned off the car’s built-in safety equipment while testing its own software; and, unlike other vehicle manufacturers, it installed LiDAR (Light Detection and Ranging) sensors only on the top of the vehicle, not on the sides as well. Though full details will emerge only with the final investigation report, “there are some aspects of the story that make it look like Uber rushing in and cutting corners may have been part of why they had a failure in this particular incident,” he added.

Being Compassionate and Responsible

According to Samaras, Uber has taken the right step in suspending tests of driverless cars until more information is available. “This is a business risk to both Uber and Tesla,” he said. “If they are seen as neither compassionate nor responsible in making sure that these risks are reduced, the future of their business in automation is in question.”

Not surprisingly, the Uber and Tesla crashes presented an opportunity for rivals to promote themselves. MacDuffie noted that Waymo CEO John Krafcik claimed after the Uber crash that his company’s technology would have spotted the pedestrian and averted the accident. Similarly, the CEO of Mobileye, an Intel-owned company that makes sensors for autonomous vehicles, claimed in a blog post that his company’s technology was superior.

Waymo went further, demonstrating its confidence in the safety of its technology by announcing plans to order 20,000 electric Jaguars for the forthcoming launch of its robotaxi service in the U.S. “I’m sure there was a strategic calculus there of making sure people didn’t automatically think all automated vehicles were dangerous and to make a bold claim that ‘Ours is safer, and we’re moving ahead quickly,’” said MacDuffie.

“Any situation where you’re expecting the human and the computer algorithms to share control of the car, it is very tricky to hand that control back and forth.” –John Paul MacDuffie

The Road Ahead

Samaras called road fatalities “a public health crisis” that automation could help address. “The challenge here is — how do we muddle through this transition period?” The latest incidents have made the road ahead for driverless vehicle technology far less clear. “It is important that these companies test in the real world as well as in simulation,” said Samaras. “We can’t just not test on the streets if we want to have this technology for the benefit of society. [However,] the challenge here is that you have PR, engineering, policy, regulation and risk — all kind of coming together on this [project] on public streets.” The solution, he said, lies in making the data on the accidents publicly available and learning from it.

MacDuffie agreed that it is important that the technology be tested in real-world conditions. “It’s no coincidence that all these companies are finding Arizona and the suburbs around Phoenix very good places to test because they are flat, nice wide roads, with simple intersections, dry weather, sunny weather, a little rain and a little fog,” he said. “Those are very good conditions. The tougher conditions that exist in other places will need to be tested out.”

Regulatory Moves

Well before the latest crashes, state regulators had been progressively tightening rules on the testing of automated vehicle technologies. MacDuffie pointed out that California, which had been criticized for a relatively lax regulatory regime governing autonomous vehicles, now requires companies conducting tests to report every instance in which the automated controls are disengaged for the driver to take over. Records of such disengagement incidents are made public, he noted.

Amid the moves on the regulatory front, a bill stalled in Congress would exempt companies testing automated vehicles from certain federal vehicle safety standards, Samaras said. “The claim is that to outfit these vehicles with the proper safety features [such as] putting a car seat in the back might not be needed if it’s going to just be a test vehicle,” he added. “It’s a pretty laissez-faire kind of bill, and in the name of innovation it’s saying, ‘Let’s move forward quickly with this and let’s not slow it down.’ What happens in the regulatory discussion is worth watching.”

Image: A Tempe, Ariz., police photo from an accident involving a self-driving Uber vehicle in March 2017.
