Debate over the safety of autonomous vehicles intensified after a Tesla car in self-driving mode failed to detect a tractor trailer and crashed into it in Williston, Fla., in May, killing its driver. While self-driving technology could potentially reduce human fatalities, regulatory consistency across U.S. states will boost innovation and thereby safety, said Wharton management professor John Paul MacDuffie. He is also director of Wharton’s Program on Vehicle and Mobility Innovation at the School’s Mack Institute for Innovation Management.
MacDuffie discussed the emerging scenario for driverless cars on the Knowledge at Wharton show on Wharton Business Radio on SiriusXM channel 111.
Here are five key takeaways from MacDuffie on the debate on driverless cars (jump to the corresponding spot in the podcast using the time codes provided):
- A rash of questions: Was Tesla too hasty? “Tesla, as they often have, pushed the envelope a bit in claiming that autopilot was ready for use by drivers,” said MacDuffie, while noting that the company also issued cautions about how that system was intended to be used. “Would one or more fatal accidents be … a Hindenburg Moment?” he wondered, referring to the 1937 crash of the Hindenburg airship in Manchester, N.J., which ended the prospects of the Zeppelin type of aircraft (04:15). Can these new technologies do better than human drivers? “There are a lot of reasons to think ‘yes,’” he said (04:45). Another question is about “the public and regulatory tolerance for deaths” as this technology advances.
- Adopting the Silicon Valley approach: In Silicon Valley, “the way software gets better is you test it, find bugs and you fix the bugs,” said MacDuffie, advocating that approach for driverless technologies (06:18). “You are going to have bigger failings and smaller failings, but finding them is the key, so you can fix them.”
- The need for regulatory consistency: Car makers want consistency across states in the regulation of driverless technologies, and have requested federal guidelines to make it easier for them to develop the technology, noted MacDuffie (08:30). That will build on the Silicon Valley approach to testing and fixing bugs, he added. “When Detroit meets Silicon Valley, will we get better and faster movement towards the open standards that allow innovation [and] yet support regulatory goals and rapid diffusion and testing of this technology?” (18:10)
“Would one or more fatal accidents be … a Hindenburg Moment?” –John Paul MacDuffie
- Which automation level should we aim for? MacDuffie raised “a broader, strategic issue” in the debate about whether driverless cars ought to skip the phase that requires drivers to stay alert and move to a truly autonomous level (09:50). He referred to a five-level framework adopted by the U.S. National Highway Traffic Safety Administration (NHTSA). In Level 3 of that system, drivers let their vehicles take control but must be ready to jump back in at any time. In Level 4, the vehicle is “truly autonomous,” while Level 5 has to do with smart infrastructure that can communicate with vehicles, he added. Levels 1 and 2 deal with features like cruise control and alerts when cars drift out of their lanes. “There is a big debate out there about whether it is a good idea to aim for Level 3 at all, or to skip over it and go to Level 4,” he noted.
- Will automakers and regulators work together? MacDuffie highlighted “the usual dance, or standoffs,” between the auto industry and regulators, noting that the former has resisted safety technology over the years, from seatbelts and airbags to compliance with the Clean Air Act. “[Will] there be a bit more confluence of interest [in driverless technologies] between car companies and regulators?” (17:05) He hoped so, given the “overwhelming public safety benefits” of such technology, as well as the development of interoperability among different cars operating on the same software.