Wharton management professor John Paul MacDuffie describes the state of play — and the future — of the self-driving car industry.

Many companies are trying to crack the code for self-driving cars, which could one day help reduce deaths from traffic accidents. But when? In this wide-ranging interview, Wharton management professor John Paul MacDuffie looks at the major issues. He notes that despite the hype suggesting that autonomous vehicles will arrive within a couple of years, full autonomy for all vehicles is many decades away. “In the next five years, there will be lots of pilot projects and testing, so companies can learn from real-world data and the public can learn about the technology. By 2030, autonomous vehicles will be common in some settings and for some uses. But the roads will still be a complex mix of human-driven and algorithm-driven vehicles.” MacDuffie, who is also director of the Program on Vehicle and Mobility Innovation at Wharton’s Mack Institute for Innovative Management, added: “Throughout, diffusion will be erratic — moving fast at times, slowed up by unexpected constraints at other times. But we’ll feel like [autonomous vehicles] are part of our lives, at least partially, within the next five to 10 years.”

An edited transcript of the conversation follows.

Knowledge at Wharton: Everyone agrees there are too many traffic deaths annually, but now we’ve seen some fatalities with self-driving cars. As you put it in your paper for the Penn Wharton Public Policy Initiative, “The Policy Trajectories of Autonomous Vehicles”: “What risks are we willing to accept to advance technology?”

John Paul MacDuffie: I always start with one fundamental fact about the automobile and the automotive industry, which I think affects a huge number of things about how that industry is organized and how it is part of our lives. Automobiles are big, heavy, fast-moving, dangerous objects that operate entirely in public space.

Hardly any other product, apart from other forms of transportation, fits into that category. And so society, in the form of laws and regulations, has wanted to ensure the safety of the automobile pretty much ever since the industry was organized. And when you look around the world, as GDP rises to a level where people start to buy cars, pretty soon after that country after country puts in similar kinds of laws — at least about safety, not always about emissions and things like that.

So when an iPod or an iPad or an iPhone malfunctions, it’s not dangerous, typically, and it affects the private user — but not the public. Cars are fundamentally different. So a role for society in regulating these vehicles is appropriate. It continues what we’ve always done.

The promise of this technology is huge. In the U.S. in 2016, there were close to 38,000 deaths from automobile accidents. If you look at the post-World War II trend on deaths, [the number of fatalities] fell pretty steadily, with the bigger improvements coming when seatbelts came in, when airbags came in, etc. Then the years 2014 to 2016 saw a 5% jump, which is really — when you look at the data — a surprising reversal after a long tapering down.

“Even though the auto companies have been working hard to make sure you don’t actually look at your phone screen … people are clearly doing it.”

Pretty much everybody agrees it’s because of distracted driving — people on their phones. And even though the auto companies have been working hard to make sure you don’t actually look at your phone screen — you’ve got Bluetooth and other kinds of things — people are clearly doing it. We need to help this technology advance because the public health consequences alone are huge. At the same time, we want to be careful during the testing phase to make sure we’re not introducing new, unreasonable dangers.

Knowledge at Wharton: Tell us about the five levels of self-driving car autonomy.

MacDuffie: These levels of autonomy come from the Society of Automotive Engineers (SAE). It was first a U.S. organization, and now it’s international. They’ve tried to get out in front with some definitions, and I would say that regulators — [for example] NHTSA (the National Highway Traffic Safety Administration), the U.S. agency that governs car safety — use the SAE International categories. The auto companies use them. You know, whether people always use the terms precisely, or whether we can even precisely define each level, it’s a moving game with a lot of new technology. But I can certainly set out the parameters.

So let’s start off with Level 0, being no automation at all. Level 1 automation is something like simple cruise control that most of us have seen in cars for a very long time, so it has to be set by a human and has to be monitored closely. You know, put your foot on the brake, and the cruise control stops — very basic technology. And so if you have somebody saying, “Almost no cars are automated,” then you’d have to say, “Well, if you’re talking Level 1, then a lot of cars are.”

Level 2 covers more advanced features of what is sometimes called “advanced driver-assistance systems,” or ADAS — an acronym some people may have seen. So many of the features in new cars now fall here — whether it’s a beep if you drift out of your lane, or something that manages the following distance between you and the car in front. There are even some technologies that may soon be mandated — at least in the U.S. — for automatic braking when somebody jams on the brakes in front of you. That’s Level 2. So that’s fairly advanced. Still, the human driver is fully responsible but is getting help from these kinds of features.

Let me jump to the other extreme. Level 5 is probably the most utopian, and it’s sort of there to define the imaginable future, but the “we-can’t-say-when” future. That’s when each vehicle is completely self-driving in all situations, at all times. There’s probably not a single control in the vehicle where a human could do anything at all. So whether you’re approaching the Arctic Circle or you’re driving through a field to get to a family picnic, where there are no maps and no landmarks, it would be able to handle all those situations.

In between are Level 3 and Level 4. This is a really interesting debate. For Level 3, the idea is that the automated system does most of the work, but the human driver has to be ready at any time to jump in and take control. So it’s saying, “Some things are really tough. We want the human driver to make the judgment call, but we can handle most everything else.” Level 4, by contrast, is fully self-driving with no human fallback, but only within a defined operational domain: certain areas, routes, or conditions.
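For readers who want the taxonomy at a glance, here is a minimal Python sketch; the one-line descriptions paraphrase the discussion above, not SAE’s official J3016 wording.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE driving-automation levels, paraphrased from the discussion above."""
    NO_AUTOMATION = 0   # human does everything
    DRIVER_ASSIST = 1   # e.g., simple cruise control, set and monitored by the human
    PARTIAL = 2         # ADAS features: lane-departure beeps, following distance, auto-braking
    CONDITIONAL = 3     # system drives most of the time; human must be ready to take over
    HIGH = 4            # fully self-driving, but only within a defined operational domain
    FULL = 5            # self-driving in all situations, at all times

def human_is_fallback(level: SAELevel) -> bool:
    """Through Level 3, a human must still be ready to drive; only at
    Levels 4 and 5 can attention be withdrawn entirely."""
    return level <= SAELevel.CONDITIONAL
```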

Well, the Uber crash was absolutely a Level 3 situation, and the fatal Tesla accident that followed (the driver was an Apple engineer, actually) was the same thing. So you’ve got a human who supposedly knows that he’s in charge — or she’s in charge — and then fails to act.

In both fatal crashes, the automated systems also failed to act. There was no evidence of brakes being applied by either the human or the vehicle. The kinds of technologies that the believers in Level 3 would have us focus on are things that may track eye motion. And so if it looks like you’re drifting off to sleep or getting distracted, it would beep — or, in an extreme version, even force you to take control again. You can imagine flashing lights. You can imagine beeps. You can imagine vibrations in the seat. All of these are the kinds of things that the technologists want to use to grab your attention and bring it back.
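How might such an escalation ladder look in code? A rough sketch follows; the thresholds and alert names are invented for illustration, not drawn from any production driver-monitoring system.

```python
# Hypothetical escalation ladder for a Level 3 attention monitor: the longer
# the driver's gaze stays off the road, the stronger the alert.
ESCALATION = [
    (1.0, "beep"),               # after 1 second of inattention: audible beep
    (2.5, "flash_and_vibrate"),  # after 2.5 s: flashing lights plus seat vibration
    (5.0, "forced_takeover"),    # after 5 s: slow the car and demand manual control
]

def alert_for(inattentive_seconds: float) -> str | None:
    """Return the strongest alert warranted so far, or None if the driver
    still appears attentive."""
    alert = None
    for threshold, action in ESCALATION:
        if inattentive_seconds >= threshold:
            alert = action
    return alert

assert alert_for(0.5) is None
assert alert_for(3.0) == "flash_and_vibrate"
```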

In an interview I had with someone who runs the Stanford lab and who works on these things, he said, basically, “What if you’re deep in thought about something?” He gave an example: he’d been helping his son with some book about building medieval cathedrals. So your mind is not just distracted from driving. Your mind is in 14th-century France. And all of a sudden this beeping happens, and you have to snap back to attention, comprehend the situation, and do the smart, proper driving thing. And by the way, if we’re doing less actual driving, our driving skills are probably deteriorating. What are the odds that you’re going to make the right choices very often?

“You’ve got a human who supposedly knows that he’s in charge … and then fails to act.”

And that’s what the Level 3 skeptics say. Google, whose self-driving subsidiary is now known as Waymo, has gone as far as to say they believe Level 3 is an infeasible engineering solution. So they’ve got smart engineers who are saying, “Impossible. Can’t be done,” because of the limits on how quickly human attention and consciousness can be re-engaged. Audi, on the other hand, has just released the A8, which is their most advanced sedan, and is announcing a Level 3 option. They’re calling it “the first Level 3 vehicle.” But they can’t turn the option on, because at the moment there’s not a single place in the world where it’s legal. They first said they were going to roll it out in Australia. Out in the wilds of Australia, where there are hardly any people, they thought they could get away with it. But Australia hasn’t approved it yet, either.

So some strategies are premised on Level 3 being impossible. And there are others — and I think it’s mostly the car companies with confident engineers — that are saying, “No. We did Level 1. We did Level 2. We’re going to be able to do Level 3. And eventually we’ll get to 4 and 5.” We’ll have to see. But when these accidents happen under Level 3 conditions, it certainly gives you pause.

Knowledge at Wharton: How should we protect the public?

MacDuffie: At first it was states taking action to decide whether or not to regulate these tests. Certain states approved these tests. Certain states turned them down — at least at the state legislative level. Apparently the general presumption with new technology is that if it’s not specifically prohibited, it’s kind of allowed until somebody decides it’s enough of a problem to try to ban it.

I don’t think any of the early testing that happened without laws was very risky from a legal point of view. But now, of course, it’s a lot further along. The current federal legislation — there was a House bill and a Senate bill. The House bill passed unanimously, really fast, with hardly any debate. And now that bill is kind of stuck in the Senate. But what the House bill basically said is, “We think this is so important that we want to encourage testing as much as possible, so we’re going to prohibit the states from setting their own rules. We’re going to have a preemptive federal law.”

And then in terms of the federal law, they basically said, “We’re not going to have any rules.” In fact, regarding the FMVSS — the Federal Motor Vehicle Safety Standards, which every single car on the road is required to meet — they basically said, “You companies doing these tests, you’re exempted from those. You can exempt 25,000 test vehicles a year, and after three years, that number rises to 100,000.” At the moment, there are 24 companies that have registered in the state of California to do testing. So 24 times 100,000 (2.4 million) is a decent number of vehicles to be entirely exempted from FMVSS.

So that’s where there’s some pushback. Have we gone too far — is it too laissez-faire to say you don’t need to have any controls at all? I don’t think these companies would deliberately do things that were highly unsafe. What somebody from a company said to me is, “You have to meet laws for how you fasten car seats in the back seat of a regular vehicle. We’re not going to be putting car seats into these test vehicles any time soon, and so it makes sense that we shouldn’t have to worry about that when we’re doing our test vehicles.”

You get a sense of the debate over that. I think that because it’s stuck in the Senate, it’s stuck a little bit over whether they gave the companies too much leeway in terms of safety. My guess is that whatever eventual bill gets passed and signed will have more safety standards, and they’ll be consistent across the U.S. And for any company that’s trying to figure out what to do with the new technology — they want to roll it out in different places — a consistent standard across all states makes a lot of sense.

My colleague Sarah Light, [a Wharton professor of legal studies and business ethics], has written a lot about the advantages of federalism, the principle that leaving states free to experiment with different kinds of policies is a healthy thing for the way the U.S. operates and learns effective policymaking. And I tend to agree with her. I don’t know if safety is the best place for that, but there may be room for states and cities, hopefully, to be encouraged to allow a lot of different kinds of tests. So what safety equipment is in the vehicle? That’s the first thing. And then there are also the conditions under which you allow the testing.

People will start to hear this term “geofencing,” which means you set a virtual boundary that fences off where autonomous vehicles can operate, away from other vehicles. You could imagine sections of interstate highway where self-driving trucks or other kinds of new trucking technologies would be allowed, and at the end of the zone, human drivers would have to resume control. You could imagine cities that might dedicate certain roadways to allowing these tests.
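In software terms, a geofence is just a boundary test. Here is a minimal sketch using the standard ray-casting point-in-polygon check; the coordinates are made up for illustration.

```python
def inside_geofence(lat: float, lon: float, fence: list[tuple[float, float]]) -> bool:
    """Ray-casting test: does the point (lat, lon) fall inside the polygon
    whose vertices are given as (lat, lon) pairs?"""
    inside = False
    n = len(fence)
    for i in range(n):
        y1, x1 = fence[i]
        y2, x2 = fence[(i + 1) % n]
        if (y1 > lat) != (y2 > lat):  # this edge crosses the point's latitude
            # longitude at which the edge crosses that latitude
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# Hypothetical zone on a stretch of highway where autonomous mode is allowed:
av_zone = [(33.30, -111.97), (33.30, -111.84), (33.45, -111.84), (33.45, -111.97)]
assert inside_geofence(33.40, -111.90, av_zone)      # inside: autonomy may engage
assert not inside_geofence(33.50, -111.90, av_zone)  # outside: human resumes control
```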

A colleague told me about an interesting idea in Japan, where a lot of elderly people live in remote rural villages and have real problems with transportation. They rely on a bus that comes a couple of times a day, and that’s the only way they can get places for health care appointments or other things. There are proposals to dedicate certain back-country roads exclusively to self-driving vehicles that would bring people to care, whether in emergencies or in routine situations.

So geofencing is another way to deal with the safety challenge. And I’m guessing that letting states and cities come up with different forms of that is great, because that’s how we’ll learn and advance both the technology and the policy.

Knowledge at Wharton: When it comes to safety, one idea is that all cars have to communicate with each other. The other camp uses simpler technology.

MacDuffie: People in transportation or transportation planning throw out concepts like “intelligent highways” and “intelligent transportation systems”: the idea of making infrastructure smart and having vehicles communicate with each other — all to facilitate maybe self-driving, maybe better ways of handling congestion, cars being able to travel closer together. Those ideas have been around for a long time.

“Have we gone too far — is it too laissez-faire to say you don’t need to have any controls at all?”

There have been various small demonstration projects, but it has never really taken off. There are two big barriers to that approach. First, when you want vehicles to communicate with each other, you’ve got to get that technology into all the vehicles if you want it to be effective. You can start installing it in new vehicles, but you’ve got a massive number of [older] vehicles already on the road.

The other is that you have to agree on a standard. Standard-setting has been a complicated process, whether it’s in telecom or in the computer world. And it’s no less complicated — maybe more complicated — in the world of cars. At the moment there is a short-range Wi-Fi-like standard called DSRC, which stands for Dedicated Short-Range Communications. The federal government thinks that this technology is ready and could be installed in vehicles, and they could start having at least some ability to communicate with each other.

But most folks I talked to in the auto industry say that technology is not even close to adequate to handle the amount of data that’s actually going to be needed for safety — and safety is only part of it. We’re going to have internet connections to our cars. We’re going to have services and various things that we’re consulting. The auto companies — and the tech companies — are [talking about] 5G, the next standard up in the telecom world. It isn’t out yet. We don’t know when it will be out. Dedicating some of the 5G spectrum to automotive purposes is something that people say can be done, but that has to be worked out, too. So there are a lot of problems of interoperability, installing the technology, etc., for the vehicle-to-vehicle piece.
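For a sense of what vehicle-to-vehicle messages carry, here is a deliberately simplified sketch. The real DSRC message set (SAE J2735, whose core is a Basic Safety Message broadcast several times a second) has many more fields and a compact binary encoding; the JSON here is purely illustrative.

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class BasicSafetyMessage:
    """Greatly simplified stand-in for a V2V status broadcast."""
    vehicle_id: str
    timestamp: float    # seconds since the epoch
    lat: float
    lon: float
    speed_mps: float    # meters per second
    heading_deg: float  # 0-360, clockwise from north
    braking: bool

def encode(msg: BasicSafetyMessage) -> bytes:
    """Serialize for broadcast; a real radio stack would use a binary codec."""
    return json.dumps(asdict(msg)).encode()

# Each equipped vehicle would broadcast something like this many times a second:
packet = encode(BasicSafetyMessage("veh-042", time.time(),
                                   33.41, -111.93, 27.0, 182.5, braking=False))
```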

Now what about smart infrastructure? Why aren’t we putting wires in bridges, putting sensors in the roadways — all of that to let us detect decay in roadways, but also congestion and other kinds of problems?

We are way behind in the U.S. in investing in basic infrastructure, as everybody knows. So whether that’s fixing potholes or repairing highway roadways or crumbling bridges, the money for infrastructure has always come from governments. Government budgets have been strapped. To do the kind of massive investment you’d need to have smart infrastructure everywhere you’d want to use it — I don’t know anybody who sees it as politically feasible in the U.S. or anywhere else.

Even with promising technologies and some future of self-driving cars, we get stuck on these barriers. Consider DARPA (the Defense Advanced Research Projects Agency), the part of the Defense Department that experiments with new technologies. They held a series of challenges in which self-driving cars compete. University teams entered and worked with car companies and with tech companies. I think a team from Carnegie Mellon working with GM won the Urban Challenge.

But a kind of breakthrough … was to say, “Hey, let’s put $300,000 or $500,000 worth of cool hardware on a car. Let’s add some really smart algorithmic software.” By the way, this happened in deserts and places like that, where there were no pedestrians. “And let’s add 3D mapping” — very fine-grained mapping of the sort that Google was already in a position to do — “and we don’t have to communicate with anything. We don’t have to communicate with the other car. We don’t have to communicate with the surroundings. We can really be fully autonomous, like each vehicle is its own completely capable cell, taking in the environment for driving purposes.” And on that basis, all the advances we’ve seen in the last five years have happened. It has all been that kind — each vehicle is self-sufficient.
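In skeleton form, that self-contained approach is a sense-plan-act loop over onboard sensors and a prebuilt map, with no communication off the vehicle. The sketch below is purely illustrative; the four callables stand in for the proprietary algorithms each company supplies.

```python
class SelfSufficientVehicle:
    """Skeleton of the fully self-contained approach: sensors, maps, and
    algorithms all ride on board; no vehicle-to-vehicle or
    vehicle-to-infrastructure communication is required."""

    def __init__(self, hd_map, sensors, perceive, localize, plan, actuate):
        self.hd_map = hd_map    # prebuilt, fine-grained 3D map
        self.sensors = sensors  # LIDAR, cameras, radar: the costly onboard hardware
        self.perceive = perceive
        self.localize = localize
        self.plan = plan
        self.actuate = actuate

    def drive_step(self):
        scene = self.perceive(self.sensors.read())        # model the surroundings
        pose = self.localize(scene, self.hd_map)          # match the scene to the map
        trajectory = self.plan(pose, scene, self.hd_map)  # choose a path
        return self.actuate(trajectory)                   # steering, throttle, brake
```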

There are some people at Penn Engineering whom I really trust on this topic. They say, “Look, we’re going to move very quickly for quite a while in being able to automate driving situations with this approach — each vehicle being completely self-sufficient. Then we’re going to get to the really hard stuff. And when we get to the really hard stuff, we’re going to be stuck, because then we’ll have to do vehicle-to-vehicle or vehicle-to-infrastructure. And if we haven’t made any progress on either of those, it’s going to be kind of like the brakes get slammed on the whole thing until we solve that.” So that’s pretty much where I am on the issue, too.

Knowledge at Wharton: How might the infrastructure issue get solved?

MacDuffie: It depends a little on the pace of diffusion you either expect or think is wise. [Think] of a fairly long period of progressively ambitious experiments that are done in different parts of the country. The Phoenix area has been a place where Waymo, but also Uber, have done a lot of their tests. The city government and the state government said, “Come down, and we’ll approve it.” But you’ve also got flat suburban streets there.

“Geofencing is another way to deal with the safety challenge. And I’m guessing that letting states and cities come up with different forms of that is great.”

Knowledge at Wharton: Phoenix is dry.

MacDuffie: Wide, dry, never foggy, it hardly ever rains — and those are just really easy driving conditions to run these kinds of algorithms through their paces.

In Ann Arbor at the University of Michigan, they had a space they took over — a former industrial space — where they put in a driving course where a lot of things can be tested. And then the state of Michigan, along with GM and I think Ford, [are developing] a former assembly plant site into an even bigger and more sophisticated testing ground. In Michigan, they’ll be able to do winter weather testing, rain, fog — all those kinds of things. And it’ll be on a real test course that’s off the roads and safe.

So imagine — multiply that by a thousand — different experiments going on all over the U.S., some in cities, some in towns. And the infrastructure piece of any of those might be rather small, and it might not be that hard to persuade a city or a state to do it for that purpose. And maybe from that we learn what’s most effective, and then we know what the funding challenge might be to build it up from there.

Knowledge at Wharton: What else do we need to know?

MacDuffie: One thing I talk about in the policy brief is the insurance challenge. Insurance today is based on figuring out the profile of the driver and underwriting the driver’s characteristics. If there’s no driver, at the moment we don’t have an insurance model. I’ve had the chance, through some executive education teaching here, to meet with insurance company execs and the consultants who work with them. Either they’re complacent, because they say, “This will happen after I’m retired,” or they’re scared, because this completely undermines their business model. But if you dig down, they would say a couple of things.

First they would say, “Look, we can figure out new underwriting models. Maybe we underwrite a trip, so it’s a particular vehicle going a particular route at a particular time of day. Maybe it’s the Waymo operating system versus the — who knows — Apple operating system? And those have different characteristics. Each trip is micro-insured, and your insurance bill is an aggregation of all those things.” They’re pretty sure they could figure out how to do that. What they say is, “We need the data from that trip.” And at the moment, everybody wants the data. The data are incredibly valuable for all kinds of purposes, and there’s no regulatory framework for that.
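To make the per-trip underwriting idea concrete, here is a minimal sketch; every rate factor below is invented for illustration, not drawn from any insurer.

```python
# Hypothetical per-trip micro-insurance: each trip is priced on its own
# characteristics, and the bill is the aggregation of all those trips.
BASE_RATE_PER_KM = 0.02  # invented base premium, dollars per kilometer

SYSTEM_FACTOR = {"waymo_os": 0.8, "other_os": 1.2}     # invented per-stack risk factors
TIME_FACTOR = {"day": 1.0, "dusk": 1.4, "night": 1.2}  # invented time-of-day factors

def trip_premium(km: float, system: str, time_of_day: str, route_risk: float) -> float:
    """Price one trip from its distance, software stack, time of day, and route risk."""
    return km * BASE_RATE_PER_KM * SYSTEM_FACTOR[system] * TIME_FACTOR[time_of_day] * route_risk

def monthly_bill(trips: list[dict]) -> float:
    """The bill is just the sum of the micro-premiums for each trip."""
    return sum(trip_premium(**t) for t in trips)

bill = monthly_bill([
    {"km": 12.0, "system": "waymo_os", "time_of_day": "day",  "route_risk": 1.0},
    {"km": 30.0, "system": "other_os", "time_of_day": "dusk", "route_risk": 1.3},
])
```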

If insurers had to buy that data, it would make their products impossibly expensive. Insurance is in the public interest, so I see a possible scenario in which the government says, “Out of the data generated by these self-driving cars, certain generic trip-characteristic stuff has to be shared for free with insurance companies.” That to me would be a logical extension of public policy to make sure we have a functioning insurance system. I don’t think it would unfairly advantage the insurance companies. I don’t think it would unfairly penalize them. So that’s a really interesting issue.

Control of the data in general is going to be a really interesting issue. Every service wants to access the customers in cars who now have a lot of time on their hands — right, if they’re not driving? They want the data. Who is actually going to be able to monetize the data?

The tech companies have proven that they are really quite good at that, and I think they’re trying to put themselves in a controlling position. The car companies have never been very good at that kind of thing, but they’re terrified about having the tech companies control it.

“To do the kind of massive investment you’d need to have smart infrastructure everywhere you’d want to use it — I don’t know anybody who sees it as politically feasible in the U.S. or anywhere else.”

When you hear about things breaking down — Ford and Google tried to work out a deal, and it fell apart. Apple and BMW, and Apple and Mercedes-Benz, tried to work out some deals. [The talks] broke down. I haven’t seen the details, but it’s probably over the data as much as anything. Who is really going to control what happens with the data — and therefore the revenues, margins, and profits that come from it, the knowledge of customers, and so on?

Knowledge at Wharton: Who would run a data clearinghouse — a non-profit? The government? That would allow safety advancements to progress more quickly. So I’ve read, for example, that China is in a position to pull ahead on this because it has a lot of people, so they’re going to have a lot of data. But also, the government can say, “Okay, everyone has to share their data” and they will control it and thus possibly make advances more quickly.

MacDuffie: Yes, absolutely. Probably the first issue that’s getting a lot of attention about China — because it probably will happen sooner — is electric vehicles. Again, the government is pushing its own domestic makers to get into electric vehicles, mandating that foreign automakers make electric vehicles, and, maybe most importantly, forward-investing in charging infrastructure. Everywhere else, we’ve had this chicken-and-egg problem with charging infrastructure. But if the central government in China says, “Hey, we’re going to do it,” then you induce demand by having the charging infrastructure available. Some of those same attributes could really help China in this area, as well. And they’ve certainly identified this as a place for them to zoom ahead and be known as an innovation leader.

There’s a company called Baidu, which is mostly a tech company, and they’ve introduced open-source self-driving software called Apollo. They basically say, “Look, anybody can download this software and use it as the starting kernel for your own software. You don’t have to share the new software you write, but we do ask you to give us the data from driving tests of whatever software you use. Then we’re going to make that a resource for everybody who’s participating in the open-source project, so everybody can see those data.”

So it’s a smart idea to wrap themselves in open source, a beloved Silicon Valley, high-tech kind of concept that seems to be noble, about making the world better rather than about profits. And it’s also smart to suggest that there are ways for smaller players to get access to a big pool of data for using machine learning to improve their algorithms and all sorts of other things — rather than have this be completely dominated by the big guys.

So we’ll see. One of the things I’ve been doing is working with Good Judgment, a forecasting organization that Wharton management professor Philip Tetlock helped set up. We’ve been doing some technology forecasting on things affecting mobility, and we had forecasters predicting how many times this Apollo kernel from Baidu would be downloaded and used.

Knowledge at Wharton: What will you look at next?

MacDuffie: I think seeing what the regulatory response is to some of these recent fatal accidents — and maybe more importantly, what’s really discovered about the reasons for them — is pretty important.

“The car companies have never been very good at that kind of thing, but they’re terrified about having the tech companies control it.”

Take the Uber accident that we started talking about. You’ve got a pedestrian emerging from the shadows at dusk, in a place where pedestrians should not be, with a bicycle. And the Uber vehicle didn’t — neither the driver nor the algorithms spotted her. The driver, we now know from phone records, was watching an episode of “The Voice” on her phone and looking down at it quite a lot. So the driver was distracted.

Uber, though, had also decided to turn off some of the safety systems in the Volvo car — some automatic braking features — because they wanted to test their own braking algorithms. That suggests a compromise that maybe wasn’t wise. Uber also had only one LIDAR unit, which is that very valuable laser-based technology, and it was mounted on the top. Most Waymo cars have six: they cover the sides, the front and the back, as well as the top. So should the Uber cars have been allowed to be out there with only one LIDAR unit? Tesla, meanwhile, is saying, “We don’t want to use LIDAR at all. It’s too expensive.”

So I think the fatal accidents get the most attention. The state of California now requires two things. Any accident during testing has to be reported. And any time a human test driver has to retake control from the algorithms (a “disengagement”), that has to be reported as well.

These, I think, are smart things that regulators are doing to help us learn about this process of testing. I don’t think it’s that hard for the companies to do it. If they think that government shouldn’t be involved at all, they’re probably grumbling about it, but I think it’s a smart way for us to get more data and learn what’s going on.