Monday, December 5

Self-Driving Cars Are Not (Yet) Safe

Three things have happened in the last month that have made me think about the safety of self-driving cars a lot more. The US Department of Transportation (DOT) has issued its guidance on the safety of semi-autonomous and autonomous cars. At the same time, [Geohot]’s hacker self-driving car company bailed out of the business, citing regulatory hassles. And finally, Tesla’s Autopilot has killed its second passenger, this time in China.

At a time when [Elon Musk], [President Obama], and Google are all touting self-driving cars as the solution to human error behind the wheel, it’s more than a little bold to argue the opposite case in public, but the numbers just don’t add up. Self-driving cars are probably not yet as safe as the average sober driver, although there isn’t enough data to say even that with much confidence. What you certainly cannot say is that they’re demonstrably safer; the statistics simply aren’t there to support it.

Myth: Self-Driving Cars are Safer

First, let’s get this out of the way: Tesla’s Autopilot is not meant to be a self-driving technology. It’s a “driver assist” function only, and the driver is intended to be in control of the car at all times, white-knuckling the steering wheel and continually second-guessing the machine despite its apparently flawless driving ability.

And that’s where it goes wrong. The human brain is pretty quick to draw conclusions, and very bad at estimating low-probability events. If you drove on Autopilot five hundred times over a year, and nothing bad happened, you’d be excused for thinking the system was safe. You’d get complacent and take your hands off the wheel to text someone. You’d post videos on YouTube.

Bad Statistics

Human instincts turn out to be very bad at statistics, especially the statistics of infrequent events. Fatal car crashes are remarkably infrequent occurrences, per-person or per-mile, and that’s a good thing. But in a world with seven billion people, infrequent events happen all the time. You can’t trust your instincts, so let’s do some math.

Tesla’s Autopilot, according to this Wired article, racked up 140 million miles this August. That seems like a lot. Indeed, when grilled about the fatality in June, [Musk] replied that the average US driver gets 95 million miles per fatality, and since the Tesla Autopilot had driven over 130 million miles at that time, it’s “morally reprehensible” to keep Autopilot off the streets. Let’s see.

First of all, drawing statistical conclusions from one event is a fool’s game. It just can’t be done. With one data point, you can only estimate an average; you can’t estimate a standard deviation, the measure of how much confidence to put in that average. So if you asked me a month ago how many miles, on average, a Tesla drives without killing someone, I’d say 130 million. If you then asked me how confident I was, I’d say “Not at all — could be zero miles, could be infinity — but that’s my best guess”.

But let’s take the numbers at one death per 130 million miles as of August. The US average is 1.08 fatalities per 100 million miles, or 92.6 million miles per fatality. 95% of those deaths are attributed to driver error, and only 5% to vehicle failure. So far, this seems to be making [Elon]’s point: self-driving could be safer. But with only one data point, it’s impossible to have any confidence in that conclusion.
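To put a number on that uncertainty, here’s a minimal sketch, assuming fatalities arrive as a Poisson process, of the exact 95% confidence interval you get from a single observed fatality in 130 million miles. (Python and scipy are my choice here, not anything from the original sources.)

```python
# A minimal sketch: how much can one fatality in 130 million miles tell us?
# Assumes fatalities follow a Poisson process; the 130-million-mile figure
# and the 92.6-million-mile US baseline are the ones quoted above.
from scipy.stats import chi2

miles = 130e6   # Autopilot miles at the time of the first fatality
deaths = 1      # observed fatalities

# Exact (chi-square based) 95% confidence interval on the expected death count
low = 0.5 * chi2.ppf(0.025, 2 * deaths)          # ~0.025 expected deaths
high = 0.5 * chi2.ppf(0.975, 2 * (deaths + 1))   # ~5.57 expected deaths

print(f"Point estimate: {miles / deaths / 1e6:.0f} million miles per fatality")
print(f"95% CI: {miles / high / 1e6:.0f} to {miles / low / 1e6:.0f} million miles per fatality")
```

That interval runs from roughly 23 million to over 5 billion miles per fatality, comfortably straddling the 92.6 million mile US average. It’s just a fancier way of saying “could be zero, could be infinity”.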

Drunk Drivers

Still, I don’t think [Elon]’s benchmark is the right one. According to the National Institutes of Health, alcohol was involved in 37% of traffic fatalities. Drunk drivers constitute a decent fraction of the people killed in cars, and by comparing to the overall US average, the Tesla’s performance is being measured against a hypothetical average person who is drunk behind the wheel 37% of the time.

To get a grasp on what this means for the average number of miles per fatality for a sober driver, we need to know how much more likely a drunk driver is to get into an accident. According to this other NIH publication, a 160 lb driver with two beers (BAC 0.04%) is 1.4 times as likely to be in an accident. After two more beers, at 0.08%, it’s illegal to drive in most US states, and you’re eleven times more likely to be in an accident. It increases geometrically from there: at 0.10% you’re 48 times more likely to be in an accident, and at 0.15%, 380 times.

Call the drunk driver’s extra risk a factor of ten: it’s a nice round number, likely an underestimate, and the outcome isn’t all that sensitive to the exact value anyway. If a drunk driver is ten times more likely to be in an accident, then they drive a tenth of the miles before having one:

95\ \mbox{million miles} = 0.63 \times \mbox{sober miles} + 0.37 \times \mbox{drunk miles}
95\ \mbox{million miles} = 0.63 \times \mbox{sober miles} + 0.37 \times \mbox{sober miles} / 10
\mbox{sober miles} = 95\ \mbox{million miles} / \left( 0.63 + 0.37 / 10 \right)

Solve this out, and you get 142.4 million miles per fatality for a sober driver. So even if the Tesla Autopilot goes 130 million miles between fatalities, it is certainly comparable, but it’s not safer than the average sober American driver.
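For anyone who wants to check the arithmetic, or see how little it depends on that factor of ten, here’s a small Python sketch using the same inputs as above: the 95 million mile overall average and the 37% alcohol share. The alternative risk multipliers are just illustrative values.

```python
# Sober miles-per-fatality implied by the overall average, assuming drunk
# drivers crash `risk_multiplier` times as often per mile. The 95-million-mile
# baseline and the 37% alcohol share are the figures quoted in the article;
# the range of multipliers is just for illustration.

def sober_miles(avg_miles_per_fatality=95e6, drunk_fraction=0.37, risk_multiplier=10):
    return avg_miles_per_fatality / (
        (1 - drunk_fraction) + drunk_fraction / risk_multiplier
    )

for k in (5, 10, 20, 50):
    print(f"risk x{k:>2}: {sober_miles(risk_multiplier=k) / 1e6:.0f} million miles per fatality")
# risk x 5: ~135,  x10: ~142,  x20: ~146,  x50: ~149. The answer barely moves
# once the multiplier is ten or more, so the exact value doesn't matter much.
```

Plug in California’s 109 million mile average instead of 95, and the same formula gives the 163 million sober miles quoted below.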

Human Variation

But humans aren’t just humans, either. The variation across US states is dramatic: in Massachusetts, the average is 175 million miles per fatality, while South Carolina gets only 61 million. Cultural differences in the acceptability of drinking, the share of miles driven on highways, and distance to hospitals all play a role here.

In Tesla’s home state of California, which is slightly better than average, drivers get 109 million miles per fatality before adjusting for drinking, and 163 million miles sober. Again, this is comparable with Tesla’s performance. But I’d rather it drove as safely as a Bostonian.

Selection Bias

There’s good reason to believe that drivers using Tesla’s Autopilot are picking opportune times to hand over control: straight highway with unchallenging visibility, or maybe slow stop-and-go traffic. I don’t think that people are going hands-off, fast, in the pouring rain. Yet weather is a factor in something like 20% of fatalities. To be totally fair, then, you’d also have to adjust the human benchmark upward, because people are probably (smartly) only going hands-off with the Autopilot when it’s nice out, or under other unchallenging conditions.

And if people are obeying the Tesla terms of service and only using the Autopilot under strict human supervision, the single fatality would certainly be an underestimate of what would happen if everyone were driving hands-free. This video demonstrates the Tesla freaking out when it can’t see the lane markings. Everything turns out fine because the human is in control, but imagine he weren’t. Measured against the assumption that the Autopilot is running in fully self-driving mode, the observed accidents and fatalities are certainly (thank goodness) too low.

But given this selection bias — the Autopilot only gets to drive the good roads, and with some human supervision — the claim that it’s superior or even equal to human drivers is significantly weakened. We don’t know how often people override the Autopilot to save their own lives, but if they didn’t, the fatality rate could easily be an order of magnitude higher.

Sadly, Two Data Points

All of this is arguing around the margins. We’re still pretending that it’s August 2016, before the second fatal autopilot accident in China. The cruel nature of small numbers means that with two accidents under its belt, and 150 million miles or so, the Tesla Autopilot appears to drive like a drunken Montanan on the cell phone.

But as I said above, these estimates for the Tesla are terribly imprecise. With only 150 million miles driven, you’d expect to see only one fatality if it drove on par with humans, but you wouldn’t be surprised to see two. With small numbers like this, it’s impossible to draw any firm conclusions. However, you certainly cannot say that it’s “morally reprehensible” to keep Autopilot’s metaphorical hands off the wheel. If anything, you’d say the opposite.
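To put a rough number on “wouldn’t be surprised”, here’s another small Poisson sketch, using the 142-million-mile sober baseline estimated above:

```python
# How surprising are two fatalities in ~150 million miles if Autopilot were
# exactly as safe as an average sober US driver? A rough Poisson sketch using
# the ~142-million-mile sober baseline estimated earlier in the article.
from scipy.stats import poisson

miles = 150e6
expected = miles / 142.4e6          # ~1.05 fatalities expected at the sober rate

p_two_or_more = 1 - poisson.cdf(1, expected)
print(f"Expected fatalities: {expected:.2f}")
print(f"P(two or more): {p_two_or_more:.2f}")   # roughly 0.28
```

A better-than-one-in-four chance of seeing two deaths even if the system were exactly as good as a sober human: two data points rule out essentially nothing.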

More Data

What we need is more data, but we won’t ever get it. Any responsible autonomous vehicle company will improve the software whenever possible. Indeed, just before the Chinese accident, Tesla announced a software update that might have prevented the US fatality. But every time the software is redone, the earlier data we have about the vehicle’s safety becomes moot. The only hope of getting enough data to say anything definite about the safety of these things is to leave the design alone, but that’s not ethical if it has known flaws.

So assessing self-driving cars by their mileage is a statistical experiment that will never have a negative conclusion. After every hundred million miles of safe driving, the proponents of AI will declare victory, and after every fatal accident, they’ll push another firmware upgrade and call for a fresh start. If self-driving cars are indeed unsafe, irrespective of firmware updates, what will it take to convince us of this fact?

Lost in the Fog

Until there’s a car with two or three billion miles on its autopilot system, there’s no way to draw any meaningful statistical comparison between human drivers and autonomous mode. And that’s assuming that people are violating the Tesla terms of service and driving without intervention; if you allow for people saving their own lives, the self-driving feature is certainly much less safe than it appears from our numbers. And that’s not too good. No wonder Tesla wants you to keep your hands on the wheel.

Even if self-driving technology were comparable to human drivers, that would mean twenty or thirty more deaths before we know with any reasonable certainty. Is it ethical to carry out this statistical experiment on normal drivers? What would we say about it if we knew that self-driving were unsafe? Or is this just the price to pay to get to an inevitable state where self-driving cars do drive provably better than humans? I honestly don’t know.
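Where do “two or three billion miles” and “twenty or thirty more deaths” come from? A back-of-the-envelope sketch: the relative uncertainty in a fatality rate estimated from N observed deaths scales roughly as 1/sqrt(N). The 100-million-miles-per-fatality figure is the round human baseline from above; the precision targets are my own choice of illustration.

```python
# Back-of-the-envelope: how well do we know a fatality rate after a given
# number of autonomous miles? Assumes Poisson counting statistics and a human
# baseline of roughly one fatality per 100 million miles (figure from above).
import math

miles_per_fatality = 100e6

for billions in (0.15, 1, 3, 10):
    deaths = billions * 1e9 / miles_per_fatality   # expected fatalities observed
    rel_err = 1 / math.sqrt(deaths)                # ~1-sigma relative error on the rate
    print(f"{billions:>5} billion miles -> ~{deaths:.0f} deaths, rate known to about ±{rel_err:.0%}")
# ~0.15 billion miles (where Autopilot is today): a couple of deaths, ±80% or so.
# ~3 billion miles: ~30 deaths, and the rate is finally pinned down to ~±20%,
# tight enough to start telling "better than a sober human" from "worse".
```

That’s the grim arithmetic: you can’t measure the rate of a rare event without accumulating a statistically meaningful number of them.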

Automotive safety wasn’t invented yesterday. There are protocols and standards for every fastener on the car’s frame, and every line of firmware in its (distributed) brain. These are based on meeting established reliability and safety measures up-front rather than once they’re on the road. Whether current practices are even applicable to self-driving cars is an interesting question, and one that the industry hasn’t tackled yet.

So in the meantime, we’ve got a muddle. Tesla’s Autopilot is good enough to seduce you into trusting it, but it’s still essentially statistically untested, and at first glance it drives like a drunk or worse. On one hand, Tesla is doing the best they can: collecting real-world data on how the system responds while warning drivers that it’s still in beta. On the other hand, if people were dying behind the wheel due to a “beta” brake-disc design, there would be a recall.

Just to be clear, I have no grudge against Tesla. [Elon Musk] said his tech was safer than human drivers, and the statistics in no way bear this out. Tesla is way out ahead of the other auto makers at the moment, and they’re making bold claims that I totally hope will someday be (proven to be) true. Time, and miles on the odometer, will tell.


Filed under: car hacks, Current Events, Featured, Original Art, rants
