Google's driverless car was at fault in a wreck, and here's why that's OK

Mar 1, 2016

We have a long way to go until you can lock in a destination and have your car magically ferry you there, but Google is working overtime to get that future here a whole lot faster. 

But the AI behind the wheel just caused its first real-life car wreck. Here’s why that’s (mostly!) okay.

The accident was a relatively minor fender bender in Mountain View, and thankfully, no one was hurt. According to the traffic report, the car was traveling in autonomous mode when it came upon a lane partially closed off with sandbags around a storm drain. As the car attempted to merge back into the center lane, the AI expected an approaching bus to slow down and let it in. The bus didn't, and the car made contact with it. Sure, the bus driver probably should've let the car in, but it was still the robo-driver that hit the bus.

Here’s how Google explained the accident in its own report:

“Our test driver, who had been watching the bus in the mirror, also expected the bus to slow or stop. And we can imagine the bus driver assumed we were going to stay put. Unfortunately, all these assumptions led us to the same spot in the lane at the same time. This type of misunderstanding happens between human drivers on the road every day.

This is a classic example of the negotiation that’s a normal part of driving – we’re all trying to predict each other’s movements. In this case, we clearly bear some responsibility, because if our car hadn’t moved there wouldn’t have been a collision. That said, our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that.

We’ve now reviewed this incident (and thousands of variations on it) in our simulator in detail and made refinements to our software. From now on, our cars will more deeply understand that buses (and other large vehicles) are less likely to yield to us than other types of vehicles, and we hope to handle situations like this more gracefully in the future.”

This is interesting for a few reasons, and if anything, it's an excellent teachable moment for the AIs that could be driving us around in a few years. Google's driverless car experiment has been largely successful, with only a handful of accidents (and up to this point, virtually all of those were the fault of a human driver hitting the Google car). But in this instance the robo-car made an assumption most human drivers would make, and paid the price with some fender damage.

Of course, had someone actually been injured, we'd be talking about a whole different story. But Google's auto-drivers have generally been much safer than the typical human at the wheel, and that seems to hold true here. It's a scary thought to give a machine control over something so potentially dangerous, but in the end, the safety benefits could be positively enormous. Case in point: Human error accounts for approximately 90 percent of crashes, and the robots have proven (overall) to have a much better driving record.

This incident gives Google's AI a valuable lesson to learn, much like a teenage driver just getting started, and it goes to show that even the smartest AI can't fully account for the unpredictability of human emotion, reflexes and decision-making (at least, not yet).

But, oh, just give it time. Give it time. 

(Via The Verge)
