

Does the self-driving car need to be dumber to improve its accident avoidance?

by Norman Miglietta
07/08/2015


Last month, Google revealed that its autonomous cars have been involved in 11 minor accidents since the program began six years, and 1.7 million miles driven, ago. That’s a “property damage only” accident rate of about 6.5 per million miles traveled, considerably higher than the national average of 2.8 per million miles (2012 National Highway Traffic Safety Administration data). The company is steadfast, however, in its conviction that the accidents were all the result of human error. Its cars were not at fault.

"Even when our software and sensors can detect a sticky situation and take action earlier and faster than an alert human driver, sometimes we won’t be able to overcome the realities of speed and distance; sometimes we’ll get hit just waiting for a light to change,” says Google’s self-driving car program directly Chris Urmson, in a blog post.

If you dig further into it, the self-driving cars’ accidents do seem to fall into a few main categories:

 “… we’ve been hit from behind seven [of 11 total] times, mainly at traffic lights …”

“… we’ve also been side-swiped a couple of times …”

“… [we’ve been] hit by a car rolling through a stop sign …”

Looking at the majority of collisions reported by Google (i.e., the self-driving car gets hit from behind at a traffic light), it is fairly easy to infer what likely occurred: the self-driving car stopped short on a yellow light when, by reasonably accepted driving standards, the driver behind would have expected it to continue on through the light.

Given that, perhaps a question that needs to be considered is, “Do self-driving cars need to be dumber to become better at avoiding mishaps?” And by dumber, we mean “acting more like a human.”

There are, of course, limits to how literally we should expect a machine to interpret human behavior. We can go back to the 1984 John Carpenter classic Starman for an extreme example: the alien protagonist learns to drive purely by imitation and concludes, “Red light stop, green light go, yellow light go very fast.”

 

One of the more interesting things I learned in driver’s education way back in high school, and a factor I consider almost every day as I drive around the city, is that intersections have an indicator of the “point of no return” for going through a yellow light; from that position it is indeed safer (and legal – blame my instructor if that’s wrong) to continue than it would be to hit the brakes (hard) to stop. On the approach to most intersections, there is a point where the lane markings go from a dashed line to a solid one. Sometimes it’s indicated by the start of the left-turn lane. In any case, the way I learned it, if your car has reached that solid line when the light turns yellow, you should keep going.
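To make that rule concrete, here is a minimal sketch in Python of the stop-or-go decision described above. It is purely illustrative – not Google’s actual logic – and the distances, speed, and deceleration threshold are assumptions made up for the example:

    def should_proceed_on_yellow(distance_to_stop_line_m,
                                 distance_to_solid_marking_m,
                                 speed_mps,
                                 comfortable_decel_mps2=3.0):
        """Return True if the car should continue through the intersection."""
        # Past the "point of no return": the start of the solid marking is already behind us.
        if distance_to_solid_marking_m <= 0:
            return True
        # Otherwise, continue only if stopping would require braking harder than is comfortable.
        stopping_distance_m = speed_mps ** 2 / (2 * comfortable_decel_mps2)
        return stopping_distance_m > distance_to_stop_line_m

    # Example: traveling 15 m/s (~34 mph), 20 m from the stop line, 5 m short of the solid marking.
    print(should_proceed_on_yellow(20.0, 5.0, 15.0))  # True -- a comfortable stop is no longer possible

The point of the sketch is simply that the decision is a threshold on physics plus a convention human drivers share; get the threshold wrong, or ignore the convention, and the car surprises the driver behind it.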

It could very well be that Google’s car developers have programmed this factor into the car’s modus operandi. The fact is, unfortunately, most humans probably interpret the rule much more liberally than intended (if they are aware of it at all). The unintended consequence: despite following the letter of the law, if the Google car stops short at every yellow light it encounters, chances are that, over time, those rear-end accidents will continue to add up. The car needs to integrate a sense of “best judgment” into its actions.

On the other hand, the developers do seem to have cataloged the common human errors that portend major collisions on the road (lane drifting, red-light running). Presumably, this means they’ve also programmed the car to, in a sense, expect such anomalies and know the proper evasive actions to take.
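As a thought experiment, here is a similarly hedged sketch of the kind of rule-based check that could flag one such anomaly – a car approaching its red light without slowing down. The Track structure and the thresholds are hypothetical illustrations, not anything Google has published:

    from dataclasses import dataclass

    @dataclass
    class Track:
        distance_to_stop_line_m: float   # how far the other car is from its stop line
        speed_mps: float
        braking_mps2: float              # how hard it is currently braking (0 = not at all)
        facing_red_light: bool

    def likely_red_light_runner(track, comfortable_decel_mps2=3.0):
        """Flag a tracked vehicle that probably cannot, or will not, stop for its red light."""
        if not track.facing_red_light:
            return False
        # Deceleration needed to stop before the line, given current speed and distance.
        required_decel = track.speed_mps ** 2 / (2 * max(track.distance_to_stop_line_m, 0.1))
        # Anomalous if that exceeds normal braking and the driver is not braking that hard.
        return required_decel > comfortable_decel_mps2 and track.braking_mps2 < required_decel

    # Example: a car 10 m from its red light at 15 m/s and barely braking gets flagged.
    print(likely_red_light_runner(Track(10.0, 15.0, 0.5, True)))  # True

Flagging the anomaly is the easy half; deciding what evasive action to take, without creating a new hazard, is where the “best judgment” problem comes back.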

Overall, these news items show us there is a mixture of art and science in defensive driving. Google has no small task ahead of it in building the ultimate defensive driver into its autonomous car; all of us can learn something as drivers as the successes, and shortcomings, of the development process continue to be exposed.