Driver mistakes play a role in virtually all crashes. That’s why automation has been held up as a potential safety game changer. But autonomous vehicles might prevent only around a third of all crashes if automated systems drive too much like people, a recent analysis by the Insurance Institute for Highway Safety finds.
Conventional thinking has it that self-driving vehicles could one day make crashes a thing of the past. The reality is not that simple. Even if fully self-driving cars eventually identify hazards better than humans do, that alone won't prevent the bulk of crashes.
According to a national survey of police-reported crashes, driver error is the final failure in the chain of events leading to more than nine out of 10 crashes.
But the Institute’s analysis suggests that only about one-third of those crashes were the result of mistakes that automated vehicles would be expected to avoid simply because they have more accurate perception than human drivers and aren’t vulnerable to incapacitation. To avoid the other two-thirds, they would need to be specifically programmed to prioritize safety over speed and convenience.
To estimate how many crashes might continue to occur if self-driving cars are designed to make the same decisions about risk that humans do, IIHS researchers examined more than 5,000 police-reported crashes in a federal government database.
They reviewed the case files and sorted the driver-related factors that contributed to the crashes into categories: "sensing and perceiving" errors, such as driver distraction; "predicting" errors, such as misjudging a gap in traffic; and "planning and deciding" errors, such as driving too fast or too slow for conditions.
The researchers also determined that some crashes were unavoidable, such as those caused by a vehicle failure like a blowout or broken axle.
The researchers imagined a future in which all the vehicles on the road are self-driving. They assumed these future vehicles would prevent those crashes that were caused exclusively by perception errors or that involved a driver who was incapacitated by drugs or alcohol or had fallen asleep. That's because the cameras and sensors of fully autonomous vehicles could be expected to monitor the roadway and identify potential hazards better than a human driver, and the vehicles themselves would be incapable of distraction or incapacitation.
Crashes due to only sensing and perceiving errors accounted for 23% of the total, and incapacitation accounted for 10%. Those crashes might be avoided if all vehicles on the road were self-driving, though that would require sensors that worked perfectly and systems that never malfunctioned. The remaining two-thirds might still occur unless autonomous vehicles are also specifically programmed to avoid other types of predicting, decision-making and performance errors.
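The arithmetic behind the "one-third" figure can be sketched as follows; the two category shares are the ones reported above, and the variable names are illustrative only.

```python
# Shares of police-reported crashes attributable solely to errors that
# better automated perception could be expected to eliminate, per the
# IIHS analysis quoted above.
sensing_and_perceiving = 0.23  # e.g. driver distraction
incapacitation = 0.10          # drugs, alcohol, falling asleep

avoidable_by_perception = sensing_and_perceiving + incapacitation
remaining = 1.0 - avoidable_by_perception

print(f"Avoidable through better perception alone: {avoidable_by_perception:.0%}")
print(f"Might still occur without safety-first programming: {remaining:.0%}")
```

This reproduces the article's rough split: about a third of crashes avoided by perception and vigilance alone, about two-thirds remaining.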
Consider the crash of an Uber test vehicle that killed a pedestrian in Tempe, Ariz., in March 2018. Its automated driving system initially struggled to correctly identify 49-year-old Elaine Herzberg on the side of the road. But once it did, it still was not able to predict that she would cross in front of the vehicle, and it failed to execute the correct evasive maneuver to avoid striking her when she did so.
Planning and deciding errors, such as speeding and illegal maneuvers, were contributing factors in about 40% of crashes in the study sample. The fact that deliberate decisions made by drivers can lead to crashes indicates that rider preferences might sometimes conflict with the safety priorities of autonomous vehicles.
To eliminate these kinds of crashes, self-driving vehicles will need to put safety first. That means not only obeying traffic laws but also adapting to road conditions and implementing driving strategies that account for uncertainty about what other road users will do, such as driving more slowly than a human driver would in areas with high pedestrian traffic or in low-visibility conditions.