Autonomous Vehicle Liability: Not Automatic Yet

By Jamie Bisker | August 3, 2015

A recent blog post posed a complicated question about the increasingly popular topic of insurance liability for accidents involving autonomous vehicles (AVs — vehicles that can, as prototypes today, drive themselves on everyday streets and highways). The question is less than straightforward because it invokes ethics while also implying a technological prowess that is more romantic than real.

The question is an important one because cars and trucks are today acquiring assisted-driving features (adaptive cruise control, lane keeping, etc.) that will lead to fully automatic driving capabilities in the next three to five years.

The typical liability question raised about AVs centers on who or what would be at fault if an AV is involved in an accident. Talking heads from many media outlets may quickly seek to blame the automation, the car company that installed it, or its actual manufacturer. In reality, as always, it’s more complicated than that.

Assigning fault for an accident involving an AV would follow mostly the same decision tree as it does today: Was the product in question (in this case, the AV’s driving system) in good operating condition? Did it fail in ways that contributed to the event? Was it interfered with by the operator?

These same questions and many others have been around for many years: Was it operator error or a product failure, and thus a product liability situation? We have seen these situations pondered and resolved in cases of unintended acceleration, ignition-switch failures, and faulty airbags. Liability is either clear-cut, or it is handled via subrogation and the courts.

As AVs become more common, various situations will occur that will tax not only policy language and legal precedents, but our systems of ethics as well.

The thorny question is this: In a binary, no-win situation, where an AV’s actions will cause the death of either its occupants or a pedestrian, which is the correct or most ethical choice? And, just as important, where does the liability rest?

Rather than delve into the various arguments about policy language and legal precedents, I want to clear up the AV side of this. The idea that an AV operating in automatic driving mode makes a deliberate choice implies a willful mind with intent.

Today, and even once AVs become more common, the artificial intelligence (AI) mechanisms that empower automatic driving will not be making ad hoc ethical decisions.

The programs will evaluate the data presented and follow the rules of thumb (heuristics), cases, and decision trees manifested in their code. The AI will not be conscious, will not ponder philosophical questions, nor fret over the decisions it has made.

The key will be the rules for lose-lose situations: how they are set and ultimately executed. I would expect a straightforward rule for such a situation to be to spare those outside the vehicle, who have no protection, rather than those inside it, who are surrounded by safety systems in the event of an impact.
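To make the point concrete, here is a purely illustrative sketch of what such a pre-set rule might look like in code; it is not any manufacturer’s actual logic, and every name and scenario field in it is hypothetical:

```python
# Hypothetical sketch of a hard-coded rule for a lose-lose scenario.
# All names and fields are illustrative, not any real AV system's API.
from dataclasses import dataclass

@dataclass
class Scenario:
    occupants_at_risk: bool      # impact would endanger people inside the vehicle
    pedestrians_at_risk: bool    # impact would endanger people outside the vehicle

def choose_maneuver(scenario: Scenario) -> str:
    """Follow a fixed, pre-set rule; nothing here 'decides' ethically at runtime."""
    if scenario.pedestrians_at_risk and scenario.occupants_at_risk:
        # Pre-set rule: spare the unprotected party outside the vehicle,
        # relying on the occupants' crash-protection systems.
        return "protect_pedestrians"
    if scenario.pedestrians_at_risk:
        return "avoid_pedestrians"
    return "minimize_occupant_harm"

print(choose_maneuver(Scenario(occupants_at_risk=True, pedestrians_at_risk=True)))
# -> protect_pedestrians
```

The point of the sketch is that the outcome is determined when the rule is written and reviewed, not “chosen” by the car in the moment.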

And, to be clear, accidents will happen and deaths might occur, but both will become quite rare.

Current AV experience suggests that the challenge will be avoiding other cars hitting you (and developers are working on that), not whether your automated car hits something or someone else.
