Wrestling with risk around the driverless car

The chances are good that in a decade’s time some of us will own a car that is capable of driving itself in many, if not most, situations. Our trips will probably be guided by an in-vehicle system that communicates constantly with other vehicles on the road, road infrastructure and public transport databases to make the journey as safe, timely and ecologically sound as possible.
We and the carmakers will have to trust that the vehicle’s autonomous systems function flawlessly. Yet things can still go wrong, as the fender-bender with a bus suffered by one of Google’s self-driving cars suggests.
Interestingly, the crash occurred because the car and the human driver thought alike and made the same assumption. A future version of the car, however, is widely expected to react not like a human driver but like a perfectly designed machine. So what will motor insurance look like when this perfectly designed machine drives itself all or some of the time? The question is of special importance to usage-based insurance (UBI), which sets premiums according to the behaviour of a human driver.
To discuss the possible consequences of autonomous driving technology for insurance and liability, 13 UK motor insurers have come together with the Association of British Insurers (ABI) and Thatcham Research to form the Automated Driving Insurer Group. The many prickly questions they will attempt to answer include which party should be held liable when a driverless car crashes – the owner, the carmaker, the dealer, the systems manufacturer, or some combination of them – and what kind of insurance will be required as the automobile evolves through the various levels of autonomy.
Some car manufacturers have already taken a step toward clearing up some of the liability issues. Google, Mercedes-Benz and Volvo have each said they would accept liability in the event their autonomous-car technology fails on the road. Of the three, Volvo has been the most definitive. The Swedish carmaker announced it would assume “full responsibility” for collisions involving its autonomous cars.
Presumably, this means that the automaker will be insured for damages caused by any autonomous system error, but what does “full responsibility” really mean? Stephan Appt, a partner at the law firm Pinsent Masons who specialises in the automotive and IT sectors, says that the answer is not as clear-cut as it appears.
“The question really comes down to what does the small print say? There might still be certain obligations the carmakers put on their customers, because such a highly sophisticated car needs to be put into the garage for service or maintenance every half-year to take care of whatever cannot be done over the air, for example. And then they can say that if you don’t follow the recommended maintenance schedule, we cannot guarantee that the autonomous car will work as it should.” Presumably, this will also require the car owner to have the car maintained in a carmaker-approved service facility.
So, even when the car is fully autonomous, there will still be some liability accruing to the car owner and even the driver, if the two are not the same. “Drivers would need insurance to cover their negligence in case they take off when there is a red light signalling a potential malfunction,” Appt says. He concludes that, having assumed responsibility for their autonomous systems, carmakers will, nevertheless, try to limit their liability in those cases where they need the cooperation of the customer or the driver to ensure that their systems function as designed.
In addition, car owners will need to have insurance to cover damage to their cars not related to driving. The US pay-per-mile insurer Metromile has concluded that the owner of a fully autonomous car in the US will have to pay about $250 (£176) a year to insure her vehicle.
“Our hypothetical pricing model for self-driving cars shows a significant reduction in annual insurance premiums but does not eliminate insurance altogether,” Metromile CEO Dan Preston explained in an email. “These costs are owing to the fact that, yes, accidents with ‘dumb’ cars will still be possible and because we won’t be able to stop acts of God, like a tree falling on the car or a grocery cart running into the vehicle.”
Of course, the autonomous car will not be created overnight. It is a long work in progress with a number of intermediate steps, some of which may require different risk models, beginning with the introduction of the advanced driver assistance systems (ADAS) on which car autonomy is being built. In the UK, insurers have begun offering premium discounts of up to 10% for cars equipped with approved advanced emergency braking systems (AEBS), simply because research has shown that they significantly reduce accidents and third-party damage. However, many people in the UBI industry believe that a driver’s over-reliance on AEBS should be seen as high-risk driving behaviour and be priced accordingly.
Metromile’s Preston shares this view. If there is a record of AEBS over-use, “the vehicle is signalling that the operator is a poor driver,” he says. “If insurers can collect such information from the vehicle, it could be used to surcharge the risk.” He adds that AEBS cannot prevent all collisions, “which means that a habitually aggressive and/or distracted driver would likely still cause an accident and be surcharged for that incident”.
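No insurer has published a formula for such a surcharge, but the idea Preston describes can be sketched in a few lines: if the telematics feed shows the automatic braking firing unusually often per mile driven, load the premium. The thresholds, rates and function names below are illustrative assumptions, not any insurer’s actual pricing model.

```python
# Illustrative sketch: surcharging a UBI premium when telematics data shows
# heavy reliance on automatic emergency braking (AEBS). The threshold and
# surcharge rate are invented for illustration, not any insurer's model.

def aebs_surcharge(base_premium: float,
                   aebs_activations: int,
                   miles_driven: float,
                   threshold_per_1k_miles: float = 2.0,
                   surcharge_rate: float = 0.15) -> float:
    """Return the adjusted annual premium.

    If AEBS fires more often than the threshold (per 1,000 miles), treat
    that as a signal of risky driving and apply a percentage surcharge.
    """
    if miles_driven <= 0:
        return base_premium
    activations_per_1k = aebs_activations / (miles_driven / 1000.0)
    if activations_per_1k > threshold_per_1k_miles:
        return base_premium * (1 + surcharge_rate)
    return base_premium


# Example: 9 AEBS activations over 3,000 miles (3 per 1,000 miles) on a
# £400 base premium triggers the 15% surcharge.
print(round(aebs_surcharge(400.0, 9, 3000.0), 2))  # 460.0
```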
Preston says that his company may also reduce its premiums for drivers of cars fitted with ADAS. “New vehicle technology is coming out so fast that actuary models aren’t updated, so there’s not a lot of data to assess risk,” he says. “As we collect loss and expense data across the different safety features being rolled out, we will adapt our pricing to reward drivers who purchase vehicles with such advanced systems.”
Appt says that, as the car’s autonomy evolves, the role of personal motor insurance will shrink significantly, since the driver’s behaviour becomes less relevant, while carmakers will find themselves exposed to product liability far more often than they are today. “Their risk in terms of product liability increases as the car moves toward full autonomy,” he says.
But what about those stages where man and machine share the driving and the responsibility for monitoring the driving environment? What is the best way to insure, for example, the driver of a car with Level 2 or Level 3 autonomy, defined by SAE International as “partial” and “conditional” autonomy, respectively?
One answer could be the pay-per-mile model, in which the driver pays only for the distance he has driven. The car itself could easily record this information and transmit it to the insurer. Metromile already uses a similar model in its partnership with the ride-share pioneer Uber.
“With Uber, Metromile acts as the personal insurance policy; all miles are accounted for and charged per-mile but then miles on the clock with Uber are deducted from the bill,” Dan Preston explains. “With autonomous cars, we imagine similar periods, one when hands are on the wheel and two when the car takes over. People will need insurance when their hands are on the wheel but probably the manufacturer will be responsible for the insurance when the car takes over. Metromile’s technology would allow drivers to differentiate the miles driven manually or in self-driving mode.”
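Metromile has not published how such a split would be priced, but the mechanics Preston describes can be sketched roughly: bill the policyholder a base fee plus a per-mile rate for manually driven personal miles only, and leave autonomous-mode and platform miles to whoever covers them. The rates, field names and figures below are illustrative assumptions, not Metromile’s actual model.

```python
# Illustrative sketch of the pay-per-mile split Preston describes: the
# policyholder pays a base fee plus a per-mile rate for manually driven,
# personal miles; miles logged in self-driving mode or while driving for a
# ride-share platform are excluded from the personal bill. All rates and
# trip fields are invented for illustration.

from dataclasses import dataclass

@dataclass
class Trip:
    miles: float
    self_driving: bool   # car was in autonomous mode
    on_platform: bool    # e.g. miles on the clock with a ride-share app

def monthly_bill(trips: list[Trip],
                 base_fee: float = 29.0,
                 per_mile_rate: float = 0.06) -> float:
    """Charge only the manually driven, personal miles."""
    billable_miles = sum(
        t.miles for t in trips
        if not t.self_driving and not t.on_platform
    )
    return base_fee + billable_miles * per_mile_rate

# Example month: 400 manual personal miles, 300 autonomous miles and
# 200 ride-share miles -> only the 400 manual miles are billed.
trips = [
    Trip(miles=400, self_driving=False, on_platform=False),
    Trip(miles=300, self_driving=True,  on_platform=False),
    Trip(miles=200, self_driving=False, on_platform=True),
]
print(monthly_bill(trips))  # 29.0 + 400 * 0.06 = 53.0
```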
In partial autonomy, Level 2, where autonomous systems have taken over the steering and acceleration/deceleration functions, the driver is still expected to monitor the driving environment when the car is in autonomous mode. The difficulty will be establishing, for example, whether the driver was competently monitoring the environment, and not speaking on a mobile phone, when a crash occurred.
And in conditional autonomy, Level 3, the autonomous driving system will control “all aspects of the dynamic driving task with the expectation that the human driver will respond appropriately to a request to intervene”. Yet how can we determine whether the request to intervene came at the right time, or whether the failure of the autonomous driving system was so severe that no human intervention could have corrected it?
Clearly, what is needed is a data-collection device similar to an airplane’s black box, a device – or several – that monitors both the car and the driver. And then there is the question that matters most from an insurance point of view: how is the risk in these mixed man-machine situations to be calculated?
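As a rough illustration of what such a device would need to capture for an insurer, the sketch below logs, for each incident, which party was driving, whether the driver was watching the road and how much warning any handover request gave. The fields and the attribution rules are assumptions made for illustration, not a description of any existing device or standard.

```python
# Illustrative sketch of the kind of record a driving "black box" might log
# so that liability in a mixed man-machine incident can be reconstructed.
# The fields and the attribution rules are assumptions, not any existing
# device or standard.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class DrivingEvent:
    timestamp: datetime
    autonomy_level: int            # SAE level active at the time (0-5)
    autonomous_mode_engaged: bool  # was the car driving itself?
    driver_monitoring_road: bool   # e.g. from an in-cabin camera
    handover_requested: bool       # did the system ask the driver to intervene?
    seconds_since_handover: float  # lead time the driver was given, if any
    system_fault_detected: bool    # did the autonomous system report a failure?

def likely_liable_party(event: DrivingEvent) -> str:
    """Very rough attribution logic for a single incident record."""
    if event.autonomous_mode_engaged and not event.handover_requested:
        return "manufacturer"      # system was in full control, no handover
    if event.handover_requested and event.seconds_since_handover < 3.0:
        return "manufacturer"      # handover came too late to act on
    if not event.driver_monitoring_road:
        return "driver"            # driver failed the monitoring duty
    return "shared / needs investigation"

# Example: the system asked the driver to take over only 1.5s before impact.
event = DrivingEvent(datetime.now(), 3, True, True, True, 1.5, False)
print(likely_liable_party(event))  # "manufacturer"
```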
The Automated Driving Insurer Group clearly has its work cut out.