Insuring robots and humans increases risk

Traditional motor insurance makes the driver of the vehicle potentially liable for any damage or injury caused by an accident, whether that involves the policyholder or a third party.

Autonomous vehicles open up many liability questions – particularly when the car has complete control of itself. So on 11th July 2016 the UK’s Department for Transport and the Centre for Connected and Autonomous Vehicles opened an industry consultation process to determine a way to answer them, and to allow a discussion about the government’s legislative, regulatory, Highway Code and insurance liability proposals as set out in its consultation document, Pathway to Driverless Cars: Proposals to support advanced driver assistance systems and automated vehicle technologies.

In the document, published by the Centre for Connected and Autonomous Vehicles, the government emphasises its enthusiasm for the development of autonomous vehicles: “In February [2016] we announced the winners of the first £20M competition from the £100M Intelligent Mobility Fund, which is being match-funded by industry to help facilitate the development of new connected and automated vehicle technologies.”

The paper also claims that the development of autonomous vehicles will lead to “the safe and efficient movement of people and goods”, while arguing that this “is key to our economic prosperity”. The government also believes that autonomous vehicles will facilitate this by “delivering social, environmental, and economic benefits to the UK by improving road safety – with over 90% of road traffic collisions caused by human error, automated vehicles could help to reduce death and injuries on our roads; enabling better use of road space – leading to improved traffic flow, with associated fuel savings; and enhancing mobility, giving access to those who currently cannot drive.”

Answers needed

That’s all great, but how important is it to have the support of the UK’s government in tackling autonomous insurance, and has the consultation process – the findings of which are expected to be published in December 2016 – gone far enough? “It has gone quite far but we don’t have the answers quite yet,” says Thomas Hallauer, research and marketing director at Ptolemus. He adds: “They are looking at shared responsibility, which doesn’t currently exist because the driver has full responsibility for the vehicle.”

He also thinks the switch to Level 4 autonomous driving will only happen once the legal framework has changed.

Changing liabilities

“At Level 4 all of the manufacturers are talking about driverless and so the driver won’t be driving the car. At the moment this is a legal and technical nightmare. If you stick to Level 3, which is about driver assistance, you reduce this problem because the driver is responsible for whatever happens.” A case in point would be the two drivers who crashed their Tesla vehicles while Autopilot was engaged: the first driver died but, fortunately, the second driver wasn’t injured.

Tesla blamed the drivers for not using the system properly, but the incidents have led German regulators to question the use of ‘Autopilot’ as a term to describe it, arguing that it gives the impression that the vehicle is more capable of autonomous driving than it really is. Germany’s transport minister, Alexander Dobrindt, has therefore asked Tesla to ditch the term. Tesla has responded by claiming that it warns drivers of the system’s limits while defending the term. The company said in a press statement: “This is how the term has been used for decades in aerospace: to denote a support system that operates under the direct supervision of a human pilot.”

Alex Davies, who reported on the story for Wired magazine, also says that other manufacturers, such as Mercedes-Benz, have been rebuked for how they promote this technology. In the summer of 2016 Mercedes reportedly described the new E-Class as “self-driving”. The danger is that automotive manufacturers could be opening themselves up to lawsuits if they make misleading claims like this about the capabilities of their vehicles – most of which offer driver assistance.

Referring back to the consultation paper, John Buyers, partner and head of commercial at Osborne Clarke, comments: “In terms of UK government support, I think it’s essential. It’s not going to be resolved by the insurance industry alone. The proposals outlined in the paper are problematic. They work with the current technologies on the market, which is probably OK, but there is going to be a need for a sweeping change in the way insurance is structured. There doesn’t seem to be any common assessment, no common standard is applied to autonomous vehicles.”

Benchmarking autonomy

“In my recent paper on liability for machine-generated consequences, I talk about an independent evaluation mechanism or benchmark for determining how autonomous these vehicles really are. The problem with the approach proposed by the UK government is that it seems to be perpetuating the current ‘fault based’ model, which seems to me to be rather expedient,” he says. In his view fault-based liability works with simple machines “that are capable of limited autonomous decisions – about the current level of the state of the art [technology]”. He thinks that this will become ever more problematic as these machines become more complex with greater ranges of decision-making.

“Tracking liability via ‘fault’ is always expensive and will become even more so as these machines become more complex because a typical autonomous car contains a variety of systems which relate to its navigation and driving – many of which are sourced from different manufacturers and specialist software providers,” he explains, before posing the following questions: “How is a particular driving decision by a machine going to be pinpointed back to a fault and which manufacturer is likely to have caused it? Can the fault in fact be isolated in this way?” Answering these questions is made harder because AI machines, including autonomous cars, are given parameters and guidelines. Yet the rules they are given and follow aren’t prescriptive. They are designed to learn.

Talking about the first Tesla accident he comments: “We probably all got the news of the Tesla driver who wasn’t paying much attention to the road and who allowed his car to drive into the back of a truck. The car was learning; the truck was white as was the sky – it couldn’t detect it. It is arguable that there was no ‘fault’ with the car – all of its systems were working properly – it just hadn’t learned that particular risk.”

Learning from NZ

Moreover, he feels that the proposed government approach will continue the litigation-led model, which he’d like to see replaced with a liability model based on the New Zealand Accident Compensation Act. “That law provides that in the case of all accidents (not just RTAs), insurance premiums are paid into a common pool that pays out at set tariffs if there is an appropriate injury,” he explains. This means that there are no claims, arguments or litigation about who was at fault. So this model permits the compensation money that would have been spent on expensive lawyers to go to the claimants instead.
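The common-pool model Buyers describes can be illustrated with a toy sketch. All names, tariff categories and amounts below are invented for illustration and are not taken from the New Zealand scheme itself; the point is simply that payouts are drawn from a shared fund at fixed tariffs, with no fault-finding step.

```python
# Toy illustration of a New Zealand-style common-pool compensation model:
# premiums feed one shared pool, and injuries pay out at set tariffs
# without any litigation over who was at fault.
# Tariff categories and amounts are hypothetical.

TARIFFS = {"minor": 2_000, "moderate": 15_000, "severe": 120_000}

def fund_pool(premiums):
    """All premiums go into one common pool."""
    return sum(premiums)

def pay_out(pool, injury_category):
    """Pay the set tariff for the injury category; return (new pool, payout)."""
    payout = TARIFFS[injury_category]
    return pool - payout, payout

pool = fund_pool([500] * 1_000)          # 1,000 policyholders at £500 each
pool, paid = pay_out(pool, "moderate")
print(paid, pool)                        # 15000 485000
```

Note that nothing in `pay_out` asks who caused the accident – that is the structural difference from the fault-based model the article contrasts it with.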

Missing points

Kurt Rowe, associate solicitor for market affairs, technology and emerging risks at Weightmans, also thinks that the government is missing some important points in the consultation paper: “The consultation is to inform the government’s Transport Bill but nothing is set in stone and we think there is a key point missing from the consultation and that is information-sharing and security.” He then explains that there is a “need to make sure autonomous vehicles work and that they interact with other agencies such as insurers who will need access to data to assist in issues such as the determination of liability”. He underlines, too, that there is a need to work on the security of semi-autonomous and autonomous vehicles because they are computers on wheels. As such they are in danger of being hacked, which could lead to data being stolen or to a malicious remote actor taking control of the vehicle.

Charlotte Halkett, general manager communications at Insure The Box, concurs that there is more to be done: “Clearly there is much that still needs to be resolved in terms of liability for autonomous vehicles but the UK is in the enviable position of being able to draw on the experience of telematics insurance as part of this debate.” She also claims that telematics has come a long way from when Insure The Box “disrupted the insurance market with black boxes over 6 years ago and, as the number of miles driven grows, so do the insights that can play a valuable part in the development of autonomous motoring”.

Paul Stacy, founding director of Wunelli, argues that there is a need for more clarity: “I think we’d like to see a clear path to test the technology and guidance on the dual [coverage] approach – how it is going to work because the devil is in the detail, and it would be nice if we could have enough details to enable us to design insurance products that can be implemented.

“Motor insurance is mandatory and regulated but there will be no new investment in this new type of insurance without guidance, and so we need to understand, specifically when it comes to bodily injury, when liability goes back to the manufacturer versus the mode the vehicle is in,” he explains. He adds that there is a need to define what data is required to “trigger it back to the manufacturer versus to a traditional insurance policy”, and says that one option would be to construct autonomous zones on the highways “whereby the cars automatically change into autonomous mode as this would make the distinction [of who’s in control of the vehicle] clear”. The problem is that there are, as he points out, residential areas and urban areas in London where this won’t happen. So there is a need to build more infrastructure to support autonomous vehicles.
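Stacy's point about data "triggering" a claim back to the manufacturer versus a traditional policy amounts to a routing rule on the vehicle's recorded state at the moment of an incident. The sketch below is purely illustrative – the record fields, zone concept and routing logic are assumptions, not any insurer's or manufacturer's actual scheme.

```python
from dataclasses import dataclass

# Hypothetical sketch of dual-coverage claim routing: the vehicle's
# recorded driving mode (and whether it was in a designated autonomous
# zone) determines which cover a claim is directed to.

@dataclass
class IncidentRecord:
    timestamp: str
    autonomous_mode: bool     # was the vehicle driving itself?
    in_autonomous_zone: bool  # e.g. a designated highway autonomous zone

def route_claim(incident: IncidentRecord) -> str:
    """Return which cover a claim would notionally fall under."""
    if incident.autonomous_mode and incident.in_autonomous_zone:
        return "manufacturer"       # vehicle was in control
    return "traditional_policy"     # driver was (or should have been) in control

claim = IncidentRecord("2016-07-11T09:30:00", True, True)
print(route_claim(claim))  # manufacturer
```

The hard part, as the article notes, is not the rule itself but agreeing what data must be logged – and trusted by all parties – to establish `autonomous_mode` at the moment of the crash.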

“In the short term insurers will see this as being quite messy but, in the long term, insurers could become re-insurers for vehicle manufacturers,” he suggests. Meanwhile, Rutger Van der Wall, vice-president of global products for LexisNexis Risk Solutions, emphasises: “We expect direction on standards to create clarity in the industry; having standards in place is critical because people won’t otherwise understand each other.”

Dual coverage

Speaking with regard to dual coverage, Buyers thinks that it’s the correct approach. “I personally think that a dual coverage approach is correct because the fundamental driving (and hence liability) context is different when the car is driving itself,” he says, before stressing that it may otherwise be hard to prove whether the car or the driver was in control at any given time. He adds: “If you look at it from a market perspective where the machines are fully autonomous, it is likely to be less expensive for the manufacturers if there is duality of control – which will probably mean that the manufacturers will lobby the politicians for this in any event, and analytics could be used to work out who’s at the helm.” He believes Tesla already does this by sensing whether the driver is holding the steering wheel.

The question of who’s in control can also be answered by using pay-as-you-drive or, more to the point, pay-how-you-drive systems. Stacy explains how this could be done in his view: “How do you monitor who’s driving and at what point? Firstly, there is a need to understand when the car is driving. It may be solved with a smartphone driving signature. When people drive they have a unique signature which allows us to define who’s driving at any one time. People, at a microscopic level, drive differently. There are going to be certain technologies that will port across, such as driving signature and the ability to store as well as analyse data in real time. Not all insurers are well placed to deal with big data. You need a different set of skills. There will be third parties that will have a role in processing this data.”
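Stacy's "driving signature" idea can be sketched in miniature: summarise a trip as a few behavioural features and match it against stored driver profiles. The feature choices (mean acceleration, its variability, harshest event) and the nearest-profile matching below are illustrative assumptions, not a real telematics algorithm.

```python
import statistics

# Illustrative sketch of a smartphone "driving signature": reduce raw
# longitudinal-acceleration samples to a small feature vector and match
# it to the nearest known driver profile. All profiles are hypothetical.

def trip_features(accel_samples):
    """Summarise acceleration samples (m/s^2) as a signature tuple."""
    return (statistics.mean(accel_samples),
            statistics.pstdev(accel_samples),
            max(abs(a) for a in accel_samples))  # harshest event on the trip

def nearest_driver(signature, profiles):
    """Match a trip signature to the closest stored driver profile."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(profiles, key=lambda name: dist(signature, profiles[name]))

profiles = {
    "driver_a": (0.1, 0.8, 2.5),   # smooth driver
    "driver_b": (0.3, 1.9, 6.0),   # harsh braker/accelerator
}
trip = [0.2, -0.1, 0.4, -2.4, 0.3, 0.1]
print(nearest_driver(trip_features(trip), profiles))  # driver_a
```

A production system would use far richer features (cornering, braking profiles, time of day) and a proper classifier, which is precisely why, as Stacy notes, third parties with big-data skills are likely to process this data for insurers.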

Fronting claims

Rowe points out that most UK legislation relating to insurance and to the use of motor vehicles on our roads emanates from the European Union. “Following the Consolidated Motor Directive, there is a need for the person using the vehicle to have civil liability coverage as part of their insurance policy, and this is applicable across the EU.” He adds that while insurers will have to front any claims, there is a need for insurers to be able to recover their money in cases where the systems fail.

In other words, software developers could be held liable for an accident. “At Level 4 and 5 the liability of the individual is likely to give way to the liability of the vehicle manufacturer and/or software developer but for now we are at Level 2/3 so we need to work it out now,” he comments. The insurance industry itself, he reveals, agrees that it would be better to “get the mechanism in place for Level 3 as soon as possible as there is a real possibility that the driver, the manufacturer, the software developer or a combination thereof could well be liable for an incident depending on the unique circumstances”.

Increasing premiums?

Hallauer says the question of whether dual coverage will lead to premiums increasing was asked during the consultation. “It is possible that the premiums will increase – for non-autonomous vehicles they will but for Level 3 and Level 4 vehicles the premiums will decrease, and we can see that at the moment with vehicles fitted with ADAS,” he says, before warning that ADAS doesn’t stop all accidents, as it prevents only around 30% of them. Even so, he doesn’t expect there to be any visible impact on insurance premiums before 2020.

Rowe adds: “As these technologies become embedded and the volume and severity of accidents reduces, then the premiums will fall. If we use autonomous emergency braking (AEB) as an example, if this technology was fitted to all mainstream vehicles at the point of manufacture, we could reduce whiplash claims by 27% as the accidents wouldn’t have occurred because of the technology.” Alongside the technology itself, data analysis and data sharing deserve a mention, as they can permit insurers to better understand risks and so determine the right premiums.
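The arithmetic behind Rowe's AEB figure is straightforward: if the technology prevented 27% of the accidents that generate whiplash claims, expected claim volumes fall proportionally. The baseline claim count below is invented purely to make the calculation concrete.

```python
# Rough arithmetic behind the quoted AEB figure: a 27% reduction in the
# accidents that produce whiplash claims cuts expected claim volumes
# proportionally. The baseline number is illustrative only.

baseline_whiplash_claims = 100_000   # hypothetical annual claims
aeb_reduction = 0.27                 # Rowe's quoted figure

remaining = baseline_whiplash_claims * (1 - aeb_reduction)
print(int(remaining))  # 73000
```

The same proportional logic is how insurers would feed accident-frequency data into premium pricing, which is why the data sharing discussed above matters.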

Legislative considerations

Brexit won’t impinge on their ability to share data because the EU’s General Data Protection Regulation will apply across the EU from 2018, and it’s likely that it will be enshrined in the Great Repeal Bill that Prime Minister Theresa May wishes to pass through Parliament. However, there is reportedly some reluctance from vehicle manufacturers to share certain types of data, and so more needs to be done to encourage them to share data that will enable insurers to make better decisions.

Furthermore, because the UK hasn’t ratified the Vienna Convention, the experts interviewed for this article believe that the UK is well positioned to test autonomous vehicles. “The fact that we have not ratified the convention is a good thing as it has allowed the UK to be more flexible in its approach to the testing of autonomous vehicles and systems on our roads,” concludes Rowe, after explaining that the Convention requires a human to be in control at all times. In the UK, by contrast, a human driver only has to play a supervisory role for testing to take place. This means that testing of autonomous vehicles is already under way in the UK with the government’s support.
