Handling the Inevitable Driverless Tech Accident

Many proponents of autonomous vehicles say they are safer than human drivers, reducing accidents, deaths, and injuries.

However, an article in Engineering and Technology magazine by Tony Gillespie, visiting professor of electronic and electrical engineering at University College London (UCL), argues that it is inevitable that the vehicles themselves will be involved in accidents. The potential causes of a crash range from software bugs and operator failure to electronic and mechanical faults.

Speaking to TU-Automotive, he explains: “Safer does not mean completely safe. Most current accidents are owing to human error while driving. Accidents because of a vehicle failure or the road infrastructure are relatively rare. A few are down to lack of maintenance, but MOT testing has reduced their frequency.”

“However, autonomous vehicles (AVs) will have both increased numbers and complexity of their critical systems compared with current vehicles. Therefore, it is very optimistic to expect current low failure rates to apply to AVs even without the complexities from the interconnections for connected AVs (CAVs).”

Software-intensive critical systems

Most critical systems will be software-intensive, requiring frequent upgrades. They will, in effect, be part of a system of systems, and he claims that each one will have a non-zero failure rate. With several interacting and changing systems and subsystems, accidents become inevitable. He explains: “It is anticipated that overall safety will improve significantly but only when the human driver is removed from all road vehicles. This situation (SAE autonomy Level 5 for all vehicles) is unlikely to be reached for decades.” Until that point, human error will remain the main culprit.
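
To make the “non-zero failure rate” point concrete, here is a back-of-the-envelope sketch in Python. The subsystem count, the per-trip failure probability, and the assumption that failures are independent are ours for illustration; they are not figures from Gillespie.

```python
# Illustrative sketch (not from the article): how small per-subsystem
# failure rates compound in a "system of systems". Assumes independent
# failures, which is optimistic -- interconnections in CAVs add
# common-mode risks on top of this.

def system_failure_probability(subsystem_probs):
    """Probability that at least one subsystem fails: 1 - prod(1 - p_i)."""
    p_all_ok = 1.0
    for p in subsystem_probs:
        p_all_ok *= (1.0 - p)
    return 1.0 - p_all_ok

# Hypothetical numbers: 20 critical subsystems, each with a 1-in-100,000
# chance of failing on a given trip.
probs = [1e-5] * 20
print(f"Per-trip system failure probability: {system_failure_probability(probs):.2e}")
# ~2.00e-04 -- roughly 20x the single-subsystem rate, before counting
# any interactions between subsystems.
```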

Dr Ehsan Torieni, assistant professor in the department of computer science at Durham University, believes that perfect safety is not an achievable objective: “There will never be a faultless driving system mainly since at the end of the day, they are implemented by humans. You can’t prevent human errors in implementations or testing.” He nevertheless agrees that autonomous driving will be safer than human driving, as machines cannot be distracted by conversation or impaired by tiredness.

Transfer of responsibility

The synopsis of Gillespie’s book, Systems Engineering for Ethical Autonomous Systems, published by the Institution of Engineering and Technology (IET), also argues that the transfer of responsibility from humans to machines presents some significant issues “for all those concerned with new concepts, their development and use”. He argues that it is vital to look beyond connected and autonomous vehicles to consider the evolving ethical and legal environment, as well as the systems engineering approaches used in the development of military weapon systems and of civil applications.

The IET also argues for a cross-modal approach in its paper, Advancing Safety in Transportation through Automation. In the report’s recommendations it says: “Vehicle and vessel safety is only a part of overall safety when it comes to automated vehicles. The focus needs to be on system safety, and we need to take full advantage of cross-modal learning and standardization of approaches. By promoting a common approach to safety analysis and standards, the transport industry will be able to share scarce specialist resources.”

While the human driver may still be seen as the main cause of accidents, there will be occasions when autonomous vehicles crash without a human driver being involved. Torieni points out that hardware or software malfunctions could then be the main cause of accidents. “For instance, loss of connectivity to other cars or malfunction of a specific sensor could lead to incorrect perception of a potential risk and cause accidents,” he explains.
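
As a rough illustration of the kind of defensive logic Torieni’s example implies, the sketch below refuses to act on stale or low-confidence sensor data, or on a lost vehicle-to-vehicle link. Every name and threshold in it (SensorReading, MIN_CONFIDENCE, the fallback action) is hypothetical, not a real AV API.

```python
# Hypothetical sketch: if a sensor or the V2V link degrades, the
# perception stack should not silently trust stale or partial data.

import time
from dataclasses import dataclass

MIN_CONFIDENCE = 0.8   # assumed threshold below which a reading is suspect
MAX_STALENESS_S = 0.2  # assumed maximum age of usable sensor data

@dataclass
class SensorReading:
    timestamp: float   # seconds since epoch
    confidence: float  # 0.0 to 1.0, as reported by the sensor driver

def perception_trustworthy(readings, v2v_connected):
    """Return False if any input could lead to 'incorrect perception'."""
    now = time.time()
    for r in readings:
        if now - r.timestamp > MAX_STALENESS_S:
            return False  # stale data: the sensor may have stopped updating
        if r.confidence < MIN_CONFIDENCE:
            return False  # degraded sensor: do not trust its output
    return v2v_connected  # lost connectivity also forces a fallback

def control_step(readings, v2v_connected):
    if not perception_trustworthy(readings, v2v_connected):
        return "MINIMAL_RISK_MANEUVER"  # e.g. slow down and pull over
    return "NORMAL_DRIVING"

# Example: one fresh, confident reading, but the V2V link is down.
reading = SensorReading(timestamp=time.time(), confidence=0.95)
print(control_step([reading], v2v_connected=False))  # -> MINIMAL_RISK_MANEUVER
```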

Problems could also arise from the algorithms used to drive autonomous vehicles. While he argues they are more accurate than humans in handling predictable, deterministic situations, their weakness is that they work “with an approximation when the situation is volatile and dynamic (where there are numerous unpredictable scenarios like driving)”. These algorithms could malfunction, and an incorrect approximation of the response to a scenario could lead to accidents.

Design shortcomings

Gillespie adds: “Accidents will happen at all autonomy levels due to shortcomings in AV design when the vehicle is faced with unusual situations and unforeseen actions by humans, animals and other objects in the scenario. The type of accident will probably be different for different autonomy level vehicles.”

He says the road vehicle industry recognizes that a change in mindset is needed to make highly-automated cars safe. “Safety requirements must now consider some of the behavioral problems that appear in artificial intelligence (AI)-based decision-making applications,” he explains, before commenting that the vehicles’ situational awareness depends on data from sensors with “limited performance interpreted by algorithms which must respond in time scales down to milliseconds”.
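
One way to picture that millisecond constraint is a deadline check wrapped around each decision cycle: a minimal sketch, assuming a 50 ms budget of our own choosing rather than any figure from the article.

```python
# Sketch of a deadline check: if the perception/decision algorithm misses
# its time budget, the safest assumption is that its output is no longer
# valid for a fast-changing scene.

import time

DEADLINE_S = 0.050  # assumed 50 ms budget for one sense-decide cycle

def run_with_deadline(decide, sensor_frame):
    """Run one decision cycle; replace its output if it overruns."""
    start = time.monotonic()
    action = decide(sensor_frame)
    elapsed = time.monotonic() - start
    if elapsed > DEADLINE_S:
        # A late answer can be as bad as a wrong one; degrade instead.
        return "FALLBACK_ACTION"
    return action

def slow_decide(frame):
    time.sleep(0.1)  # simulate an overrunning algorithm
    return "STEER_LEFT"

print(run_with_deadline(slow_decide, sensor_frame=None))  # -> FALLBACK_ACTION
```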

There are a number of other factors to consider, including how legal liabilities need to be properly and clearly defined: establishing where responsibility would lie in the event of an accident, particularly one resulting from a technical malfunction.

Design to meet society’s expectations

Despite this complexity, he adds: “In my view, AVs and CAVs can only be engineered to meet society’s expectations for risk and responsibility by using a hierarchical architecture such as 4D/RCS with in-built limits on the authorization of actions (their authorized power) within every node. This will need to complement safety activities such as Safety of the Intended Functionality (SOTIF) and System-Theoretic Process Analysis (STPA).”
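
The idea of in-built limits on the authorization of actions within every node can be sketched as a simple clamp: each node refuses to pass on a command that exceeds its own authority, so a faulty higher-level decision cannot exceed that node’s envelope. The structure and numbers below are invented for illustration and greatly simplify what a real 4D/RCS node does.

```python
# Toy illustration of per-node authorized power in a hierarchical
# controller. All limits here are made up for the example.

from dataclasses import dataclass

@dataclass
class NodeAuthority:
    max_speed_mps: float    # highest speed this node may command
    max_decel_mps2: float   # strongest braking this node may command

def authorize(command_speed, command_decel, authority):
    """Clamp a requested command to the node's authorized limits."""
    speed = min(command_speed, authority.max_speed_mps)
    decel = min(command_decel, authority.max_decel_mps2)
    return speed, decel

# Example: a planning node requests 40 m/s and 8 m/s^2 braking, but this
# vehicle-level node is only authorized up to 31 m/s (~70 mph) and 6 m/s^2.
vehicle_node = NodeAuthority(max_speed_mps=31.0, max_decel_mps2=6.0)
print(authorize(40.0, 8.0, vehicle_node))  # -> (31.0, 6.0)
```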

Much work, he says, is being done by all of the relevant professions to anticipate AV problems, and these discussions are often taking place through professional bodies such as the IET. Collaboration between governments, automotive manufacturers, and the insurance and IT industries is essential to ensure that the vehicles are safe, and to determine the correct apportioning of liabilities and responsibilities in the event of an accident. Through such liaisons, they can work to prevent accidents. “One example is the Automated Lane Keeping System (ALKS), where technical design requirements are under discussion based on a United Nations requirement,” he reveals.

Tech and legal alignment

So, how are manufacturers, insurance companies and regulators working to improve the technology to prevent accidents, and to resolve legal liability issues? Torieni believes that laws and regulations need to be aligned with the technology. He explains: “For instance, in the context of responsible machine learning (which directly relates to autonomous driving), there is still no reliable mathematical definition for terms such as fairness and explicability, while almost all policy-making documents mention that algorithms should be fair and explainable. Policymakers should closely collaborate with researchers and engineers to give a tangible pathway for the implementation of such systems and propose concrete guidelines that help engineers to implement trustworthy systems.”

There is still work to do to ensure that CAVs – and particularly AVs – can operate safely, and indeed more safely than a human-driven vehicle. Gillespie concludes by suggesting that commercial pressures and litigation will ultimately create the impetus to improve safety, but that progress will be slow and expensive.

He therefore calls on the UK’s Department for Transport to establish “an independent national road accident investigation body and actively work with the United Nations to harmonize technical regulations. This will make the interaction of the law and technical advances efficient and for the benefit of the public.”

Torieni concludes that policy standardization for safe autonomous driving would be a good first step. The algorithms must also be carefully designed to make them reliable, fault-tolerant, safe, and secure against various cyber-attacks and errors. These include what are called poisoning and adversarial attacks, as well as other common attacks and malfunctions that affect AI-based algorithms. So, the prevention of the inevitable AV accidents is very much a complex work in progress.

