Driverless Tech Must Predict the Future to Be Safe

Common sense suggests it is not enough for connected and autonomous vehicles (CAVs) to operate only according to known scenarios, obstacles and potential causes of accidents.

CAVs must also be able to predict doom and react to uncommon scenarios.

Yahoo Finance reports that “a new self-driving car algorithm developed by researchers at Germany’s Technical University of Munich (TUM) thrives on thinking about the worst thing that could happen at every moment and then figuring out how to get out of it without endangering or obstructing traffic.” Driverless start-up Cruise has developed an artificial intelligence tool, called the Continuous Learning Machine, which automatically labels and mines training data to enable connected and autonomous vehicles to react to unpredicted events such as bicycles swerving onto the road amid traffic, or children running into the road.
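To make the “predict doom” idea concrete, here is a minimal sketch of a worst-case safety check in the spirit of the TUM approach described above. It is not the TUM algorithm itself; every name, parameter and number is a hypothetical illustration. The ego vehicle assumes the vehicle ahead brakes as hard as physically plausible and verifies that it could still stop in time.

```python
# Minimal sketch of a worst-case ("predict doom") check.
# All names, parameters and numbers are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class RoadUser:
    gap_m: float           # current longitudinal gap to the ego vehicle (metres)
    speed_mps: float       # current speed of the road user (m/s)
    max_brake_mps2: float  # hardest braking we assume it could apply (m/s^2)

def stopping_distance(speed_mps: float, decel_mps2: float) -> float:
    """Distance covered while braking to a standstill at constant deceleration."""
    return speed_mps ** 2 / (2.0 * decel_mps2)

def worst_case_safe(ego_speed_mps: float, ego_brake_mps2: float,
                    leader: RoadUser, margin_m: float = 2.0) -> bool:
    """True if the ego vehicle can still stop without collision even when the
    vehicle ahead brakes as hard as we assume it possibly can."""
    ego_stop = stopping_distance(ego_speed_mps, ego_brake_mps2)
    leader_stop = stopping_distance(leader.speed_mps, leader.max_brake_mps2)
    # Worst case: the leader stops in the shortest distance physics allows.
    return ego_stop <= leader.gap_m + leader_stop - margin_m

# Example: ego at 25 m/s with 6 m/s^2 braking; leader 30 m ahead at 15 m/s.
leader = RoadUser(gap_m=30.0, speed_mps=15.0, max_brake_mps2=8.0)
if not worst_case_safe(25.0, 6.0, leader):
    print("Worst case fails: reduce speed until it becomes survivable")
```

Checking one inequality like this at every planning step is cheap for a single leading vehicle; repeating it for every road user and every candidate manoeuvre is where, as Singh notes later, the computational cost mounts.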

Predicting doom

So, to what extent have connected and autonomous vehicles been able to predict doom and uncommon scenarios to improve vehicle safety? Paul Campion, CEO of the Transport Research Laboratory (TRL), responds: “The whole notion of AI is based on the creation of patterns, pattern recognition – learning from large quantities of data. No human is born knowing how to drive a car safely and computers are no different. So far, they are less effective than humans; they have to be given more data.”

The traditional approach to testing such systems is to drive thousands of miles. The problem with this approach is that it often isn’t viable. Testing untrained autonomous vehicles on real roads is unsafe and, for that reason, he says: “We don’t want to let untrained computers out onto them. The sheer unpredictability means that we would have to train computers to drive millions and millions of miles to ensure that they are safe.” Even connected vehicle technologies should ideally be tested virtually in simulated environments and on tracks before they are used on public roads. He therefore says there is a need to train computers in simulated environments “to go through a whole range of environmental conditions – to, for example, operate in rain, snow, or fog”.
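As a purely illustrative sketch of what “a whole range of environmental conditions” might look like in a simulated test harness, the snippet below enumerates combinations of weather, lighting and road surface for virtual runs. The names, values and the implied run_simulation step are all hypothetical.

```python
# Hypothetical sweep over environmental conditions for simulated test runs.
import itertools

WEATHER = ["clear", "rain", "snow", "fog"]
LIGHTING = ["day", "dusk", "night"]
ROAD_SURFACE = ["dry", "wet", "icy"]

def scenario_matrix():
    """Yield every combination of conditions the virtual driver must handle."""
    for weather, light, surface in itertools.product(WEATHER, LIGHTING, ROAD_SURFACE):
        yield {"weather": weather, "lighting": light, "surface": surface}

for scenario in scenario_matrix():
    # In a real harness, run_simulation(scenario) would execute here.
    print(scenario)
```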

However, there can’t be any dependency on data modelling alone. What is learned in simulation has to be tested in the real world to validate any finding, including how a vehicle can operate autonomously and cope with different driving events when sufficient connectivity isn’t available. To predict doom, connected and autonomous vehicles must be able to collate, store and analyze real-time data in the way human drivers do to react to different situations.

Smart Mobility Living Lab

He explains: “TRL is responsible for the Smart Mobility Living Lab – a testbed that runs on 24km of London roads. Only vehicles that can be driven on public roads can use it. You can’t just take an untrained AI engine and put it onto the public roads. So, we have built a virtual twin – a digital model of the real environment, designed to incorporate a representative range of road types and challenges.”

These challenges include ensuring that autonomous vehicles can deal with roundabouts, zebra crossings, traffic lights, and “bits of the road where the road markings aren’t so clear.” With this approach it becomes possible to train artificial intelligence systems safely, while introducing hazards into a virtual driving environment, as sketched below.
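The following sketch shows one simple way such hazards could be scattered along a digital-twin route for training runs. The event names and the injection scheme are hypothetical, not TRL’s actual tooling; the 24km figure merely echoes the real testbed.

```python
# Illustrative hazard injection along a virtual (digital-twin) route.
# Event names and placement logic are hypothetical.
import random

HAZARDS = [
    "roundabout",
    "zebra_crossing",
    "traffic_light",
    "faded_road_markings",
    "pedestrian_steps_out",
]

def inject_hazards(route_length_m: float, n_events: int, seed: int = 42):
    """Place n_events hazards at random positions along the virtual route."""
    rng = random.Random(seed)
    positions = sorted(rng.uniform(0, route_length_m) for _ in range(n_events))
    return [(pos, rng.choice(HAZARDS)) for pos in positions]

# A 24km virtual route, mirroring the length of the real testbed.
for position_m, hazard in inject_hazards(route_length_m=24_000, n_events=10):
    print(f"{position_m:8.1f} m: {hazard}")
```

Seeding the random generator keeps each training run reproducible, so a failure at a given hazard can be replayed exactly while the AI is retrained.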

He adds: “The real and the digital twin, working in parallel, enable us to design, test and teach. We can then take them out into the real world to prove that what we have done is effective. This has to be done with a human driver. We need to ensure that we have done as much as we can before we let it out into the real world.”

Evolving with data

However, Varun Krishna Murthy, senior research analyst at Frost & Sullivan, points to Waymo’s research. Waymo has been driving autonomous vehicles over millions of miles for close to a decade. As a result of testing Level 4 vehicles on public roads, the company has come across both successful and unsuccessful scenarios, and is therefore able to evolve the artificial intelligence and machine learning systems within its vehicles to address any challenges its tests reveal.

He adds: “Unfortunately, in one incident involving an Uber vehicle, a pedestrian was killed in 2018 in Arizona. There have been multiple collisions where Tesla has been involved, but I wouldn’t call Tesla fully autonomous as a driver was involved. Regulations and an approval mechanism are what is required to ensure that autonomous vehicles are safe in any situation.”

Algorithm flexibility

There is, therefore, a need for AI algorithms, such as the one developed by TUM, to be flexible. Vishal Singh, CTO of Zenduit, explains that the purpose of TUM’s algorithm is to constantly predict the worst possible situation. He says it can work, but from a computation perspective it could prove “extremely computationally expensive”, and he’s not sure it can feasibly be accomplished quickly.

He explains: “Improving the algorithm would be similar to improving any other machine learning algorithm: given enough data on uncommon events, and given actions to execute if such events occur, these algorithms would improve by increasing accuracy and the number of classes they can predict.”

Machine learning, he elaborates, works by using data to classify events into categories; for each category, you can write an algorithm to handle that situation. Regarding uncommon events, ones that aren’t contained within the training data set, he finds that it is possible to “identify the uncommon occurrences by using anomaly detection.” Yet there is often no category for uncommon occurrences, and so “there is no way to determine an action following the classification as an anomaly.” The sketch below illustrates the distinction.
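Here is a minimal sketch of the anomaly-detection idea Singh describes: events the system has categories for are passed to the classifier and its mapped action, while anything flagged as an anomaly has no learned action, so a safe fallback is invoked instead. The feature layout, the toy data and the fallback behaviour are all hypothetical; the detector is a standard scikit-learn isolation forest.

```python
# Sketch: route known events to the classifier, unknown events to a fallback.
# Feature layout, data and handlers are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy feature vectors for previously seen events,
# e.g. [relative_speed_mps, distance_m, lateral_offset_m].
training_events = np.array([
    [5.0, 30.0, 0.1], [4.5, 28.0, 0.0], [5.2, 31.0, 0.2],
    [4.8, 29.5, 0.1], [5.1, 30.5, 0.0], [4.9, 30.2, 0.1],
])

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(training_events)

def handle_event(features: np.ndarray) -> str:
    # predict() returns -1 for anomalies and 1 for inliers.
    if detector.predict(features.reshape(1, -1))[0] == -1:
        # Uncommon event: no learned category, hence no learned action.
        return "fallback: slow down and stop or hand over safely"
    return "run the classifier and execute the mapped action"

print(handle_event(np.array([5.0, 30.0, 0.1])))  # resembles training data
print(handle_event(np.array([25.0, 2.0, 3.0])))  # far outside training data
```

This separation is exactly Singh’s point: the detector can say “this is unusual”, but deciding what to do about it still requires either new labelled data or a hand-written safe fallback.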

‘Relatively straightforward’

Campion also finds that it is “relatively straightforward to provide a car that can operate in a known environment, but not in the edge cases.” So the prospect of Level 4 and Level 5 autonomous driving on city streets is, in his opinion, still a long way off.

It’s important to realize that these technologies can operate within limited use cases and environments. This means “the extreme levels of 4 and 5 are going to remain challenging”, he says, before adding: “There are already autonomous shuttles, but whistling up a car and saying, ‘I want to go to Glasgow’, is very unlikely in the next five years.”

Improving predictability

Krishna Murthy finds that vehicles’ ability to predict doom and certain uncommon events is getting better day by day: “The likes of Tesla’s Level 3 autonomous vehicles are also collecting data and learning. When fully autonomous vehicles are on the road, there will be no human drivers. At Level 3 a driver will need to be able to take control, but the world is working towards Level 4 autonomy, where a vehicle can drive itself within a specific geo-fenced region.”

He is confident that this will be achieved by 2025, while other industry commentators think it may take much longer before fully autonomous vehicles drive themselves safely on our roads. For now, most companies working towards a self-driving future are focused on testing. Through this testing it will be possible to collate and analyze data to improve scenario-prediction capability, and thus improve safety.

Yet, as Singh points out, this is a time-consuming effort, particularly as new ‘doom’ scenarios need to be defined. For this reason, he thinks fully autonomous vehicles won’t be commonplace for another five to ten years. The way to speed this up, he concludes, is to improve how the data is shared. There is also a prerequisite to “establish some baseline and protocols in how the data is collected, analyzed and assessed.” Data-sharing agreements are therefore needed to accelerate this process.

