AI: The next smart step for automotive

Artificial intelligence – machines that can learn and act on their own – is enabling the automotive industry to make great leaps toward autonomy. In a few cases, it's already on the street. In research centres and experimental vehicles, it's helping cars understand the world around them.

Artificial intelligence (AI) has become a buzzword, with excitement about its potential in many sectors, including finance, healthcare and automotive. Sometimes the definition is blurred: AI is used to describe a variety of methodologies and goals.

Purists insist that true artificial intelligence requires a machine that can think like a human.

While the installation rate of AI-based systems in new vehicles was only 8% in 2015, analyst firm IHS Markit believes that by 2025 the figure will reach 100%, meaning there will be more than one AI-reliant ECU per car produced.

The two AI technologies that are powering the automotive space are machine learning and deep learning.

Machine learning: This methodology uses large amounts of data to train computers to perform activities such as recognising a stop sign in the real world. Instead of a software engineer programming every step of this process, the computer system is fed immense amounts of visual data in the form of photographs of stop signs in a wide variety of scenes, lighting and weather conditions. It then creates algorithms to recognise the stop sign and refines them as it receives more input. Eventually, the system learns what a stop sign looks like and can recognise one it's never seen before.
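
To make the idea concrete, here is a minimal sketch of that workflow in Python using scikit-learn. The feature vectors and labels are random placeholders standing in for real, labelled photographs; it shows the shape of the process, not a production system.

```python
# Toy illustration of the machine-learning workflow described above:
# the system is shown labelled examples and fits a model, rather than
# being hand-programmed with rules. Features and labels are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-ins for feature vectors extracted from photos (e.g. colour
# histograms), with label 1 = "stop sign present", 0 = "no stop sign".
X = np.random.rand(1000, 64)
y = np.random.randint(0, 2, 1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = RandomForestClassifier(n_estimators=100)
model.fit(X_train, y_train)        # "learning" from the labelled data

# The trained model can now score images it has never seen before.
print("held-out accuracy:", model.score(X_test, y_test))
```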

Deep learning: This methodology links together many powerful processors that process information in layers. Each layer focuses more closely on areas of interest and ignores the rest of the data, while each processor is linked with others in a network sometimes known as a “neural net”, because it simulates the interconnectivity of neurons in the human brain. In the stop sign analogy, the first layer might identify the edges in an image; it would then pass that information to a processor in the next layer. Deep learning has powered advances in image classification, speech recognition and language understanding. It's being used now for computer vision in automotive systems.
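
A minimal PyTorch sketch (the framework choice is an assumption, not one named here) can illustrate the layered structure: early convolutional layers respond to simple patterns such as edges, later layers combine them into higher-level shapes, and a final layer scores whether a stop sign is present.

```python
# Information flows layer by layer, from edge-like filters to a decision.
import torch
import torch.nn as nn

class TinyStopSignNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1),   # layer 1: edge-like filters
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1),  # layer 2: combinations of edges
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # stop sign / not

    def forward(self, x):
        x = self.features(x)                  # pass through the layers
        return self.classifier(x.flatten(1))

net = TinyStopSignNet()
dummy_frame = torch.randn(1, 3, 64, 64)       # stand-in for a camera frame
print(net(dummy_frame).shape)                 # -> torch.Size([1, 2])
```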

In addition to these methodologies, AI needs plenty of data to work with.

Big data: Huge amounts of data are needed for machine learning. To train an autonomous driving system, both computer simulations and real-world information captured by cameras mounted on vehicles are used. Some driving data sets have been offered for research and commercial use, including Mapillary’s Vistas Dataset for object recognition on street-level imagery, and Daimler's Cityscapes, which offers stereo video sequences recorded in street scenes from 50 different cities.
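
How such data is ingested varies by dataset. The directory layout in the sketch below is hypothetical, not the actual structure of Vistas or Cityscapes, but it shows a typical first step: pairing every image with its annotation before training begins.

```python
# Hedged sketch: index a street-scene dataset by pairing images with
# annotation files. "street_scenes" and the layout are placeholders.
from pathlib import Path

DATA_ROOT = Path("street_scenes")

def index_dataset(root: Path):
    """Pair each image with its annotation file by shared filename stem."""
    pairs = []
    for img in sorted(root.glob("images/*.png")):
        label = root / "labels" / (img.stem + ".json")
        if label.exists():
            pairs.append((img, label))
    return pairs

pairs = index_dataset(DATA_ROOT)
print(f"{len(pairs)} image/annotation pairs found")
```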

Use cases on the road today

Computer vision and sensor fusion are two of the major use cases for AI, according to Strategy Analytics automotive analyst Angelos Lakrintis. Deep learning is used to train the system's model of the real world. A deep learning system can take the massive amounts of data generated each second by the multiple sensors in the car and translate it into an accurate depiction of the surroundings.
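
As a toy illustration of the fusion step, the sketch below merges two noisy range estimates for the same object, weighting each sensor by its reliability. Real automotive stacks use far richer methods, such as Kalman filters tracking many targets at once.

```python
# Inverse-variance weighting: trust the less noisy sensor more, and end
# up with an estimate more certain than either sensor alone.
def fuse(range_camera, var_camera, range_radar, var_radar):
    w_cam = 1.0 / var_camera
    w_rad = 1.0 / var_radar
    fused = (w_cam * range_camera + w_rad * range_radar) / (w_cam + w_rad)
    fused_var = 1.0 / (w_cam + w_rad)   # smaller than either input variance
    return fused, fused_var

# Camera says the pedestrian is ~14.2 m away (noisy); radar says 13.6 m.
print(fuse(14.2, 1.0, 13.6, 0.25))      # result sits closer to the radar
```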

Danny Shapiro, senior director of NVIDIA, explains that this process has two phases: first, the AI models are trained using data. Once the AI is trained to, for example, recognise a stop sign, the same neural network is deployed in a vehicle. Then, as the car drives around, the AI system can recognise objects in the live camera feed.

“It's an iterative process, where they train the AI, put it in the car to test whether it's accurate or not. They send the data back to the data centre, run it through the neural network there and deploy it back into the vehicle,” Shapiro says. “That's why the ability to update over-the-air is so important.”
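
Reduced to pseudocode, that loop looks something like the sketch below. Every function is a hypothetical stub standing in for a large real system (data-centre training, fleet logging, over-the-air deployment), so only the control flow is genuine.

```python
# Hypothetical stubs: each stands in for a large real-world system.
def train_in_data_centre(model, hard_cases):
    model["version"] += 1               # stand-in for retraining on new data
    return model

def deploy_over_the_air(model):
    print(f"OTA update: deploying model v{model['version']} to the fleet")

def drive_and_collect(model):
    # The in-vehicle network classifies each camera frame; frames it
    # gets wrong are flagged and sent back for the next training round.
    return ["hard_frame_001.png", "hard_frame_002.png"]

model = {"version": 0}
for _ in range(3):                      # train -> deploy -> test -> repeat
    hard_cases = drive_and_collect(model)
    model = train_in_data_centre(model, hard_cases)
    deploy_over_the_air(model)
```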

Toyota’s Guardian Angel, announced in May, could be the biggest next step for in-vehicle AI, according to Lakrintis. It will run in the background, watching for situations such as driver distraction or an impending collision, and warn the driver to take action. Toyota said that a production-ready Guardian Angel could be available on 2020 models, using NVIDIA's DriveWorks and Drive PX 2.

Personalisation

Another important use for artificial intelligence will be understanding an individual driver's behaviour in order to take the virtual assistant model to the next level.

Virtual assistants parse big data to analyse someone's behaviour, inferring preferences and future needs. Some use machine learning to continually get better at their tasks. Integrations with Siri, Alexa, Google Assistant, Cortana and IBM Watson have been launched, planned or demonstrated by most of the top carmakers, according to Lakrintis. In connected cars, they could suggest a restaurant along a route or remind the driver of a meeting. BMW and Hyundai have already launched Alexa skills such as locking the doors with a voice command.

Mercedes-Benz's Fit & Healthy concept, introduced at CES 2017, would use a variety of in-cockpit sensors to measure the driver's vital signs in order to make tailored suggestions for enhancing his or her well-being. AI could offer a stress-free route on the navigation system and select music to suit the driver's mood and the traffic conditions.
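
A deliberately simple, rule-based sketch can illustrate the idea. The thresholds and categories below are invented for illustration and are not Mercedes-Benz's actual logic, which the company has not detailed.

```python
# Toy rules: in-cockpit vital signs drive route and music suggestions.
def wellbeing_suggestions(heart_rate_bpm, traffic_level):
    stressed = heart_rate_bpm > 100 or traffic_level == "heavy"
    return {
        "route": "scenic, low-traffic" if stressed else "fastest",
        "music": "calm playlist" if stressed else "driver's usual mix",
    }

print(wellbeing_suggestions(heart_rate_bpm=112, traffic_level="heavy"))
# -> {'route': 'scenic, low-traffic', 'music': 'calm playlist'}
```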

Today's virtual assistants are based on artificial intelligence algorithms working in the background, according to Luca De Ambroggi, principal analyst for automotive electronics at IHS Markit. They can become more complex, thanks to more advanced neural networks.

For example, he says: “If I tell the virtual assistant that I want to eat some spaghetti, it knows to look for an Italian restaurant. The virtual assistant is able, from the context, to elaborate an appropriate answer. This is coming possibly within the next couple of years.”
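
That contextual step can be pictured with a deliberately crude sketch: a lookup from a concrete request to the broader category a search needs. Production assistants use learned language models rather than a hand-written table like this.

```python
# Map a concrete food request to the cuisine category a search needs.
FOOD_TO_CUISINE = {
    "spaghetti": "Italian",
    "sushi": "Japanese",
    "tacos": "Mexican",
}

def infer_search_query(utterance: str) -> str:
    for food, cuisine in FOOD_TO_CUISINE.items():
        if food in utterance.lower():
            return f"{cuisine} restaurant near route"
    return "restaurant near route"      # fall back to a generic search

print(infer_search_query("I want to eat some spaghetti"))
# -> "Italian restaurant near route"
```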

German Autolabs’ Chris is another example in the pipeline. Billed as a “digital co-driver”, it will begin by enabling voice or gesture control for common infotainment functions: messaging, email, navigation, calls and music. The company says that, unlike Alexa et al., this aftermarket device has been designed specifically for drivers. It's powered by advanced AI and natural language processing.

Holger Weiss, CEO of German Autolabs, explains that today's networks and cloud computing, plus a level of acceptance from consumers, mean the time is right for driver assistants to make driving easier, safer and more predictable.

At the same time, the aftermarket device is a way for German Autolabs to quickly build up a large data set that can later be embedded in new cars, according to Weiss.

Because of automakers' long product cycles, Weiss says, if his technology were to be included in production vehicles, it likely wouldn't hit the road and begin collecting data on which to train AI until 2025.

German Autolabs' approach is: “Let's use the time to train our AI on what we have now. In order to serve the automotive industry three or four years from now with mature software and a dataset, we start now in the aftermarket to collect data.”

With all the different assistants now in the market, one issue is how they might work together. Might a driver have Alexa at home, Siri on the phone and Chris in the car? Could Chris use Alexa skills? Could personalisation information from Chris be transferred to Alexa?

Says Weiss: “That is something no one knows. At some point, you will be able to say to Chris, ‘Have Alexa order a pizza’. But to do that, we need a holistic interface or an exchange standard. This has to be overcome on an ecosystem level.”    

The next challenges

The next hurdle is moving from an autonomous vehicle that can identify what's around it to one that has true situational awareness and can reliably take the right action.

The first challenge, according to Silvio Savarese, director of the SAIL-Toyota Center for AI Research, is for computer vision systems to recognise objects that are severely occluded: a person behind a car, that stop sign obscured by branches, or anything in low-light or low-visibility conditions.

The second challenge is procuring enough data. “Deep learning and machine learning require a lot of data. Even when the data is abundant, it doesn't tend to be very robust,” he says. While there may be abundant data derived from driving, that body of data may not include enough instances of rare occurrences, such as a partly hidden stop sign.
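
One common response to this rare-case problem (not one Savarese names here) is data augmentation: synthetically occluding training images so the model encounters “partly hidden stop signs” far more often than it would on real roads. The NumPy sketch below is minimal; real pipelines also vary lighting, weather and viewpoint.

```python
# Synthetic occlusion: black out a random rectangle to mimic a branch
# or other obstruction partly hiding the sign.
import numpy as np

rng = np.random.default_rng(0)

def random_occlusion(image: np.ndarray, max_frac: float = 0.4) -> np.ndarray:
    """Black out a random rectangle up to max_frac of each side length."""
    h, w = image.shape[:2]
    oh = rng.integers(1, int(h * max_frac))
    ow = rng.integers(1, int(w * max_frac))
    y, x = rng.integers(0, h - oh), rng.integers(0, w - ow)
    out = image.copy()
    out[y:y + oh, x:x + ow] = 0         # the simulated obstruction
    return out

frame = rng.random((64, 64, 3))          # stand-in for a stop-sign photo
augmented = random_occlusion(frame)
```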

NVIDIA’s Shapiro adds that, while artificial intelligence has vastly improved virtual assistants and voice recognition, automotive AI is more challenging because of the safety issues. He says: “We see the need for a combination of actual driven miles and simulations to create scenarios that are too dangerous to test with actual cars, for example a child running out into the middle of the street.”

The third problem to be solved is situational awareness: the vehicle's ability to understand all the fixed and moving parts in a scenario and make the right decision. “This is where current technology fails,” Savarese says. It's also the reason why most of today's tests are done on highways or in controlled environments like office parks.

In search of algorithms

To solve these problems, Savarese thinks the industry must invent new architectures and new algorithms. “An off-the-shelf deep learning model can't do it,” he says.

Dr Steven Peters, head of the AI research team within Corporate Research at Daimler AG, agrees that more algorithms are necessary – and not only for deep learning. He says: “Deep learning is not the only approach. A lot of highly specialised algorithms are interesting.” For example, an algorithm could potentially enhance a poor data set or manage how an artificial intelligence can learn while complying with specific privacy regulations. “Finding the best algorithm is a difficult task,” Peters notes. “But sometimes, it's not a case of making the best algorithm even better; it's finding the most suitable algorithm for a specific use case.”
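
In practice, finding the most suitable algorithm often looks like the sketch below: comparing candidate models by cross-validation on the data for a specific use case, rather than assuming one approach always wins. The data here is a random placeholder.

```python
# Compare candidate algorithms on the same task via cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(500, 10)              # placeholder features
y = np.random.randint(0, 2, 500)         # placeholder labels

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(),
    "k-nearest neighbours": KNeighborsClassifier(),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```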

That is a large focus of Tombari's research. He says: “One of the main directions for us is not just providing new algorithms that are more accurate and precise, but ones that are particularly suited to not draining all the computational power available, so you can run other algorithms in parallel.”

Then, there's checking deep learning's self-created algorithms. De Ambroggi of IHS Markit says: “A good algorithm is one thing, but the other problem is that the safety of this type of algorithm is not clear. If you want to base the system on neural networks or deep learning, you need to be sure that this algorithm is certifiable, for example via ISO 26262, and that you can somehow control how the decision is made. This is not in place today.”

He foresees the possibility of something like a driving licence for an AI system: a standard certification process to make sure it won't run over the dog. He thinks this would need to happen at the automaker level: “When the architecture is in place, the OEM will need to run through this standard certification. When you buy the car, this has been done.”
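
At its simplest, such a process might resemble a fixed scenario suite that every model version must pass before sign-off. The sketch below is purely illustrative; the scenarios and pass threshold are invented, and real certification against standards such as ISO 26262 is far broader.

```python
# A hypothetical "driving licence" check: the model must clear a fixed
# suite of (scene, expected decision) pairs at a required pass rate.
def certify(model_predict, scenario_suite, required_pass_rate=0.99):
    passed = sum(
        1 for scene, expected in scenario_suite
        if model_predict(scene) == expected
    )
    rate = passed / len(scenario_suite)
    return rate >= required_pass_rate, rate

# Toy suite: each entry is (scene description, expected decision).
suite = [("dog_in_road", "brake"), ("clear_road", "proceed")]
ok, rate = certify(lambda s: "brake" if "dog" in s else "proceed", suite)
print(f"certified: {ok} (pass rate {rate:.0%})")
```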


