Nvidia Makes Life a (Simulated) Hell for Self-Driving Systems

Imagine living a Groundhog Day commute, one where each day you run into rain, snow, blinding sun and red-light runners — while making the same left turn again and again and again.

Rather not relive that nightmare existence? That’s okay. You can leave it to Bill Murray’s movie and to graphics-chip giant Nvidia. The vendor builds highways to hell in software, replete with nightmarish weather and bad drivers, and uses them to train and test autonomous driving platforms. Simulation testing is necessary and likely to grow even more in the wake of Uber’s recent fatal self-driving car crash, Nvidia CEO Jensen Huang said last month.

Nvidia isn’t the only company doing this. Google-backed autonomous vehicle developer Waymo, for example, says its cars have traveled billions of miles in simulation in addition to millions of miles in the real world. But Nvidia’s technology could end up in self-driving cars from many manufacturers. It claims more than 370 automotive partners, including Audi, Toyota, Tesla and Volvo.

The artificial intelligence systems Nvidia develops for driving learn by experience how to detect and respond to situations, much like humans do. Road tests are part of that, but they don’t give AI enough challenges. Test cars can log day after day of “boring miles,” repeatedly collecting the same information, said Danny Shapiro, Nvidia’s senior director of automotive.

“To develop a robust system, we need to be able to handle the things that you may never even see in your lifetime,” Shapiro told The Connected Car.

Nvidia draws upon its videogame heritage and powerful supercomputers to create the virtual and modified worlds that challenge its hardware and software. To decide what to simulate, it goes where the problems are.

The process begins with training neural networks — the artificial intelligence systems that make driving decisions. These networks rely on the vehicle’s sensors to detect what’s going on around the car, so Nvidia trains them by using high-definition video and other data to simulate inputs from sensors.

Nvidia builds some driving situations from scratch, much as videogame designers create photorealistic worlds. But other simulations are simply modified versions of scenes cars captured on real drives. For example, if a test vehicle once encountered a stalled car on a freeway on a sunny day, developers add simulated rain, snow or night-time lighting, then train a neural network to deal with the stalled car in all those conditions.
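Nvidia hasn’t published its augmentation pipeline, but the idea of turning one captured scene into many training variants can be sketched in a few lines. Below is a minimal, hypothetical example using NumPy image arrays: the dimming factor, rain density, and placeholder frame are all invented for illustration, and real tools would render far more physically plausible weather.

```python
import numpy as np

def simulate_night(frame: np.ndarray, dim_factor: float = 0.25) -> np.ndarray:
    """Crudely approximate night-time lighting by dimming every pixel."""
    return (frame.astype(np.float32) * dim_factor).astype(np.uint8)

def simulate_rain(frame: np.ndarray, density: float = 0.002, seed: int = 0) -> np.ndarray:
    """Overlay scattered bright pixels as a naive stand-in for rain droplets."""
    rng = np.random.default_rng(seed)
    out = frame.copy()
    mask = rng.random(frame.shape[:2]) < density
    out[mask] = 230  # near-white droplets
    return out

# One captured sunny scene (here a flat gray placeholder image) becomes
# several training variants of the same stalled-car scenario.
sunny = np.full((480, 640, 3), 128, dtype=np.uint8)
variants = [sunny, simulate_night(sunny), simulate_rain(sunny)]
```

Each variant keeps the scene’s geometry, including the stalled car, while changing only the conditions the network must cope with.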

Inputs are limited to conditions that those sensors — primarily cameras, radar and lidar — can detect, Shapiro said. So even if rain would reduce tire traction, for example, Nvidia couldn’t simulate that directly unless traction detectors were part of a car’s sensor suite.

Automakers and other self-driving platform builders can continue training and testing in simulation as they refine how their own products respond to given conditions, Shapiro said. A car’s software could be trained, for example, to activate a traction-control system when it detects rain.

Because it’s designed to complement many uneventful miles of real-world training, simulation training focuses on situations where human-driven cars have the most problems. Statistics from agencies like the National Highway Traffic Safety Administration show where most accidents occur — including large, busy intersections — so designers craft a lot of scenarios in those areas, Shapiro said.
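The article doesn’t say how scenario mixes are chosen, but biasing a training batch toward accident-prone situations amounts to weighted sampling. A hypothetical sketch follows; the scenario names and weights are invented for illustration, not actual NHTSA figures.

```python
import random

# Hypothetical relative crash frequencies by road situation.
# These weights are made up for illustration.
scenario_weights = {
    "busy_intersection_left_turn": 0.40,
    "highway_merge": 0.25,
    "stalled_car_on_freeway": 0.20,
    "rural_straightaway": 0.15,
}

def sample_scenarios(n: int, seed: int = 42) -> list:
    """Draw a batch of simulation scenarios biased toward trouble spots."""
    rng = random.Random(seed)
    names = list(scenario_weights)
    weights = list(scenario_weights.values())
    return rng.choices(names, weights=weights, k=n)

batch = sample_scenarios(1000)
```

A batch drawn this way spends most of its simulated miles where human drivers crash most, rather than on uneventful straightaways.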

This so-called “software-in-the-loop” simulation for training AI can be done entirely in a data center. But in an actual vehicle, neural networks can only perform as well as the hardware they run on. To address this, Nvidia’s Drive Constellation platform, unveiled last month, brings hardware into the loop by feeding simulations into the computers it built to run its software in vehicles. This shows how well the whole system can deal with inputs and make the right driving decisions, according to the vendor.
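The structure of such a closed loop can be sketched conceptually: the simulator emits sensor frames, the system under test returns a driving decision, and the decisions are logged for scoring. This is only a shape, not Drive Constellation itself; the `SensorFrame` fields and the stubbed `driving_stack` are assumptions, and in a hardware-in-the-loop setup that stub would be replaced by an interface to the actual in-vehicle computer.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    """Simplified stand-in for fused camera/radar/lidar input."""
    obstacle_ahead: bool
    raining: bool

def driving_stack(frame: SensorFrame) -> str:
    # Stub for the system under test. A hardware-in-the-loop rig would
    # forward the frame to the target computer and read back its output.
    return "brake" if frame.obstacle_ahead else "cruise"

def run_simulation(frames: list) -> list:
    # Closed loop: feed each simulated frame in, record each decision out.
    return [driving_stack(f) for f in frames]

decisions = run_simulation([
    SensorFrame(obstacle_ahead=False, raining=False),
    SensorFrame(obstacle_ahead=True, raining=True),
])
```

Whether the loop closes in a data center (software-in-the-loop) or through the vehicle’s own computer (hardware-in-the-loop) changes only what sits behind `driving_stack`; the surrounding harness is the same.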

Preparing neural networks to deal with rare scenarios isn’t as hard as it might sound because of AI’s ability to partially learn autonomously. “We’re not programming it explicitly … so we don’t necessarily have to come up with every possible scenario in the world,” Shapiro said.
