Intel’s RSS Framework Sets Safety Apart in AV Development

Safety is one of many things autonomous vehicles have to get right to succeed. But according to Intel, it’s different from all the rest.
The semiconductor giant takes that idea so seriously, it not only adopted a separate approach for developing safety functions, it’s also giving away a safety framework to form the basis of industry standards and bolster other companies’ self-driving platforms.
Intel’s campaign for its discrete approach to safety, called Responsibility-Sensitive Safety (RSS), goes all the way to the top. Before his resignation last month — for unrelated reasons — former CEO Brian Krzanich met with auto industry executives and urged them to rally around RSS as a common way to make sure AVs are as safe as possible.
RSS is already on the road in tests that Intel’s Mobileye business is conducting in Israel and will soon launch in the US, the company says. It’s also being evaluated by industry partners, regulatory bodies in multiple regions and working groups at standards organizations.
Unlike other aspects of AV performance, safety can be formally defined and verified in a way that everyone can agree on, Intel says. RSS is intended to keep vehicles in a “safe state” in which they are incapable of causing accidents.
Achieving that state requires a different approach from the kinds of engineering that go into sensing the world around a car or making driving decisions, says Dan Galves, chief communications officer at Mobileye, the AV startup that Intel acquired last year for more than $15 billion. Mobileye learned this through experience.
While testing its self-driving software in a simulator, the company found that its virtual vehicles sometimes caused collisions while carrying out tricky operations like merging. The software used reinforcement learning, a process by which an artificial intelligence tries a task many times and learns the best way to do it by recording and analyzing outcomes.
For example, the AI might go through 300 to 400 simulated runs of a merging maneuver, Galves said. In fewer than 1 percent of those runs it failed, which usually meant the car didn’t get where it was supposed to go. But in a few cases, it ran into other cars, learning along the way not to repeat whatever maneuver led to the crash.
“The engineers realized that the reinforcement learning was allowing these accidents,” Galves told The Connected Car.
The system would make driving decisions by optimizing for several goals at once, such as driving as fast as possible, not scaring passengers and not annoying other drivers. When those goals conflicted with one another, the AI would weigh them against each other and determine on a percentage basis the best way to go forward.
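That kind of weighted trade-off can be sketched in a few lines. This is a hypothetical illustration of the approach the article describes, not Mobileye's actual reward function; the goal names and weights are invented for the example.

```python
def score_maneuver(progress, comfort, courtesy, weights=(0.5, 0.3, 0.2)):
    """Combine competing driving goals (each scored 0.0-1.0) into a single
    score, weighing them against each other on a percentage basis.
    Goals and weights are illustrative assumptions, not Mobileye's."""
    return sum(w * g for w, g in zip(weights, (progress, comfort, courtesy)))

# A fast but jerky merge vs. a slower, smoother, more courteous one:
aggressive = score_maneuver(progress=0.9, comfort=0.4, courtesy=0.5)
cautious = score_maneuver(progress=0.6, comfort=0.9, courtesy=0.9)
```

In this toy scoring, the cautious maneuver wins, but nothing in the weighted sum *forbids* an unsafe choice; a high enough score on other goals can outweigh a safety penalty, which is exactly the problem Mobileye identified.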
Mobileye decided this “probabilistic” method wasn’t the right way to make safety decisions, because safety is an absolute requirement, not a relative one. The company needed a way to make sure its AVs simply wouldn’t cause accidents.
RSS is at the heart of this effort. It’s a set of verifiable mathematical formulas for what constitutes safe driving. The idea is that an AI that relies on these formulas can’t make driving decisions that violate the formulas.
For example, it’s possible to define a safe corridor around an autonomous vehicle based on how fast it’s going. With that corridor defined, the AV knows how close it can get to the car ahead and still have time to stop. If a human-driven car cuts in front in an unsafe way and the AV hits it because there isn’t room to brake, the AV isn’t responsible for the collision.
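The published RSS model expresses that corridor as a minimum safe longitudinal gap: the rear car must be able to stop even if it accelerates through its own response time while the car ahead brakes as hard as possible. A minimal sketch of that rule follows; the default parameter values (response time, acceleration and braking limits) are illustrative assumptions, not Intel's calibrated settings.

```python
def safe_longitudinal_distance(v_rear, v_front, rho=1.0,
                               a_max_accel=3.5, a_min_brake=4.0,
                               a_max_brake=8.0):
    """Minimum safe gap in meters between a rear car (speed v_rear, m/s)
    and the car ahead (speed v_front, m/s), per the RSS longitudinal rule.

    Worst case assumed: the rear car accelerates at a_max_accel for its
    response time rho, then brakes at only a_min_brake, while the front
    car brakes at the maximum a_max_brake. Defaults are illustrative.
    """
    v_rear_after = v_rear + rho * a_max_accel  # rear speed after response time
    gap = (v_rear * rho
           + 0.5 * a_max_accel * rho ** 2          # distance covered while reacting
           + v_rear_after ** 2 / (2 * a_min_brake)  # rear car's braking distance
           - v_front ** 2 / (2 * a_max_brake))      # front car's braking distance
    return max(gap, 0.0)  # a negative result means any gap is already safe
```

If another car cuts in closer than this distance allows, the AV by definition had no room to brake, which is the basis for the framework's claim that the AV is not responsible for the resulting collision.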
Intel didn’t create RSS to cast blame on imperfect human drivers, but it might help to address safety concerns about AVs, Galves said. A key goal of the effort is to establish a consensus among standards bodies, vendors and regulators on what constitutes a safe self-driving system.
That’s a tall order, according to Timothy Carone, a University of Notre Dame professor who is an expert in autonomous systems.
“Using this framework will help out, but it’s not a silver bullet,” Carone told The Connected Car. Safe driving can’t be reduced to mathematical formulas, he said. “It’s not a closed system. There are too many random things that can happen.”
Those can include changing weather conditions, road work and challenges like making room on a blocked two-lane bridge for an emergency vehicle to get to the scene of an accident.
But RSS might be useful as a safety benchmark, such as in standard certification tests of AVs on a closed track, he said.
Other companies are likely to propose other frameworks, so there may be compromises ahead, Carone said.
Intel says it wants to spark conversations in the industry to help advance safety. Validating safety through a formal framework might take far less time than putting AVs on the road to learn as they go and build up a track record, Intel’s Galves said.
“Driving in AV mode for billions of miles is probably not the smartest idea,” Galves said. “If there’s a better way of validating these systems without having to do that, let’s discuss.”
— Stephen Lawson is a freelance writer based in San Francisco. Follow him on Twitter @sdlawsonmedia.