AI Still Betting on a Perfect Human-Machine Marriage

When we read the words “artificial intelligence” (AI) we usually picture robots, machines, and circuit boards, not people.

However, some of those developing AI solutions for cars aim to integrate the human element into such systems. That’s rather counter-intuitive in a field that has historically aimed to replace human participation, not complement it. Several major players in vehicle technology are taking steps in this direction. One massive name among them is Toyota, whose Toyota Research Institute (TRI) is busy incorporating principles from a related discipline into Guardian, its advanced driver-assistance system (ADAS).

“When the field of AI was born in the 1950s, a few other scientists flipped the AI idea around,” said TRI’s mobility and advanced technology communications manager Nathan Kokes. “This other field was Intelligence Amplification, or IA. The idea of IA was human-centric, for people and machines to act in synergy; accomplishing together something better than either one could accomplish alone.”

Toyota’s goal is to have human and machine interface as seamlessly as possible to improve the vehicle’s performance. The company frequently cites the example of jet fighters, in which the pilot does not actually fly the craft directly. Instead, a flight control system “reads” the pilot’s inputs, correcting them as necessary to keep the aircraft within reasonable safety parameters.

For cars, this means that the human is in control of the vehicle in nearly all driving situations. However, as TRI’s CEO Gill Pratt said in remarks at the 2019 Consumer Electronics Show, “as the driver begins to reach the edge of a dynamically changing safety envelope, the machine begins to collaborate with the human driver, nudging the driver back into a safe corridor”.

The company leans on this language of cooperation when describing Guardian’s IA functionality. It would like to create a system with no hard, defined borders between the artificial and organic intelligence operating the vehicle. Rather, the two work together as “teammates”.
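To make the “safety envelope” idea concrete, the collaboration can be pictured as a weighted blend of driver and system commands, where the system’s share grows as the margin to the envelope’s edge shrinks. The sketch below is a toy illustration of that concept only; the `margin_threshold` parameter and the linear weighting are hypothetical, not Toyota’s actual Guardian algorithm.

```python
def blend_steering(driver_cmd: float, system_cmd: float,
                   safety_margin: float, margin_threshold: float = 1.0) -> float:
    """Toy illustration of 'envelope nudging' (not Toyota's algorithm).

    safety_margin: distance to the edge of the safety envelope
                   (>= margin_threshold means fully safe, 0 means at the edge).
    Returns a steering command that shifts from the driver's input
    toward the system's correction as the margin shrinks.
    """
    # Weight on the driver's command: 1.0 when safely inside the
    # envelope, falling linearly to 0.0 at the envelope's edge.
    w = min(max(safety_margin / margin_threshold, 0.0), 1.0)
    return w * driver_cmd + (1.0 - w) * system_cmd


# Well inside the envelope: the driver's command passes through unchanged.
print(blend_steering(1.0, 0.0, safety_margin=2.0))   # → 1.0
# At the edge: the system's corrective command takes over.
print(blend_steering(1.0, 0.0, safety_margin=0.0))   # → 0.0
# Halfway: an even blend, the "nudge" back toward the safe corridor.
print(blend_steering(1.0, 0.0, safety_margin=0.5))   # → 0.5
```

The key design point this illustrates is that there is no hand-off moment: control shifts continuously, which is what distinguishes the “teammate” model from a binary human-or-machine switch.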

In many ways, this is necessitated by the level of technology currently on the road, or about to drive off the factory floor. We’re still essentially at Level 2 on the Society of Automotive Engineers’ well-known 0-to-5 scale of driving autonomy, meaning that automation – and by extension, the application of artificial intelligence – is only partial and piecemeal. The driving experience is still make-or-break on the sack of flesh and bone piloting the vehicle, although the system can take over certain emergency and other functions when needed, automatic emergency braking (AEB) being just one example.

Another prominent company working with AI in the assisted/autonomous driving space is Mobileye. Although it’s a TRI rival in many ways, it has, interestingly, a similar approach to fusing human and artificial intelligence; it even employs similar phrases to get the point across. As with any respectable entity pushing toward ultimate vehicle autonomy, Mobileye aims to reach Vision Zero, no matter how improbable a world without car crashes might appear.

To get there, though, the driver becomes part of the solution rather than an element replaced by a “higher” intelligence imposing certain behaviors. “The idea of the system is, rather than intervening at the very last moment like AEB systems today are doing, to gently intervene with the driver’s decisions in advance, in order to prevent entering a dangerous situation in the first place,” said Mobileye’s CTO, Shai Shalev-Shwartz.

Mobileye uses braking as an example, arguing that while a solution such as AEB is effective, it is effective only as an emergency measure rather than a preventive one. In the company’s view, good preventive solutions would not only mitigate accidents but would do so without sacrificing the comfort of the driver and passengers, and while keeping the general flow of traffic intact without undue delays.

The company’s “prevention” approach is to formalize a set of best driver practices in a model it terms Responsibility-Sensitive Safety (RSS). In the words of a 2019 white paper on Mobileye’s ambitions in accident prevention, written by Shalev-Shwartz and two colleagues, “RSS formalizes an interpretation of reasonable human common sense. RSS is a rigorous mathematical model formalizing an interpretation of a set of principles of reason”.
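One concrete piece of that mathematical model is RSS’s rule for a minimum safe following distance: the rear car must be able to stop without collision even if the car ahead brakes as hard as physically possible, allowing for the rear driver’s response time. A minimal sketch of that longitudinal rule follows, based on the formula in Mobileye’s published RSS work; the specific parameter values (response time, acceleration and braking limits) are illustrative assumptions, not Mobileye’s calibrated figures.

```python
def rss_safe_distance(v_rear: float, v_front: float,
                      rho: float = 1.0,          # response time, s (assumed)
                      a_max_accel: float = 3.5,  # rear car's max acceleration, m/s^2 (assumed)
                      b_min_brake: float = 4.0,  # rear car's guaranteed braking, m/s^2 (assumed)
                      b_max_brake: float = 8.0   # front car's max braking, m/s^2 (assumed)
                      ) -> float:
    """Minimum safe longitudinal gap (metres) per the RSS rule.

    Worst case: during the response time rho the rear car accelerates at
    a_max_accel, then brakes at only b_min_brake, while the front car
    brakes at full b_max_brake the whole time.
    """
    v_rear_after_rho = v_rear + rho * a_max_accel
    d = (v_rear * rho                                  # distance covered during response time
         + 0.5 * a_max_accel * rho ** 2                # extra distance from worst-case acceleration
         + v_rear_after_rho ** 2 / (2 * b_min_brake)   # rear car's stopping distance
         - v_front ** 2 / (2 * b_max_brake))           # minus the front car's stopping distance
    return max(d, 0.0)                                 # a gap can't be negative


# Both cars at 20 m/s (~72 km/h): a substantial buffer is required.
print(round(rss_safe_distance(20.0, 20.0), 1))
# Rear car stationary, front car moving away fast: no gap needed.
print(rss_safe_distance(0.0, 30.0))   # → 0.0
```

Keeping the actual gap above this bound at all times is what lets the system “gently intervene in advance” rather than slam the brakes at the last moment, as AEB does.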

In other words, it’s a model that imposes human preferences and habits on a machine-based solution, rather than the other way around. As with Toyota, the idea is to merge the two without showing any of the joins. Note how Shalev-Shwartz uses the term “gently intervene” to describe how the pair should work together, not “take control” or “override”.

What TRI and Mobileye are describing are higher levels of ADAS functionality than currently exist; at the moment they sit closer to the horizon of full autonomy than to our present position on the road. Much more capital needs to be spent on research, development and testing if such goals are to be achieved within our lifetimes.
