Robots Still Too ‘Dumb’ to Drive Alongside Humans

Driverless cars are stuck in the lower levels of autonomy because they cannot replicate a human’s instinct for anticipating another driver’s bad behavior.
That’s the opinion of graphics and AI computing specialist Nvidia, explaining to TU-Automotive the effect of what it calls edge-case scenarios that robots have yet to understand. Tim Wong, technical marketing manager at the company, said these edge cases or “corner cases” were limiting the move from Level 2+ autonomy to Level 3 and beyond. He cited the example of a car cutting into a lane on a highway in front of a human driver: a human would usually be able to predict this behavior and prepare to slow down or brake hard, but an autonomous vehicle would be at a loss and would probably cause a collision.
“This is where it gets hard, because you think about being able to predict ‘bad’ human intent, human behavior. That’s what AI computers need to learn and we haven’t figured that out yet,” Wong told us. “If you believe Elon Musk, Tesla will have a Level 3 autonomous vehicle on the road by the end of this year,” Wong added with a chuckle.
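As a rough illustration of the kind of anticipation Wong is describing, the sketch below flags a possible cut-in when a neighboring vehicle drifts toward the ego car’s lane while the gap ahead is closing. The data structure, thresholds and function names are invented for illustration only and do not reflect Nvidia’s software.

```python
from dataclasses import dataclass

@dataclass
class NeighbourTrack:
    """Simplified state of a vehicle tracked in the adjacent lane (illustrative only)."""
    lateral_speed_mps: float     # negative = drifting toward our lane
    longitudinal_gap_m: float    # gap between that vehicle and the ego car
    closing_speed_mps: float     # positive = gap is shrinking

def anticipate_cut_in(track: NeighbourTrack,
                      drift_threshold_mps: float = -0.4,
                      gap_threshold_m: float = 25.0) -> bool:
    """Crude heuristic: flag a likely cut-in when the neighbour is drifting
    toward our lane while the gap ahead is already short and closing.
    Thresholds are invented for illustration, not tuned values."""
    drifting_in = track.lateral_speed_mps < drift_threshold_mps
    gap_tight = track.longitudinal_gap_m < gap_threshold_m and track.closing_speed_mps > 0
    return drifting_in and gap_tight

if __name__ == "__main__":
    # A car one lane over, 18 m ahead, sliding toward the ego lane at 0.6 m/s.
    neighbour = NeighbourTrack(lateral_speed_mps=-0.6,
                               longitudinal_gap_m=18.0,
                               closing_speed_mps=1.5)
    if anticipate_cut_in(neighbour):
        print("Cut-in anticipated: pre-charge brakes and increase following distance")
```

A human driver makes this judgment implicitly; the difficulty Wong points to is that real bad behavior rarely fits such clean thresholds.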
While cars can control themselves and observe what others are doing around them, it is currently not possible for an autonomous vehicle to predict what another human driver will do, something many human motorists can do by drawing on years of driving experience and observation. However, thanks to the array of sensors on autonomous vehicles, Wong said they can actually ‘see’ better than humans, especially in bad weather such as fog, snow or rain.
“Your eyes have a dynamic range but, in the end, they’re just visual, so an IR camera is going to be much better visually than your eyes. If you’re trying to see an object 200 yards ahead in heavy fog, good luck doing that. However, radars are really good at doing that and, with 360-degree sensors, they’re looking in all directions all the time. So, for example, you’re in heavy fog conditions and someone’s about to run a red light. Visually you just can’t see them but the car’s radar should be able to see them and can therefore avoid a crash.”
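A simplified sketch of the fallback Wong describes might look like the following: radar reports are kept regardless of weather, while camera reports beyond the current visual range are dropped as unreliable. The data structures and fusion rule are hypothetical and far simpler than a production sensor-fusion stack.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    """A single object report from one sensor (illustrative structure)."""
    sensor: str     # "camera" or "radar"
    range_m: float  # distance to the detected object

def fuse_detections(detections: List[Detection], visibility_m: float) -> List[Detection]:
    """Keep radar reports regardless of weather; drop camera reports for
    objects beyond the current visual range (e.g. in heavy fog)."""
    fused = []
    for det in detections:
        if det.sensor == "radar":
            fused.append(det)
        elif det.sensor == "camera" and det.range_m <= visibility_m:
            fused.append(det)
        # camera detections beyond visual range are discarded as unreliable
    return fused

if __name__ == "__main__":
    reports = [
        Detection("camera", range_m=180.0),  # barely visible in fog
        Detection("radar", range_m=182.0),   # radar still sees it
    ]
    kept = fuse_detections(reports, visibility_m=60.0)  # roughly 60 m visual range in fog
    print([f"{d.sensor} @ {d.range_m} m" for d in kept])
```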
Furthermore, Wong said that the algorithms used in the machine learning software powering the sensors, radars and lidars were similar between Levels 2 and 3, and often these were algorithms developed for higher levels that had then been applied to lower-level functions. “Level 2 is a way to test Level 4. You can certainly take the same lane-keeping algorithms, vehicle detection algorithms, vehicle distance algorithms and sign recognition algorithms from Level 3, 4 or 5 and bring them down to Level 2.”
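The reuse Wong describes can be pictured as a library of shared perception modules from which stacks for different autonomy levels are assembled. The sketch below is a hypothetical illustration of that idea; the module names and selection logic are assumptions, not Nvidia’s architecture.

```python
from typing import Callable, Dict, List

# Hypothetical shared perception modules; in practice these would wrap trained networks.
def lane_keeping(frame: dict) -> dict:
    return {"lane_offset_m": frame.get("lane_offset_m", 0.0)}

def vehicle_detection(frame: dict) -> dict:
    return {"vehicles": frame.get("vehicles", [])}

def sign_recognition(frame: dict) -> dict:
    return {"speed_limit_kph": frame.get("speed_limit_kph")}

SHARED_MODULES: Dict[str, Callable[[dict], dict]] = {
    "lane_keeping": lane_keeping,
    "vehicle_detection": vehicle_detection,
    "sign_recognition": sign_recognition,
}

def build_stack(level: int) -> List[Callable[[dict], dict]]:
    """Assemble a perception pipeline from the same building blocks;
    a Level 2 driver-assist feature simply runs a subset of what a
    higher-level stack would use."""
    if level <= 2:
        names = ["lane_keeping", "vehicle_detection"]
    else:
        names = ["lane_keeping", "vehicle_detection", "sign_recognition"]
    return [SHARED_MODULES[n] for n in names]

if __name__ == "__main__":
    frame = {"lane_offset_m": 0.3, "vehicles": ["car_ahead"], "speed_limit_kph": 110}
    for module in build_stack(level=2):
        print(module.__name__, "->", module(frame))
```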
Yet, for all the hardware’s advantages, the software’s inability to grasp the behavior of humans consigns fully autonomous driving either to strictly monitored environments, such as dedicated autonomous-only roads or areas, or to some point a considerable distance in the future, when AI has reached a sufficiently sophisticated level of its own.