Driverless cars will have to ‘understand’ human behaviour

In the ongoing quest to develop autonomous vehicles, researchers face a number of challenges relating to object detection, sensor technology, safety, software, reliability and communication.  In seeking to overcome these challenges, many companies have turned to artificial intelligence (AI) applications.

Deep learning

AI applications designed to assist vehicles in interpreting real-time challenges in urban environments are rapidly emerging as a key element in the development of autonomous vehicles.  In recognition of the central importance of such applications, a growing number of organisations are now actively involved in developing AI and machine learning technology for self-driving cars.  These include Toyota, which has established the Toyota Research Institute and entered into an AI research partnership with Stanford University and the Massachusetts Institute of Technology (MIT).

Another interesting example is NVIDIA, which has demonstrated how deep learning can be employed for autonomous vehicle navigation.  The company has shown that such learning can be used to generate a relationship between the appearance of a scene and an appropriate vehicle control input, leading to safe autonomous driving in simple driving scenarios.  NVIDIA also recently launched DRIVE PX2, a deep learning-based computing engine that will power a fleet of 100 Volvo XC90 SUVs as part of the Swedish carmaker's Drive Me autonomous vehicle pilot programme in 2017.  As Praveen Chandrasekar, consulting director and research manager, North America, mobility at Frost & Sullivan, explains, DRIVE PX2 is “essentially a deep learning platform” that accepts input from more than 12 cameras and other sensors in the vehicle and is capable of carrying out up to 24 trillion deep learning operations each second.
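The idea of learning a relationship between scene appearance and a control input can be sketched in miniature.  The toy example below is not NVIDIA's system (which uses deep convolutional networks trained on real driving footage); it simply fits a linear model that maps the pixels of a synthetic camera frame to a steering command, under the assumption that a bright lane line's horizontal position encodes the car's drift:

```python
import numpy as np

# Toy illustration only: learn a mapping from image pixels to steering
# via least-squares regression. Real systems use deep networks, vastly
# more data, and extensive validation.

rng = np.random.default_rng(0)

def synth_frame(lane_offset, size=16):
    """Synthetic 'camera frame': a bright vertical lane line whose
    horizontal position encodes how far the car has drifted."""
    img = np.zeros((size, size))
    col = int(size / 2 + lane_offset * (size / 3))
    img[:, np.clip(col, 0, size - 1)] = 1.0
    return img

# Training data: frames labelled with the steering angle that
# re-centres the car (steer against the drift).
offsets = rng.uniform(-1, 1, 200)
X = np.stack([synth_frame(o).ravel() for o in offsets])  # flattened pixels
y = -offsets

# Fit a linear map from pixels to a steering command.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The learned controller steers left (negative) when the lane has
# drifted right, and right (positive) when it has drifted left.
steer_right_drift = synth_frame(0.5).ravel() @ w   # negative
steer_left_drift = synth_frame(-0.5).ravel() @ w   # positive
```

Even this crude regression recovers the qualitative behaviour — steering against the observed drift — which is the relationship a deep network learns at far greater scale and robustness.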

“In the autonomous vehicle space, AI is being researched to enable the vehicle to interpret challenges on the road like a human would.  The biggest use cases for AI in autonomous vehicles are pedestrian detection, vehicle detection, lane tracking and recognising traffic signs,” he adds.

Key challenges

Elsewhere, Massachusetts-based start-up nuTonomy uses a wide range of AI applications in many aspects of its autonomous vehicle software, from object detection and classification to motion planning.  According to Karl Iagnemma, its CEO, the company is also currently developing the first ever “complete solution” for operating large fleets of autonomous taxis in urban settings.  This includes software for autonomous vehicle navigation in urban environments, smartphone-based ride hailing, fleet routing and management, and controlling a vehicle remotely through tele-operation. 

“nuTonomy has differentiated itself by pioneering technology for motion planning and decision-making that is based on methods that have been successfully employed in the development of spacecraft, airplanes, and other complex, safety-critical autonomous systems,” he says.

Although confident of the long-term potential of AI applications, Iagnemma admits that many large organisations still face a number of key challenges in exploiting their potential.  Chief among these is what he calls their “lack of domain expertise in the core technology that will power the coming generation of autonomous vehicles”.

“This is highly specialised technology, for which there are relatively few experts worldwide.  Unfortunately AI will not be a magic bullet: developing a safe, reliable autonomous vehicle platform will require much more than simply applying a powerful algorithm or two,” he says.

Amnon Shashua, co-founder and CTO of autonomous vehicle technology company Mobileye, which has developed AI vision algorithm-based collision avoidance technology, agrees that such “domain expertise” is likely to remain a key requirement for future success.

“Deep learning is a tool.  It is a modern tool … of the past three to four years, and it is a very effective, modern tool but at the end of the day, it's only a tool.  It's a tool among many, many components in making the system work.  And in order to reach a very high accuracy of 99.99% and so forth that you need in order to have a safe system, that requires a lot of domain expertise, a lot of data, a lot of ingenuity, a lot of innovation.  There are no shortcuts there,” he adds.

Reacting like humans

Looking ahead, Chandrasekar believes that the biggest challenge for autonomous vehicles will lie in their “seamless operation” in both urban and highway conditions and, most importantly, in all-weather performance. 

“If you are purely dependent on the on-board sensors then there is no fail-safe environment and most importantly you need the vehicle to react like humans would to different obstacles on the road, which cannot be determined just using sensors.  So AI is critical to deploy safe automated driving that can seamlessly operate in urban and highway conditions and then in fully autonomous driving, which Google is demonstrating,” he adds.

Meanwhile, Iagnemma predicts that the key trend moving forward will be the realisation that, while AI represents an extremely useful tool to address problems in autonomous driving, AI alone is not sufficient. 

“The community will evolve toward technical solutions that combine AI with other core robotics technologies to result in software systems that are flexible, powerful, and safe,” he says.
