Making sense of future sensor choices

Choosing between smart and dumb sensor systems, explored by Andrew Williams.

Although the capabilities of sensor technology have grown exponentially over the last few years, a lot of fine-tuning remains to be done. So, what are the key advantages and disadvantages of LiDAR, camera-based sensors, radar and stereoscopic sensors? What are the pros and cons of smart sensors with on-board analytics versus ‘dumb’ central-processing sensors? And is there a role for artificial intelligence (AI) in enhancing sensor fusion to create a ‘complete picture’ around the vehicle?

Key technology

For Andrew Whydell, vice-president of product planning, global systems at ZF, the main advantages of LiDAR are its range resolution, angular discrimination and ability to classify objects, offset by its high cost and “poor weather performance” in heavy rain and snow.

Elsewhere, Dr Allan Steinhardt, chief scientist at AEye, believes that LiDAR combines the best of camera and radar capabilities, possessing both the naturalness of regular camera information and the benefit of range. “The disadvantage of LiDAR is that it’s a new technology for consumer products. LiDAR’s legacy is in military and defence, so early consumer solutions were expensive and ill-matched to demanding automotive environmental requirements. Fortunately, there’s a lot of innovation in this space that’s driving improved automotive solutions and lower costs,” he says.

Meanwhile, Austin Russell, founder and CEO at Luminar Technologies, argues that LiDAR is the key piece of technology that will unlock full autonomy: cameras can only provide 2D information and can be hindered by inclement weather or darkness, while radar cannot sense all objects in front of it. “These limitations lead to errors, which is unacceptable when lives are on the line. It’s easy to build an autonomous vehicle that works 99% of the time. The true challenge is to build a system that is 100% accurate. That is only possible through high performance LiDAR, which can see the entire environment in 3D, in real time, through any conditions and at long range,” he says.

Foreign language

As far as other sensors go, Whydell singles out cameras’ ability to classify a wide range of objects, including road signs, but, again, observes that they perform poorly in bad weather. That said, given good light, Steinhardt notes that camera-based sensors generate very sharp images that are natural for the human eyeball to view. “Humans are fluent in vision, whereas the other sensors are like a foreign language that has to be translated. In addition, 99% of all technologies for computer processing and machine learning come from camera data, so camera-based sensors are very mature in terms of the products you can get to do autonomy from the camera,” he says.

“With radar, the sensor can see far away, providing immediate depth information, can see day and night, and can see in fog and rain. The disadvantage is that the images are poor quality and generally blurry, with halo effects and smearing,” he adds.

In Steinhardt’s view, stereoscopic sensors are essentially a research effort to approximate what radar and LiDAR do, enabling depth information, while retaining the fluency that humans have with cameras. “This is a new and not commercially proven technology.  It’s what Elon Musk refers to when he says LiDAR is unnecessary,” he says.

Smart vs. dumb

For Whydell, smart sensors are well suited to basic ADAS applications such as automatic emergency braking (AEB), lane keeping or blind-spot detection, where each function is independent, can be offered separately and represents the lowest-cost solution when only certain areas around the vehicle need to be monitored. However, for automated driving he argues that a 360° view around the vehicle is required to build an environmental model that fully captures the surroundings. “Sensors need to be linked, so that the car can continuously track surrounding objects, such as an overtaking vehicle. Utilising dumb sensors with central processing makes it easier to fuse data from multiple sensors to create the environmental model,” he says.
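To make the centralised, ‘dumb’ sensor approach concrete, the minimal Python sketch below fuses raw detections from several sensors into a single tracked environmental model, so an overtaking vehicle handed from one sensor’s field of view to another remains the same tracked object. The class names, fields and the simple averaging update are illustrative assumptions, not ZF’s actual architecture.

```python
# Minimal sketch of centralised ("dumb" sensor) fusion: raw detections from
# several sensors are merged in one processor into a single track list.
# All names and values here are illustrative, not any vendor's actual API.
from dataclasses import dataclass


@dataclass
class Detection:
    sensor: str      # e.g. "rear_radar", "left_camera"
    x: float         # longitudinal position relative to the ego vehicle (m)
    y: float         # lateral position (m)
    vx: float        # longitudinal velocity (m/s)


@dataclass
class Track:
    x: float
    y: float
    vx: float
    hits: int = 1    # how many detections support this track


class CentralFusion:
    """Keeps one environmental model; every sensor feeds the same track list."""

    def __init__(self, gate: float = 2.0):
        self.gate = gate              # association distance threshold (m)
        self.tracks: list[Track] = []

    def update(self, detections: list[Detection], dt: float) -> None:
        # 1. Predict existing tracks forward (constant-velocity assumption).
        for t in self.tracks:
            t.x += t.vx * dt
        # 2. Associate each detection with the nearest track, or start a new one.
        for d in detections:
            nearest = min(self.tracks, default=None,
                          key=lambda t: abs(t.x - d.x) + abs(t.y - d.y))
            if nearest and abs(nearest.x - d.x) + abs(nearest.y - d.y) < self.gate:
                # Simple averaging update; a real system would weight by sensor noise.
                nearest.x = 0.5 * (nearest.x + d.x)
                nearest.y = 0.5 * (nearest.y + d.y)
                nearest.vx = 0.5 * (nearest.vx + d.vx)
                nearest.hits += 1
            else:
                self.tracks.append(Track(d.x, d.y, d.vx))


# An overtaking vehicle seen first by the rear radar, then by the side camera,
# stays a single track because both detections land in one central model.
fusion = CentralFusion()
fusion.update([Detection("rear_radar", -10.0, 3.0, 8.0)], dt=0.1)
fusion.update([Detection("left_camera", -9.2, 3.1, 8.0)], dt=0.1)
print(fusion.tracks)
```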

Meanwhile, Steinhardt believes the most important advantage of smart sensors is that you can reduce reaction time, and with it braking distance. The disadvantage is that, if you’re not putting all of the data into a central processor, you’re making decisions with limited context. “The key to smart sensors is adding intelligence, by going beyond having a set of rules to actually emulating the behaviour of natural biological systems. The big downsides [of dumb sensors] are both that there is a single point of failure and that the delay in making decisions means a slower reaction time,” he says.

For him, while smart sensors will always look for other smart sensors to collaborate with, hands down the best sensors for AVs will be optical, both active (LiDAR) and passive (camera). “These sensors will lead the way [because] they are the only sensors that leverage existing massive capital asset investments and no other sensor provides optical’s sharpness and precision,” he adds.

Role for AI

Looking ahead, Russell points out that, although we don’t have strong AI that’s as smart as humans yet, the way that computers see and process data is extremely different to the way humans see and process data. “We actually need superhuman data to be able to make up for that huge difference in terms of the way humans think versus the way computers think. One day, maybe, we will be able to get to where we can have self-driving cars that are just as safe as humans with just cameras, but we are many decades away at best,” he says.

Meanwhile, Whydell predicts that AI may one day be able to assist with classifying unusual adjacent objects, or objects in unexpected locations (for example, above the vehicle but descending), and with predicting the future movement of surrounding objects to assist in path planning.
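As a purely illustrative sketch of where such prediction fits into path planning, the Python snippet below extrapolates surrounding objects forward under a constant-velocity assumption and checks whether the ego vehicle’s planned path stays clear. In the AI-assisted case Whydell describes, a learned model would replace this simple kinematic extrapolation; all names and thresholds here are hypothetical.

```python
# Illustrative only: constant-velocity prediction of surrounding objects,
# used to check whether the ego vehicle's planned path stays clear ahead.
from dataclasses import dataclass


@dataclass
class Obstacle:
    x: float   # position along the road relative to the ego vehicle (m)
    y: float   # lateral offset from the ego lane centre (m)
    vx: float  # velocity along the road (m/s)
    vy: float  # lateral velocity (m/s)


def predict(obj: Obstacle, t: float) -> tuple[float, float]:
    """Where the object is expected to be t seconds from now (constant velocity)."""
    return obj.x + obj.vx * t, obj.y + obj.vy * t


def path_is_clear(ego_speed: float, obstacles: list[Obstacle],
                  horizon: float = 3.0, step: float = 0.5,
                  lane_half_width: float = 1.5) -> bool:
    """True if no predicted obstacle position overlaps the ego lane just ahead."""
    t = 0.0
    while t <= horizon:
        ego_x = ego_speed * t
        for obj in obstacles:
            ox, oy = predict(obj, t)
            if abs(oy) < lane_half_width and 0.0 < ox - ego_x < 5.0:
                return False   # predicted conflict within 5 m ahead of the ego car
        t += step
    return True


# A car cutting in from the adjacent lane: clear now, but predicted to conflict.
cutting_in = Obstacle(x=15.0, y=3.5, vx=20.0, vy=-1.5)
print(path_is_clear(ego_speed=25.0, obstacles=[cutting_in]))  # False
```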
