Research Claims Driverless Tech Still Too Easy to Trick

Autonomous vehicles can be easily manipulated into performing undesirable driving behavior through the placement of ordinary objects on the roadside, according to a study from the University of California, Irvine (UCI).

The research team set up a course on the UCI campus to test the reactions of driverless cars running the open-source AD systems Apollo and Autoware after common objects, such as bicycles, boxes, trash bins and traffic cones, were placed on the side of the road. The driverless vehicles responded with erratic driving behavior, including sudden stops, which researchers say could create a hazard or disrupt the delivery of passengers and goods.

Furthermore, the report said this leaves AVs open to physical denial-of-service (DoS) attacks, whereby an object is placed in a vehicle’s path intentionally to cause disruption. “In general, our research suggests the designers and developers of autonomous driving systems should consider both safety and functionality and think about the better trade-off especially given the complexity of driving scenarios,” explained lead author Ziwen Wan, a UCI Ph.D. student in computer science.

One example: the team found that Apollo’s current implementation uses a fixed lateral buffer to avoid collisions, and this fixed buffer can trigger unexpected stops for off-lane static objects when the lane is narrow. That unexpected behavior could be avoided if the safety buffer were instead scaled dynamically with the velocity of the autonomous vehicle.
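The idea can be illustrated with a minimal sketch. This is not Apollo’s actual code or API; the buffer values, the linear speed-scaling rule, and the function names are all illustrative assumptions about how a velocity-dependent buffer might behave:

```python
# Hypothetical sketch of a fixed vs. speed-scaled lateral safety buffer.
# All constants and names are illustrative, not Apollo's implementation.

FIXED_BUFFER_M = 0.5  # assumed fixed lateral clearance, in meters


def dynamic_buffer(speed_mps: float,
                   base_m: float = 0.2,
                   gain_s: float = 0.05) -> float:
    """Return a lateral clearance that grows with speed.

    At low speed the buffer shrinks toward base_m, so a narrow lane
    with an off-lane static object no longer forces a full stop.
    """
    return base_m + gain_s * speed_mps


def must_stop(clearance_m: float, speed_mps: float, dynamic: bool) -> bool:
    """Decide whether the available clearance violates the safety buffer."""
    buffer_m = dynamic_buffer(speed_mps) if dynamic else FIXED_BUFFER_M
    return clearance_m < buffer_m


# Narrow lane: 0.4 m of clearance past a roadside box, vehicle at 3 m/s.
print(must_stop(0.4, 3.0, dynamic=False))  # True: fixed buffer forces a stop
print(must_stop(0.4, 3.0, dynamic=True))   # False: scaled buffer lets it pass
```

With the fixed buffer, 0.4 m of clearance falls below the 0.5 m threshold and the vehicle stops; with the speed-scaled buffer at 3 m/s the threshold is only 0.35 m, so the vehicle can proceed slowly past the object.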

The research team found that existing open-source autonomous driving systems can be tricked into executing overly conservative driving behavior, such as sharp or permanent stops mid-drive. “The attackers could only use some very common roadside objects, such as cardboard boxes,” Wan added. “In one of the demos we created in the simulation, the AD vehicle makes a sharp stop decision and thus lead to a rear-end collision.” He said the team believes these types of problems can undermine both the safety and the efficiency of transportation, the very goals autonomous driving is meant to serve.

The study’s results don’t come as much of a surprise to Pedro Pacheco, an AV technology analyst at Gartner, but he cautioned against extrapolating findings from these two autonomous vehicle platforms to all the other AVs out there. “Extrapolation from a scientific perspective is not the best approach. You need to have a much larger sample size to extract conclusions on the others,” he said. “But I didn’t need to look at this study to come to the conclusion that widespread adoption of autonomous vehicle technology, Level 4 let’s say, is not possible yet.”

Pacheco added that automakers are still in an experimental phase of development as they wrestle with how to train autonomous driving systems to operate in real-world environments, where “new situations” can frequently occur. “It takes time because ideally, if the technology were mature enough, autonomous vehicles would have a level of AI so sophisticated they would be able to generate extrapolations a little bit like a human driver would do,” he said.

To make progress in training AVs to handle these scenarios, Wan said automakers could generate more comprehensive test cases or even test with an attacker in the loop to uncover corner cases that current autonomous driving systems cannot handle appropriately. For such safety-critical systems, formal verification of system behavior may be the next technology needed to provide guarantees on safety and functionality. “To achieve that, we need efforts to formally express the safety and necessary functionality requirements of autonomous driving in complex driving environments and also develop more advanced program verification techniques to enable the proof on such large-scale commercial systems,” he added.
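Attacker-in-the-loop testing of the kind Wan describes can be sketched as a simple fuzzing loop: randomly place a roadside object, query the planner, and record placements that trigger an unnecessary stop. The planner below is a stand-in stub, not Apollo or Autoware, and every constant and name here is an assumption made for illustration:

```python
# Hypothetical attacker-in-the-loop fuzzing sketch. The "planner" is a
# stand-in stub, not a real AD system; the loop shows how random object
# placements can surface stop-inducing corner cases.
import random

LANE_HALF_WIDTH_M = 1.5  # assumed half-width of the ego lane
FIXED_BUFFER_M = 0.5     # assumed fixed lateral safety buffer


def planner_stops(object_lateral_offset_m: float) -> bool:
    """Stub planner: stops if an off-lane object sits inside its buffer."""
    clearance = abs(object_lateral_offset_m) - LANE_HALF_WIDTH_M
    return 0.0 <= clearance < FIXED_BUFFER_M


def fuzz_placements(trials: int = 1000, seed: int = 42) -> list:
    """Randomly place one roadside object per trial; collect offsets that
    make the stub planner issue an unnecessary stop."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        offset = rng.uniform(1.0, 4.0)  # meters from lane center
        if planner_stops(offset):
            failures.append(offset)
    return failures


cases = fuzz_placements()
print(f"{len(cases)} of 1000 placements triggered an unnecessary stop")
```

In a real pipeline the stub would be replaced by a closed-loop simulation of the AD stack, and the random search by a guided one, but the structure of the loop is the same: perturb the scene, observe the planner, and log the inputs that produce overly conservative behavior.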

He pointed out that the possible driving scenarios an autonomous driving system may face are effectively unlimited. “But we believe with joint efforts from different domains, this goal can be achieved in the future,” Wan said. “As mentioned, modeling the specification and doing the verification could be one possible approach to get low-risk, high-functionality systems. I’d also highlight some recent efforts in the AI and machine learning community to develop better decision-making models.”

Malicious actors, or even pranksters, purposefully trying to throw off AVs is something automakers will have to take into consideration, Pacheco noted, pointing to past examples where reflective tape affixed to road signs has stymied self-driving vehicles. “At the same time, there are situations, including these types of tricks, that could fool a human driver as well,” he said.

Pacheco also noted that malicious actors are just one of several obstacles autonomous driving systems must overcome before they’re widely deployed across cities. “First of all, autonomous vehicle technology needs to be able to operate in a broad number of environments with zero disengagement and zero accidents,” he said. “This will be the key thing and will mark the turning point in the evolution of autonomous vehicles.”
