The fatal crash of a Tesla Model X in Autopilot mode last month has renewed questions about the feature and the way some drivers may be using it.

A Model X SUV crashed into a freeway lane divider near Mountain View, Calif., on March 23, killing the driver, Walter Huang, 38. At the time of the crash, Autopilot was engaged and Huang’s hands weren’t on the wheel, Tesla wrote in a blog post on Friday, March 30.
This is the second time Autopilot has come under scrutiny after a fatal accident, and the Mountain View crash came just days after a self-driving Uber vehicle struck and killed a pedestrian in Tempe, Ariz., an incident that has rocked the autonomous car industry. Accidents involving advanced driver-assistance features and driverless cars have heightened concerns about the fledgling technologies, both of which are designed to reduce collisions and road fatalities.
Among other things, Tesla’s Autopilot can brake in emergencies, steer a car by following lane markers and use adaptive cruise control to stay a certain distance behind the car ahead. It isn’t intended to be a self-driving system, and the company warns drivers to keep their hands on the wheel and their eyes on the road while using it.
The cars alert drivers whose hands have been off the wheel for too long.
But critics say Autopilot leads some drivers to believe the car can fully drive itself. Tesla states at the top of its Autopilot webpage that it offers “Full Self-Driving Hardware on All Cars,” though it warns further down that drivers are responsible for remaining alert and active. Some critics have said Tesla should change Autopilot’s name so drivers won’t treat it like a self-driving system. In addition, Huang had reportedly complained to Tesla several times that Autopilot caused his car to veer toward the very divider where the crash took place.
“We’ve been doing a thorough search of our service records and we cannot find anything suggesting that the customer ever complained to Tesla about the performance of Autopilot,” Tesla said in a statement last week.
The March 23 crash occurred at 9:27 a.m. on U.S. Highway 101 in Silicon Valley, according to Tesla. The company first commented on the incident four days later; it then said it had retrieved the computer logs from inside the car and disclosed more information about what happened, drawing criticism from the National Transportation Safety Board (NTSB), The Washington Post reported. A Tesla representative declined to comment.
The logs show that Autopilot was engaged, with the adaptive cruise control set to follow the vehicle ahead at the minimum distance, Tesla said. The driver’s hands weren’t on the steering wheel in the six seconds before impact, though the car gave several visual alerts and one audible alert telling the driver to put his hands on the wheel. In the final five seconds and 150 feet before hitting the divider, the driver took no action, Tesla said.
The company said the accident might not have been as severe if the crash attenuator, a barrier in front of the concrete divider that is designed to absorb part of an impact, had not been crushed in a prior accident and left unreplaced.
But the incident refreshed memories of two other crashes involving Teslas on Autopilot. In May 2016, Joshua Brown died when his Model S ran into a truck crossing the road ahead of him in Florida. Last year, federal regulators found that Brown hadn’t been watching the road but that the system had performed as designed. In January, a Model S ran into a fire truck stopped on a freeway in Southern California. No one was injured.
Tesla defended Autopilot in its blog post last Friday, pointing out that after investigating the 2016 crash, the National Highway Traffic Safety Administration found that collisions involving Teslas had declined by 40% after the company introduced Autopilot through a software update in 2015. The system has become more reliable since that initial version was released, Tesla said.
— Stephen Lawson is a freelance writer based in San Francisco. Follow him on Twitter @sdlawsonmedia.