Waymo Set for Public Driverless Ride Service This Year

Waymo’s driverless ride service in Arizona is still a go for this year, thanks in large part to Google machine-learning technology, CEO John Krafcik said in a keynote presentation at the Google I/O event.

The autonomous driving company, spun off from Google in 2016, has drawn on its former parent's AI advancements to make its cars better at predicting what other vehicles will do and more reliable at detecting pedestrians, Krafcik said as he took the stage during the opening session of this week's massive three-day developer event.

The Waymo Driverless Transportation Service will kick off in the Phoenix area this year and will be open to the general public, Krafcik said. The company has made this pledge before, and has been giving free rides to selected consumers since last year, but it’s hard to count on predictions in the volatile world of autonomous vehicles.

Having Google as a partner in what is now parent company Alphabet has given Waymo an edge in getting cars on the road, Krafcik said.

For example, in 2013, the Google self-driving group that later became Waymo was looking for a breakthrough in pedestrian detection. It started working with the Google Brain team, which was developing a form of machine learning, and applied that to the problem. Within months, its pedestrian detection was 100 times more accurate, Krafcik said. Now, Waymo’s cars can identify people of all sizes, shapes and postures on a street, including workers getting out of manholes or carrying large boards across the street, he said.

Pedestrian detection was a timely subject on Tuesday, following a news report on rival Uber’s probe into a fatal crash involving one of its self-driving cars. It’s linked to a tricky problem in autonomous driving: ensuring a car responds correctly to real dangers while preventing it from stopping every time an object crosses its path.

Uber has reached a preliminary conclusion that the Uber SUV that struck and killed a pedestrian in Arizona on March 18 probably saw her but didn’t respond correctly, according to a story in The Information.

The accident in Tempe probably wasn’t caused by a failure of the vehicle’s perception system, according to the story, which cited unnamed sources familiar with the probe. Investigations by Uber and the National Transportation Safety Board are continuing, and Uber has said it is cooperating with the NTSB.

The crash, which claimed the life of Elaine Herzberg, 49, was believed to be the first fatal accident involving a prototype autonomous vehicle. Herzberg was walking her bike at night across a wide, lit street when the Volvo XC90 SUV struck her at almost 40 mph. The incident shook the AV industry, led to calls for tighter regulation and caused Uber and others, including Toyota and Nvidia, to halt real-world testing.

Autonomous vehicles and advanced driver assistance systems use a variety of sensors to detect what’s around the car, sensor fusion software to combine inputs from all those systems, and decision-making software to respond to what’s detected. Uber found that the software may have chosen the wrong response because of the way it was programmed to ignore false positives, according to The Information.
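That pipeline can be illustrated with a toy sketch. This is not Uber's or Waymo's actual software; the class names, the averaging fusion step and the confidence threshold are all simplified assumptions made purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "vehicle", "debris"
    confidence: float  # fused confidence from all sensors, 0.0-1.0

def fuse(camera_conf: float, lidar_conf: float, radar_conf: float) -> float:
    # Toy sensor fusion: average the per-sensor confidences.
    # Real systems weight sensors by conditions and track objects over time.
    return (camera_conf + lidar_conf + radar_conf) / 3

def decide(detection: Detection, threshold: float = 0.5) -> str:
    # Decision layer: brake for obstacles detected above the threshold,
    # ignore everything else as a presumed false positive.
    if detection.label in ("pedestrian", "vehicle") and detection.confidence >= threshold:
        return "brake"
    return "continue"
```

The point of the sketch is the last branch: anything the decision layer classifies below its threshold, or as a harmless object, never triggers a response, which is exactly where a mis-tuned system can ignore a real hazard.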

False positives can be a problem for AVs meant to carry riders, especially consumers, because in some cases they can cause cars to brake too often and lead to a choppy ride. The more accurately a car can tell a pedestrian from something it doesn't have to stop for, the smoother and more natural the ride can be.
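The trade-off can be shown with a few lines of illustrative Python; the confidence values and thresholds here are invented for the example, not drawn from any real system:

```python
def should_brake(confidence: float, threshold: float) -> bool:
    # Brake only when the detection confidence clears the threshold.
    return confidence >= threshold

# Hypothetical detections: two likely false positives and one real pedestrian.
detections = [0.2, 0.3, 0.9]

# A lenient threshold brakes for everything, producing a choppy ride.
brakes_lenient = sum(should_brake(c, 0.1) for c in detections)   # 3 stops

# A very strict threshold rides smoothly but ignores the real pedestrian.
brakes_strict = sum(should_brake(c, 0.95) for c in detections)   # 0 stops
```

Tuning sits between those extremes: high enough to smooth out phantom braking, low enough never to dismiss a genuine hazard.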

At Google I/O, Waymo also touted its progress against another major challenge for AVs: dealing with snow. Falling snow often inundates sensors with tiny objects to detect, but Waymo's software can filter out that data while still identifying important objects like cars, said Waymo CTO Dmitri Dolgov.
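The idea behind such filtering can be sketched naively. The point format, intensity values and cutoff below are assumptions for illustration only; Waymo's actual approach is not public and is certainly far more sophisticated:

```python
# Hypothetical lidar returns as (x, y, intensity) tuples. Falling snowflakes
# tend to produce isolated, weak returns, while solid objects like cars
# produce dense clusters of strong ones.
points = [
    (1.0, 2.0, 0.05),   # likely snowflake
    (5.0, 5.0, 0.90),   # car body
    (5.1, 5.0, 0.85),   # car body
    (9.0, 1.0, 0.04),   # likely snowflake
]

def filter_snow(points, min_intensity=0.2):
    # Naive filter: drop weak returns likely to be falling snow,
    # keeping the strong returns that correspond to real obstacles.
    return [p for p in points if p[2] >= min_intensity]
```

Running `filter_snow(points)` keeps only the two car-body returns, discarding the snowflake noise while preserving the object the car must not hit.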
