Ford Seeks a Universal Language for Driverless Vehicles

Ford Motor is calling on autonomous vehicle manufacturers and developers to share ideas for what could become a standard way for driverless cars to communicate with other road users.

The company is expanding its testing of a light bar placed above an AV’s windshield that signals whether the car is driving, yielding, or beginning to accelerate from a stop. The light bars will be added to development vehicles that Ford will operate with its partner Argo AI in Miami-Dade County, Florida, and they will also be tested in Europe, a company executive wrote in an October 2 Medium post.

But Ford also wants to bring the rest of the AV industry into the conversation over how self-driving cars should communicate what they’re doing. It invited developers to share ideas to create a global industry standard – one that might, or might not, look like Ford’s own solution.

Testing of would-be driverless cars has brought to light how much human communication it takes to peacefully share space on the road. Drivers, pedestrians, cyclists, scooter riders and other road users routinely rely on eye contact and gestures to work out when it’s safe to cross the street or continue through an intersection. To prevent accidents or awkward stalemates at every cross-street, AVs need a way to communicate intent.

Anything that increases communication between pedestrians and early AV prototypes might also help to narrow the trust gap that companies face as they try to win acceptance of driverless cars. Surveys have shown consumers are wary about the safety of AVs, and in some cases growing more wary.

Ford isn’t the only company working on this. Drive.ai, a startup testing self-driving vans in Frisco, Texas, puts large LED screens on all sides of its vehicles to display messages such as “Crossing” and “Waiting.” Jaguar Land Rover has fitted an AV with giant eyes that can turn toward a pedestrian to show that the vehicle has seen them.

The light bar that Ford has been testing is simpler than words on screens and might be easier to implement than robotic eyes on every vehicle. It uses just three lighting patterns: a steady light to show that the car is continuing forward, a slow side-to-side sweep to show that it’s stopped and yielding to others on the road, and rapid flashing to show that it’s starting to go again.
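The appeal of the scheme is that it reduces an AV’s externally visible state to a three-value signal. As a rough illustration only (the state names and pattern labels below are hypothetical, not Ford’s implementation), the mapping could be sketched like this:

```python
from enum import Enum

class Intent(Enum):
    """Three externally signaled vehicle states, per the article's description.
    Names and pattern labels are illustrative assumptions, not Ford's API."""
    DRIVING = "steady"            # continuing forward: solid light
    YIELDING = "slow_sweep"       # stopped and yielding: light sweeps side to side
    ACCELERATING = "rapid_flash"  # starting to move again: rapid flashing

def light_pattern(intent: Intent) -> str:
    """Return the light-bar pattern name for a given vehicle intent."""
    return intent.value

print(light_pattern(Intent.YIELDING))  # slow_sweep
```

The small, fixed vocabulary is the point: pedestrians only need to learn three patterns, which is consistent with Ford’s finding that people picked them up after a few exposures.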

In real-world and virtual-reality testing in the US, Ford found that pedestrians figured out what all three signals meant after seeing them just a few times. The upcoming tests in Europe will help determine whether the signals are just as understandable to pedestrians from other cultures, wrote John Shutko, Ford’s human factors specialist for self-driving vehicles.

The company is also asking anyone committed to deploying Level 4 AVs – ones that can perform all driving tasks under certain conditions or in certain areas – to help develop an industry standard for communicating intent. Through a memorandum of understanding, they can get access to what Ford has learned so far, the company said. It’s already shared the scenarios used for its virtual-reality tests with some companies and universities.

The work is going on in parallel with efforts by the Society of Automotive Engineers (SAE) and the International Organization for Standardization (ISO), Ford said. SAE is the group that defines the levels of automated driving, from Level 0 (no automation) to Level 5 (fully automated driving anywhere, under any conditions).

While governments and formal standards organizations are working on definitions and rules for self-driving cars, some automakers and technology companies are trying to build consensus among themselves in advance. Volkswagen is reportedly talking with more than a dozen companies to reach an agreement on using the same collection of sensors and software for AVs, with the aim of sharing liability in case their vehicles get into accidents.

— Stephen Lawson is a freelance writer based in San Francisco. Follow him on Twitter @sdlawsonmedia.
