Winning consumer trust could be driverless technology's biggest roadblock

As you read this, at least one carmaker is likely testing its new sensor-laden self-driving prototype, while many more are seeking government licences for public trials or, perhaps, just chalking out a roadmap to achieve Level 4 autonomy before 2020.

There’s a scramble among automakers to hit the 2020 deadline, mainly because of stiff competition from disruptors like Google, Uber and Tesla.

For optimists, the deadline is achievable because the technology itself will pose few challenges, thanks to the declining cost of LiDAR and rapid advances in sensor data processing.

What remains is the daunting task of winning consumers’ trust in a technology designed to improve traffic flow but capable of killing pedestrians along the way. As things stand today, the majority of consumers are still wary of autonomous driving, despite a natural inclination towards ‘semi-autonomous’ or ADAS features.

A recent Nielsen study claims that 67% of young American drivers find the idea of “being driven” dangerous. At CES 2016, Volvo unveiled the results of a year-long conversation with 10,000 people across the world, in which 92% of respondents expressed a preference to remain in control of the vehicle at all times.

These findings are consistent with other available market survey results.

Consumer apprehensions about delegating driving control

Gail Gottehrer, a partner at law firm Axinn, explained: “Recognising that complex technologies and artificial intelligence are at work, drivers are apprehensive about potential glitches with those technologies and what would happen if such a glitch occurred while they were in the autonomous vehicle.”

She further added that drivers are concerned about how autonomous vehicles will handle emergencies and other unforeseen situations for which human drivers have been responsible up until now. Gottehrer referred to the ‘trolley problem’, a well-documented brainteaser of historical importance that is now finding new life in every discussion about autonomous vehicles.

What happens if an autonomous vehicle has to choose between hitting an old man or a group of children? Either way, it will end up hitting someone.

Realists argue that situations like this are rare and that no decision can be rendered ethically right, whatever the capabilities of a robotic car or even a human driver. Still, such scenarios only heighten consumers’ scepticism about forfeiting driving control.

A bigger turn-off for consumers is the inability of self-driving systems to emulate the social skills of human drivers. Unlike humans, who communicate through subtle eye contact, hand gestures or a light honk to anticipate each other’s movements, autonomous systems aren’t programmed to interpret these forms of non-verbal communication.

Recently, Google’s self-driving prototype (a Lexus SUV) collided with a public bus while trying to dodge a pile of sandbags. The self-driving car, and its test driver, assumed the bus would let them in, while the bus driver assumed, quite rightly, that the car should wait to merge. What happened next caused Google, for a change, to say: “We are responsible.”

Legal ramifications of autonomous vehicles

Who is responsible when an autonomous vehicle crashes – the automaker, the software provider or the person who isn’t driving? This much-debated question is baffling senators, lawyers and automakers alike, and it has profound implications for the overall social acceptance of autonomous driving.

Existing laws hold that if anything goes awry, it is the operator who should be deemed liable, because only he or she is responsible for the vehicle’s safe conduct on the road. But what happens if you aren’t the operator and aren’t behind the wheel? Or are dozing with the self-driving system engaged, as a Tesla Model S owner reportedly did recently?

As we move from partial to full autonomy, decision-making will shift from the driver reacting instinctively to the machine reacting algorithmically. With this, liability may also shift to the vehicle manufacturer or a Tier-1/2 supplier it works with.

This creates a tricky situation for both carmakers and regulatory bodies. Automakers, on the one hand, insist that drivers must always be ready to assume control of the vehicle; on the other, they are planning a leap from Level 2 to Level 4 autonomy. In short, they are selling consumers the illusion of ‘control’ while taking more and more of it away.

Legislators believe that unless automakers assume some responsibility for crashes, as Volvo has, global legislation will remain a distant dream. What’s worse is the situation in the US, "the most progressive country in the world in autonomous driving", according to Hakan Samuelsson, Volvo’s CEO, where inconsistent state-by-state laws are further slowing the mass roll-out of this technology.

For example, the recently introduced Michigan bill permits the driving system to be considered the ‘operator’ of the vehicle, whereas in New York a 1971 law requiring drivers to keep at least one hand on the steering wheel still governs.

Legislation like that introduced in Michigan will provide clarity to both carmakers and consumers. “If different states have different regulations, it will complicate the process of designing autonomous vehicles and make it less likely that consumers who drive in multiple states will purchase autonomous vehicles,” Gottehrer cautioned.

With experience, comes trust

Consumers are visibly getting more comfortable with entry-level ADAS applications like adaptive cruise control and parking assistance, which should be good news for automakers releasing these features in a piecemeal fashion.

But automakers will have to find a way to let drivers retain some degree of control along the way, perhaps by allowing them to customise aspects of the technology to their requirements. They also need to convince customers that they would never build a product detrimental to their interests.

Finally, to instil confidence, safety studies of autonomous technologies need to be widely publicised and made readily available to consumers.

As for the next logical steps, Gottehrer added that automakers must show that these technologies have been extensively tested and are more effective at preventing accidents than human drivers. “This will help increase confidence in autonomous vehicle technology and reduce the fear of the unknown that currently prevents many people from considering purchasing or riding in an autonomous vehicle,” she concludes.

Trust is attainable, but it is fragile, and it can be broken by more incidents like those Google and Tesla have run into.