Smashing Through Consumer Barriers to Driverless Cars

Ways to get consumers prepared for bad autonomous news, explored by Andrew Williams.

As the autonomous vehicle sector gathers pace, developers around the world are becoming increasingly aware of the need for effective communication with consumers, especially as such concerns grow more salient in light of sensational or ‘fake’ news relating to the sector. So, what are the best strategies for autonomous vehicle companies to adopt in order to proactively and clearly communicate the benefits of autonomy?

Emphasizing the human role

Berkeley Dietvorst, assistant professor of marketing at the University of Chicago Booth School of Business, said emphasizing the human’s role in the car might be a very helpful strategy for companies in this context.  In this light, instead of communicating the fact that autonomous cars completely drive themselves, he believes companies might be better off telling consumers that the car will “enhance their ability to drive safely”, an assessment he thinks is accurate for autonomous vehicle levels one through three, where a human driver is required.

“Even for level four cars, companies could express that the consumer still has control over the car if they want it – and if this is accurate. If consumers feel that they still have control over the car but they can delegate that control, they might be a lot more comfortable and willing to try it,” he says. “In other words, I don’t think that consumers need complete control over a car in order to feel comfortable. For example, they have accepted automatic transmissions and automatic braking, but some people might need the ability to take control when they want to in order to feel comfortable – at least at first.”

A ship without a rudder?

For Bertram Malle, professor in the department of cognitive, linguistic and psychological sciences at Brown University, the best way to find out whether the distinctions between the various terms for autonomous vehicles matter would be an empirical study: present people with a fictitious report about the success or failure of a vehicle labelled either an autonomous car, a self-driving car or a driverless car, then assess their impressions and evaluations.

“My hunch would be that ‘autonomous’ has some negative connotations relating to independent decisions and not [being] responsive to others, and so does ‘driverless’ – perhaps meaning something is missing or it is rudderless,” he says.

More generally speaking, Malle also says we know that human perception has not evolved to process broad statistics and probabilities; the mind instead reacts very strongly to individual cases. In recognition of this fact, he says that rather than promise abstract accident reductions, companies would be better served by seeking to counteract the inevitable cases of AV failure with cases of tangible AV success.

“For example, with cases like the disabled adult who can now take a job a little further away, or the child in a rural area who can go to her school of choice,” he says. Returning to the terminology itself, Dietvorst observes that the current labels share a weakness: “It is worth noting that all of them sound like they have no place for the human, or do not invite human intervention. Coming up with an alternative word that suggests human-machine collaboration might be helpful,” he adds.

Building ‘calibrated trust’

One potentially useful approach that Dietvorst has noticed is the simple addition of one feature at a time, such as automatic braking or lane assist. “Consumers might feel more comfortable with these vehicles if they have had positive experiences with some of these features,” he adds.

When it comes to preparing for worst case scenarios, Malle again believes that empirical studies in advance of such events might help gauge what people’s responses will be. “Right now, people have a lot of uncertainty and incredulity and hear promises about a safer future, but few of them have any relevant experiences. So, people swing between overtrust, as with some Tesla drivers, and undertrust, as with people who are afraid even to engage their auto-park function,” he says. Ultimately, he argues that calibrated trust in truly novel technology can only build on the basis of several factors: credible information that answers the questions people have; visible success based on statistics and test runs that people can observe; personal experience, for example in an AV taxi; and noted social acceptance by others. “Humans are deeply social: the more people accept the new technology, the more quickly others will follow. I predict that early adoption and use will be relatively slow, then it will accelerate,” he adds.
