Allowing Consumer Intervention Could Engender AV Trust

Efforts to help automakers build consumer trust in fully autonomous vehicles have begun with best-practice guidelines that allow passenger intervention.
Many surveys have found that most potential automated ride-hailing consumers would sooner trust a human driver than a robotic one. It’s this distrust of the technology that the Automated Vehicle Safety Consortium (AVSC) is attempting to address with its recommended best-practice guidelines.
Speaking to TU-Automotive, AVSC director Ed Straub said gaining consumer confidence in the technology is vital if it is to succeed in fully automated vehicle environments. He said: “The main point is that we are focused on SAE Level 4 and Level 5 vehicles where the passenger has no role in supervising the vehicle’s performance. This best practice is for if a passenger perceives an emergency based on what they think an emergency is.”
He envisages a scenario where, at least in the near future while the technology is still in its infancy, manufacturers will allow passenger intervention through either a mechanical stop feature or an emergency call. Straub said: “The assumption is that when the feature, either a stop or an emergency call, is initiated we are assuming the passenger has perceived an emergency and we are not making any determination or assumption of second-guessing that passenger. The point of doing that is to give riders a sense of agency so that when they are in these vehicles that may not have traditional steering wheels or pedals, they have a sense of control. Therefore, they are more likely to trust it and more likely to ride in it.”
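Purely as an illustration of that no-second-guessing principle, the sketch below accepts every activation unconditionally and dispatches to either a stop or a call. The names and structure are assumptions made for this example, not anything the AVSC has specified.

```python
from enum import Enum, auto

class Intervention(Enum):
    EMERGENCY_STOP = auto()
    EMERGENCY_CALL = auto()

def handle_passenger_intervention(kind: Intervention) -> None:
    """Treat every activation as a genuine emergency; per the best practice
    described above, the system never tries to validate or second-guess
    the passenger's perception."""
    if kind is Intervention.EMERGENCY_STOP:
        # Bring the vehicle to a complete stop (how is left to the manufacturer).
        print("initiating controlled stop")
    else:
        # Connect the passenger to a human who can assess the situation.
        print("opening voice link to fleet operations")

handle_passenger_intervention(Intervention.EMERGENCY_STOP)
```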
Human control
He added that consumers will demand some ability to control the vehicle in emergency situations beyond what they currently require when riding a bus or flying in an aircraft. He explained: “We are still learning a lot about how people trust and use automated vehicles. With the examples mentioned, the difference is that if someone is riding a bus, there is another human controlling it. The passenger has an intuitive sense of what that human will react to and how they will behave. With an automated vehicle we are being asked to trust the machinery and its algorithms and we have seen in studies that people are a little apprehensive about that.”
Straub recognizes that allowing intervention can expose the automated vehicle to occasional misuse, with a knock-on effect on surrounding driverless vehicles, which will then have to react to the stopped vehicle’s unscheduled human input. He said: “We have a couple of recommendations in the best practice that address misuse and abuse – misuse being the unintentional incorrect use of the technology through a lack of understanding and abuse being intentional incorrect usage. We think education will have a strong role to play in this. We think consistency with regard to the placement, design and labeling in these vehicles will be helpful in creating social norms.
“We think a lot of these vectors will develop over time. You don’t see people very often pulling fire alarms in movie theaters when they could even though there’s lots of drunk people there. You don’t see people engaging emergency stop on trains or even in elevators. So, we think that consistency in deployment will see the using public get used to these norms.”
Engendering trust
Straub said the guidelines were not intended to direct how a manufacturer should allow its vehicle to react to human intervention but to open a discussion about the need to gain consumers’ trust. He said: “It’s important to remember these are two different aspects of engagement with emergency call and emergency stop, both of which we recommend manufacturers install. So, if we focus on the stop function, the recommendation we made is that once the ‘button’ gets pushed the vehicle comes to a complete stop. We didn’t think we could define how that stop would be carried out because of the variation in each manufacturer’s operational design domain.
“Once the vehicle is stopped, it can’t be allowed to restart until someone in fleet operational management restarts it. They would first have investigated the emergency, and our recommendation is for an enhanced diagnosis. This means that after the emergency stop is initiated, the cabin would be illuminated and someone in the control center would look at the tape. In this way the person who made the emergency stop is not going anywhere until someone has assessed the situation. That could be one way of mitigating the situation.”
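That stop-and-restart flow can be pictured as a small state machine in which the vehicle, once stopped, refuses to move until fleet operations clears it. The sketch below is a hypothetical reading of the quote above; the states, the `cleared_by_fleet_ops` check and all names are assumptions for illustration, not an AVSC specification.

```python
from enum import Enum, auto

class VehicleState(Enum):
    DRIVING = auto()
    EMERGENCY_STOPPED = auto()  # stopped, cabin lit, awaiting control-center review

class AutomatedVehicle:
    def __init__(self) -> None:
        self.state = VehicleState.DRIVING
        self.cabin_lights_on = False

    def emergency_stop(self) -> None:
        """Passenger-initiated stop: halt and begin the enhanced diagnosis."""
        self.state = VehicleState.EMERGENCY_STOPPED
        self.cabin_lights_on = True  # illuminate the cabin for the camera review
        print("stopped; control center notified for review")

    def restart(self, cleared_by_fleet_ops: bool) -> None:
        """Only fleet operational management may clear the vehicle to move again."""
        if self.state is VehicleState.EMERGENCY_STOPPED and cleared_by_fleet_ops:
            self.cabin_lights_on = False
            self.state = VehicleState.DRIVING
            print("cleared by fleet operations; resuming")
        else:
            print("restart refused: awaiting fleet operations review")

av = AutomatedVehicle()
av.emergency_stop()
av.restart(cleared_by_fleet_ops=False)  # the passenger cannot restart the vehicle
av.restart(cleared_by_fleet_ops=True)   # only the control center can
```

The key design point mirrored here is the one-way lockout: the vehicle, not the passenger, enforces that nobody goes anywhere until a human has assessed the situation.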
He said humanizing the automated vehicle experience is one way to gain the public’s trust until the technology becomes a familiar part of daily life. Straub explained: “The best practice is trying to engender trust with the users and recognizes that there will be a range of implementations of a feature like this. So, manufacturers may decide that a stop will not deal with this but will use a call feature so that they can have someone look at the situation before anything is allowed to happen. It will be very use-case specific, like all ADAS features.
“We are talking about a human-perceived emergency, so allowing them to have a degree of control in a situation they perceive as an emergency and connect to a human will, we believe, engender trust and make people more likely to use the technology.”
— Paul Myles is a seasoned automotive journalist based in London. Follow him on Twitter @Paulmyles_