Hey Bot, what do you know about reasonable driving behaviour?

The ethical challenges of AI in a driverless future, investigated by Eric Volkman.

It’s a science-fiction trope as old as the genre itself – a sentient machine (robot, spacecraft, Skynet, etc.) seizes power, and begins to make life miserable for the humans at its mercy. A hero must be drafted in order to defeat this evil robot intelligence and save humanity.

Ever since we’ve dreamed of implementing artificial intelligence (AI) in some facet of our lives, we’ve conjured such nightmare scenarios. In order to function like a human being, a machine or piece of software has to think like a human being; it’s imperative that it make the right ethical decisions in order to serve its human clients.

The importance of ethics in AI is particularly acute when it comes to driverless cars, as TU-Automotive’s interview with Continental CEO Dr Elmar Degenhart, Robots will make fatal mistakes, Continental concedes, highlighted. After all, when hardware and software take the wheel, displacing flesh-and-blood operators, at some point they’ll have to make ethical decisions.

“There are many inter-related pressing ethical concerns centred around public safety,” says Jason Borenstein, director of Georgia Tech’s graduate research ethics programme and associate director of its Centre for Ethics and Technology. “For example, how will passengers interact with the AI system controlling a car and will the designer of the AI be able to predict their behaviour reliably? What happens if passengers try to take over the car’s operation when they should not, or fail to take over when they are supposed to? Will the car be able to detect the various entities that it will confront on the roads (children, bicyclists, small animals, stationary objects, etc.), especially in difficult weather conditions?”

Two ethical dilemmas often cited by those involved in vehicle tech are the trolley problem and the tunnel problem. The former is a situation in which a theoretical trolley barrelling down the tracks is on course to hit five people tied to them. The operator has the option of pulling a lever that will switch the trolley to a new set of tracks… but a single person is tied to that track and will die if this path is chosen. In the tunnel problem, a variant of its trolley cousin conceived by philosopher Jason Millar, a car is travelling along a mountain road leading into a tunnel. A child darts in front of the car at the tunnel’s entrance, leaving the driver with two options – continue and kill the child, or swerve, causing a certain accident that would likely result in the driver’s death.

It’s not only traffic-related ethical dilemmas that will have to be addressed in the realm of automated driving. Jeffrey Gurney, an attorney who has written extensively on the challenges facing automobile AI, points out that “another interesting ethical question is the extent of privacy the passengers of autonomous vehicles should have. Can OEMs sell your shopping preferences to other companies to assist in their advertising? For example, can my Google Car let Starbucks know that I typically stop for a coffee in the morning prior to work so that Starbucks can send me a targeted advertisement each morning?”

All of this leads to a thorny overarching question: who or what should determine the ethical standards that a self-driving car will follow? A country could go the leading-from-the-top route of Germany, which in June last year saw its ministry of transport and digital infrastructure draft a 20-point code of ethics for driverless vehicles, as part of a broader report on assisted and autonomous driving. Earlier, the US department of transportation issued a 15-item safety checklist for semi- and fully automated cars that covered some of the territory addressed in the German effort.

Ethical decisions could be left up to the manufacturers, with good old market forces dictating whether they are acceptable or not – customers would effectively decide with their wallets.

Professor Shannon Vallor, a philosopher and ethicist at Santa Clara University in Silicon Valley, believes there shouldn’t be a single entity lording it over the process. “At some point I think that manufacturers and software providers will need to form an industry-government partnership that seeks shared ethical and safety standards and oversight procedures, with public input from other stakeholders, much as we have seen be remarkably effective in ensuring air travel safety,” she said.

Tae Wan Kim, an assistant professor of business ethics at Carnegie Mellon University in Pennsylvania, broadly agrees. “The best is a bottom-up approach in which all stakeholders deliberate together to create voluntary compliance codes,” he said. “By doing so, industries can obtain legitimacy from governments and customers.”

“The stakes for the public are simply too high to leave these standards up to individual companies, but the industry must play a central part in the process of forming the standards,” Santa Clara University’s Vallor stated. “At the end of the day it’s in their interest to have a clear regulatory path forward and a strong safety/ethics regime that can earn and sustain the public’s trust in this technology.”

