Trying to Apply Ethics to Driverless Technology

Nearly a year ago the European Commission published a report on the ethics of data and artificial intelligence in connected and autonomous vehicles (CAVs).
Part of the report’s discussion covers data and artificial intelligence (AI) ethics, offering insight into the data protection issues automakers are expected to overcome. According to Jamie Witton, associate at law firm Bristows, the report’s recommendations address the implications of technology already in use by CAVs on the road today. This should enable vehicle manufacturers to audit their current practices.
“The report’s aim is to provide a set of ethical considerations to facilitate the transition to automated driving in all its guises – the five main levels of automation,” explains Witton, who says most of the problems uncovered by the report are well established. He claims its perspective takes a responsible-innovation approach, differentiated from a technology-solutionist one.
In other words, the solutionist view assumes technology will solve the problem by itself, which can lead policymakers to think they don’t need to do anything. He explains: “For example, safety dilemmas are addressed in Chapter 1, and the report’s recommendation is to ensure that these dilemmas are managed by principles of risk distribution and shared ethical principles that have evolved over time in pursuit of safer roads, as opposed to proposing a single governing moral standard.”
For this reason, the report involves experts from different areas, from academics and policymakers to automakers. It also focuses on three short- to medium-term aims. “It focuses on the management of road safety, risk and dilemmas, data and algorithm ethics, and then on the fair allocation of responsibility and liability,” he reveals.
Understand the implications
Einar Michaelsen, head of AI and emerging technologies at KPMG, underlines that it’s important to understand the implications of autonomous vehicles on the roads: “They will have to make decisions about whether they might need to crash into one thing or another. There are issues about data protection. Once you start talking about AI and personal data, there is a need for ethics. The automotive industry is no exception.”
One driver for setting ethical and safety standards for CAVs is public opinion. Among the general public, there is a reluctance to move toward autonomous vehicles despite claims that they should be safer than human-driven vehicles. There have been some widely publicized accidents, including those involving Tesla cars, and they tend to attract considerable public attention. While not common, they may unnecessarily heighten people’s fears about letting a machine drive them from A to B.
Michaelsen comments: “There is much uncharted territory for policymakers and OEMs, but we are working towards zero fatalities and zero emissions, and it would be hard to get there without autonomous vehicles. There is a need for a code of ethics around responsibility and liability. This includes protecting the identity of pedestrians who are captured on the cameras of the cars.”
Questions: wider ethics
Witton believes it’s all about the wider ethics, not just a case of moral authority. So, as well as being an academic discussion, the report considers societal needs and preferences before considering policies and legal frameworks. The questions it raises include:
- Are there dilemmas for which current ethics cannot find a satisfactory solution?
- What requirements for safety, human dignity, personal freedom of choice and data protection must be fulfilled before an automated system is approved?
- How should responsibility be distributed within new and complex networks of software and technical infrastructure?
As well as talking about the accidents that have occurred, Witton highlights concerns about the mass harvesting of vehicle data, such as location data, and reveals that many of these concerns have found their way to government departments. “Some of those departments are starting to consider what changes need to be made to road use as a result,” he claims, before stressing that there is growing awareness of the need to set artificial intelligence’s moral rules.
This will require new regulation, and the European Union is aiming to set standards for the use of AI technology; Witton thinks CAVs could soon be caught by this. He adds: “Technical discussions are also constantly ahead of where the legislation is. There is a need for these to grow together, according to the report.”
Different perspectives
Michaelsen thinks you can look at AI and data ethics for CAVs from a number of different angles. The key question is around consent. There is a need to consider how consent can be granted for the use of personal data, including location data, and whether carmakers and the wider CAV ecosystem can aggregate data to use it at what he calls a higher level.
He explains: “Personal data has a clear need for consent, but there are also grey areas around collating data that has a public interest. Does this require the data protection lens? The other part is the ethics around processing images, which could be resolved by edge processing to allow the results of the images to be sent and not the images themselves.” These include photos of pedestrians from vehicle cameras. New types of personal data will also be generated by the car, using sensors and cameras. “With this comes the need to protect individuals’ privacy,” he emphasizes.
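To make the edge-processing point concrete, here is a minimal sketch, assuming a hypothetical on-board pedestrian detector and a stand-in telemetry call: the raw camera frame stays on the vehicle, and only derived, non-identifying results are transmitted. None of the names below come from the report or from any OEM’s actual API.

```python
# Minimal sketch of privacy-preserving edge processing (hypothetical names).
# The raw image is analysed on the vehicle; only a derived summary leaves it.

from dataclasses import dataclass, asdict
import json


@dataclass
class DetectionSummary:
    frame_id: int
    pedestrian_count: int
    nearest_distance_m: float  # coarse range only, no imagery or identities


def run_local_detector(image_bytes: bytes) -> list[dict]:
    """Stub for an on-board model; a real detector would run here, on-device."""
    return [{"distance_m": 12.4}, {"distance_m": 31.0}]


def process_frame_on_vehicle(frame_id: int, image_bytes: bytes) -> DetectionSummary:
    """Keep the frame at the edge and return only non-identifying results."""
    detections = run_local_detector(image_bytes)
    nearest = min((d["distance_m"] for d in detections), default=float("inf"))
    return DetectionSummary(frame_id, len(detections), round(nearest, 1))


def send_to_backoffice(summary: DetectionSummary) -> None:
    """Transmit the derived summary only; the raw frame is never sent."""
    payload = json.dumps(asdict(summary))
    print("sending:", payload)  # stand-in for a real telemetry call


send_to_backoffice(process_frame_on_vehicle(1, b"fake-frame"))
```

The design choice here mirrors Michaelsen’s suggestion: the privacy-sensitive artefact (the image) never crosses the network boundary, so the back office only ever receives results it can legitimately use.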
Who gives consent?
Consent management must consider who gives consent. “In the report, there is talk about giving consent once and whether consent is a prerequisite of using CAVs,” says Michaelsen, who adds: “They have started to work on answering these questions and, at the moment, there are a number of different data streams from the vehicles, which are sent to the OEMs’ back-office servers to improve the vehicle.” This data can be used for safety purposes, including vehicle-generated traffic information, and then there is pay-as-you-drive insurance. Each use case will require its own consent policy, as there is no one-size-fits-all policy. Automakers need to find a solution that isn’t tiresome for users, who still need to give consent, and there is a need for transparency so that it is clear what consent has been given and what data will be used.
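As a rough illustration of per-use-case consent, the sketch below models separate consent records for the purposes mentioned above (vehicle improvement, traffic information, pay-as-you-drive insurance). The field and use-case names are assumptions for illustration, not drawn from the report or from any OEM system.

```python
# Sketch of per-use-case consent records (hypothetical field names).
# There is no one-size-fits-all policy, so each purpose carries its own
# consent flag and a timestamp for transparency about what was agreed.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    use_case: str               # e.g. "vehicle_improvement", "traffic_info"
    data_categories: list[str]  # what data the purpose relies on
    granted: bool
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def consent_summary(records: list[ConsentRecord]) -> dict[str, bool]:
    """A transparent, per-purpose view of what the user has agreed to."""
    return {r.use_case: r.granted for r in records}


records = [
    ConsentRecord("vehicle_improvement", ["diagnostics"], granted=True),
    ConsentRecord("traffic_info", ["location"], granted=True),
    ConsentRecord("payd_insurance", ["location", "driving_style"], granted=False),
]
print(consent_summary(records))
```

A structure like this also addresses the transparency point: the user (or a regulator) can see, per purpose, what data is involved, whether consent was given, and when.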
Transparency
Witton adds that the collation and processing of personal data must be lawful, fair and transparent in accordance with data protection law. The black-box nature of the algorithms that control a CAV’s AI, and the need to explain how that AI works to CAV users, presents “an obvious challenge”. This raises the accountability principle of data protection law. Manufacturers will therefore need to explain how the AI works at both a technical and a layman’s level. He argues that the AI must be capable of being audited, too, because without accountability there is no trust.
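One way to read the auditability point is that every automated decision should leave a record a third party can later inspect. The sketch below is a hypothetical append-only decision log keyed by model version; the field names are illustrative and not taken from the report.

```python
# Hypothetical append-only decision log supporting later audit: which model
# version decided, on what summarised (non-identifying) inputs, and what it
# decided. One JSON line per decision.

import json
from datetime import datetime, timezone


def log_decision(log_path: str, model_version: str, inputs: dict, decision: str) -> None:
    """Append one auditable record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


log_decision("decisions.log", "planner-1.4.2",
             {"pedestrian_count": 2, "speed_kmh": 38}, "reduce_speed")
```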
He thinks automakers would be wise to check the report’s recommendations because it addresses technology that is already in use. Manufacturers may have had their own interpretation of the law, but they may now have to revisit those interpretations following the publication of the European Commission’s report, which will inevitably shift the market’s interpretation in line with this new guidance. “Manufacturers are traditionally keen to gather as much data from their vehicles as possible, so would look away from consent wherever possible,” he reveals.
However, he stresses that there is a legal obligation under data protection law on developers of CAVs and their software to build privacy into the product by design and by default. He therefore concludes: “The standard to which these developers are held is often interpreted through guidance and public opinion, as opposed to the law itself.”
This will necessitate the automotive industry and the wider CAV ecosystem working together, and with policymakers, even though collaboration creates its own questions about the ethics of data sharing and about how to ensure transparency in algorithmic decision-making. Ultimately, the main goal is to improve road safety. The protection of people’s privacy is at best likely to be a secondary concern, albeit still a vital one.