Why AI Needs to be Humanized

Connected and autonomous vehicles are highly reliant on artificial intelligence (AI) to operate.
This led start-up Humanising Autonomy, a UK-based developer of behavioral and computer vision AI, to consider the ethics involved and why AI needs to be humanized. Core to this proposition is the need to bring context to deep visual AI in autonomous mobility, including connected vehicles and traffic sensors, with the aim of achieving automation without infringing ethical standards.
With the deployment of AI for CAVs and for the infrastructure around them, such as V2X technology, artificial intelligence needs to understand human behavior. Behavioral modelling can add predictions of human psychology to, for example, ADAS, as well as to baseline sensors and cameras. The aim is to provide ethical human-machine context to virtually any connected video feed.
The venture caught the backing of the now troubled Silicon Valley Bank’s UK subsidiary, which, following the collapse of its parent, was bought by HSBC for £1. The parent collapsed in March 2023 because of a run on the bank – the second largest bank failure in US history. SVB’s collapse sent ripples across the world and left billions of dollars belonging to companies and investors stranded.
It’s not clear what the impact has been on Humanising Autonomy. However, before the collapse of the bank, Bailey Morrow, Silicon Valley Bank UK’s managing director for venture and growth, said in February 2023: “This technology is critically important in helping bridge the gap between humans and machines, allowing both to interact and create a safer environment. With continued advancements in AI, we think Humanising Autonomy is well positioned to create an ethical and conscious foundation for the future of behavioral AI.”
Ethics: key topic of development
Speaking to TU-Automotive, Robert Camm, senior consultant in Frost & Sullivan’s mobility department, said that ethics has become a key topic of development and debate. “As a leader in autonomous driving legislation, Germany has been one of the first to tackle this ethical issue with regards to the development and programming of autonomous vehicles,” he comments.
Franziska Poszler, associate researcher at the Institute for Ethics in Artificial Intelligence (IEAI) and a doctoral candidate at the Peter Löscher Chair of Business Ethics at the Technical University of Munich (TUM) School of Social Sciences and Technology, adds that the increasing sophistication, capabilities and autonomy of emerging technologies mean that more and more of individuals’ decisions and activities are being mediated by, or outsourced to, technological systems.
Subsequently, Poszler remarks that “scholars in the field of AI ethics aim to investigate societal implications and develop responsible governance strategies and design processes to ensure that pertinent technologies remain human-centric”. She adds that the development of CAVs is raising questions and ethical issues, all of which need to be proactively considered, investigated and governed. The questions include:
- What are adequate measures and necessary testing and auditing procedures to ensure technical robustness and safety of CAVs?
- How can users’ privacy be safeguarded while the increased collection and processing of data needed for the sound operation of CAVs takes place?
- What is the impact of CAVs on the environment (e.g., in terms of mobility and carbon emissions)?
- How can we ensure that CAVs make fair, non-discriminatory decisions in traffic?
TUM is addressing these questions in its ANDRE project, which is investigating how to integrate ethical behavior into the trajectory planning of a CAV.
Addressing AI ethics
The question of AI and CAV ethics is also being addressed in countries such as Germany and the UK. In 2016, for example, Germany established an ethics commission “to define ethical rules for development of autonomous vehicles,” says Camm. This commission involves industry stakeholders, legal professionals, and research organizations.
Camm says the ethical rules focus on the following themes: The overall purpose of automated systems; responsibility of the systems; rules for handling dilemma situations; hazardous situations; liability for damage; data privacy and use, including data business models; clarity when the system is engaged, and rules regarding handover scenarios; emergency scenarios; and education of the public regarding automated systems.
Following the establishment of the commission, the German government published an action plan, the purpose of which is to implement the recommendations into legislation and to develop standards for automated vehicles. The plan includes an amendment to the Road Traffic Act so that technological advances for automated vehicles are consistently reviewed and the regulations updated accordingly.
Camm adds that it includes: a review of data protection requirements and creating a balance between necessary data for safety whilst protecting driver rights; a consideration of self-learning systems to be tested to ensure they meet safety requirements for vehicle control, without resulting in full road user surveillance; work to raise public perception of these systems, make the systems accessible, however without forcing system use; and work on international standardization of automated and connected systems.
The Federal Government in Germany is continuing its development of what Camm describes as an appropriate regulatory framework for the programming of self-driving computers. The aim is to enshrine principles and ethical guidelines, including that the prevention of personal injury must always take precedence over the prevention of damage to property, and that any distinction between individuals based on personal features is impermissible.
Responsible innovation
In the UK, by comparison, the government is developing legislation and approval processes for automated vehicles. Camm explains: “The Centre for Data Ethics and Innovation is focusing on the topic of responsible innovation, particularly the wider use of AI. However, it also has a focus on automated driving.” He says the UK government published a report in August 2022 that considered responsible innovation in self-driving vehicles and outlines the future direction of their development.
He believes that there is more to come as much development is required over the course of the next decade, a period during which there will be an increasing push to commercialize autonomous vehicle technology. So, there will be a need to ensure that systems are developed in the right way, with strict guidelines for automakers and their suppliers. There is also the ethical consideration of ensuring that systems are used in the right way and labelled according to their capabilities and limitations.
An example of this is the controversy over Tesla’s Autopilot system. Accidents have occurred with this system engaged because users have overestimated the degree of autonomy the system has. “Automated driving is a spectrum of functionality, whether it’s hands-off, eyes-off, highway or urban capability,” he suggests, before revealing that there will be EU legislation for automated vehicles in 2024. This will seek to establish a standard across the EU. Presently, Germany is the most active country in developing standards, policies and legislation for CAVs.
Infringing ethical standards
As to the question of how automation infringes on ethical standards and how human behavior modelling prevents it, Poszler doesn’t think that automation necessarily infringes ethical standards. However, she argues that during the development of AI-enabled technologies there is a need to address particular biases that could shape the development process and manifest in the functionality of the technology: “These biases could either emerge implicitly, for example, when a technology engages in machine learning and based on this process derives its own behavioral decisions, which can however be considered ‘unethical’. On the other hand, underlying biases could also be a result of the (economic) preferences that the programmer or company is guided by and embeds on purpose.”
In the case of autonomous driving, she says automakers could develop CAVs that prioritize the safety of their passengers, their customers, over that of pedestrians. This choice could have disastrous results; arguably, CAVs should be designed to prevent injury or death in the case of an accident both for those inside a vehicle and those outside, perhaps giving priority to the more vulnerable road users. The safety bar should be set high, and certain choices could lower it with calamitous consequences.
Poszler adds: “To sum up, without assessing and adjusting the functionality of algorithms, the ethical requirement of ‘non-discrimination and fairness’ could be infringed. I also don’t think replicating human behavior is a clear-cut, the best, or the sole solution since, by doing so, we may commit the naturalistic fallacy, meaning we could risk adopting behaviors from individuals for technologies that are not ‘ethical’ or that are even illegal.” By illegal, she refers to the breaking of traffic rules.
Humanizing AI
So why do AI, CAV and smart city ethics need to be humanized? At Level 4 autonomy, autonomous vehicles are designed to operate without human intervention. They will therefore make a wide range of decisions, such as when to brake and how much distance to maintain from other road users, without consulting a human mind. Each of these decisions carries ethical dimensions, particularly whenever a trajectory must be selected in an accident situation to minimize damage to the vehicle, other vehicles, and any people involved.
Autonomous vehicles need to have the ability, which most humans have, to determine the risks and the ethics of their actions. From a CAV perspective, this will be determined by the legislation, algorithms, processes, and industry rules and regulations that are put in place. This means that risks will need to be distributed fairly by CAVs.
Poszler reveals that the ANDRE project has adopted a combination of five ethical principles that all flow into the cost function of a CAV. These include prioritizing the worst-off, equal treatment of people, responsibility, and related considerations, while establishing a threshold for the maximum acceptable risk. In this regard there is a need to consider the impact of selfish algorithms that protect the people inside a vehicle over external third parties, versus algorithms that do the opposite. With an ethical algorithm actuated, vulnerable road users such as cyclists are better protected.
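To make the idea of ethical principles flowing into a cost function concrete, here is a minimal sketch in Python. It is an illustrative assumption, not the ANDRE project's actual implementation: all names, weights and the risk threshold are hypothetical, and it simply shows how equal treatment, worst-off prioritization and a maximum acceptable risk could jointly score candidate trajectories.

```python
# Hypothetical sketch of an ethical cost function for CAV trajectory
# selection, loosely inspired by the principles described for the ANDRE
# project. All names, weights and thresholds are illustrative assumptions.
from dataclasses import dataclass

MAX_ACCEPTABLE_RISK = 0.2  # assumed cap on any single road user's risk


@dataclass
class RoadUserRisk:
    user_type: str  # e.g. "pedestrian", "cyclist", "occupant"
    risk: float     # estimated probability of harm on this trajectory


def trajectory_cost(risks: list[RoadUserRisk]) -> float:
    """Score a candidate trajectory; lower is better."""
    # Equal treatment: every road user's risk counts the same,
    # whether they are inside or outside the vehicle.
    total = sum(r.risk for r in risks)
    # Worst-off prioritization: extra penalty on the highest individual
    # risk, so harm is not concentrated on one person.
    worst = max(r.risk for r in risks)
    cost = total + 2.0 * worst  # the weight of 2.0 is an assumption
    # Maximum acceptable risk: trajectories pushing anyone past the
    # threshold are effectively ruled out.
    if worst > MAX_ACCEPTABLE_RISK:
        cost = float("inf")
    return cost


def pick_trajectory(candidates: dict[str, list[RoadUserRisk]]) -> str:
    """Return the name of the lowest-cost candidate trajectory."""
    return min(candidates, key=lambda name: trajectory_cost(candidates[name]))
```

In this sketch, a "selfish" algorithm would instead weight occupant risks more heavily than those of external road users; here the equal-treatment term and the worst-off penalty push the planner toward trajectories that protect vulnerable road users such as cyclists.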
She concludes: “So the project itself is helping to raise awareness of the need to integrate ethical considerations within the programming of AVs since this would increase safety levels for everyone in traffic and not just for those individuals who (are able to) own CAVs.”