
The Case for Coding Automated Vehicles With Human Values

(Photo: A driver demonstrates Continental's hands-free "Cruising Chauffeur" motorway self-driving system at a media event in Hanover, Germany, June 2017. Alexander Koerner/Getty Images)

Michelle Lazarus
Associate Professor, Faculty of Medicine, Nursing and Health Sciences


While fully self-driving cars are a hypothetical product of the future, some levels of autonomous vehicles (AVs) are already here.

As with other forms of AI, humans must weigh the costs and benefits of incorporating this new technology into their lives.

On the upside, AVs could support sustainable transport by reducing congestion and fossil fuel use, enhance road safety, and provide accessible transport to underserved communities, including those without access to a driver’s licence.

Despite these benefits, many people remain hesitant to use fully automated AVs.
In one Australian study led by Monash University’s Sjaan Koppel, 42% of participants said they would “never” use an automated vehicle to transport their unaccompanied children, while only 7% said they would “definitely” use one.

Our distrust of AI seems to stem from a fear that the machine will take over and make errors, or decisions misaligned with human values, as depicted in Christine, the 1983 film adaptation of Stephen King’s horror novel about a murderous car. We fear being kept increasingly out of the loop of machines’ actions.

Trust and technology

Six levels of driving automation are commonly described, from level zero, “no automation”, to level five, “full driving automation”, at which humans are defined only as “passengers”.
Currently, levels zero to two are available to consumers, while level three – “conditional automation” – has some limited commercial availability. The second-highest level of automation, level four, or “high automation”, is now being tested.

AVs available to consumers today require drivers to monitor the automation and override it as needed.

To ensure AVs don’t become Christine and develop minds of their own, AI programmers use a process called value alignment. This alignment becomes particularly important as increasingly autonomous levels of vehicles are developed and tested.

Value alignment involves programming AI – explicitly, in the case of knowledge-based systems, or implicitly, via “learning”, in the case of neural networks – to behave in a manner that represents human goals.

For AVs, alignment would differ somewhat depending on the vehicle’s intended use and location, but would likely consider cultural values alongside local laws and governance (for example, pulling over for an ambulance).

The ‘trolley problem’

AV alignment is not a simple task. It gets tricky when vehicles encounter a real-world challenge such as the “trolley problem”.

First credited to philosopher Philippa Foot in 1967, the trolley problem has us consider human morals and ethics. Adapted for AVs, the trolley problem can help us consider to what extent AV alignment is possible.

Consider the following scenario: A fully-automated AV is heading for a crash and must act. It can swerve right to avoid five people but hit one person, or swerve left to avoid the one person but place the five in danger.

What action should the AV take? Which option is most aligned with human values?

Now consider this scenario: What if the vehicle were a level one or two AV, with the driver retaining control – which direction would you steer when the AV’s “warning” sounded?

What if the choice was between five adults and one child?

What if the one person was your mum or dad?

You might be relieved to know that the trolley problem was never meant to have a “correct” answer.

Read more: Your self-driving car won’t kill you – as long as research focuses on people and society, too

What this problem illustrates is that “aligning” AVs with human values is not straightforward.

Consider Google’s mishap with Gemini, in which an attempt at alignment – in this case, programming the large language model to reduce racism and gender stereotypes – resulted in misinformation and absurdity (for example, Nazi-era soldiers depicted as people of colour).

Alignment isn’t simple to achieve, and even deciding whose values and goals to align with remains challenging.

But the opportunity to align AVs with human values has its upsides.
Aligned AVs could make driving safer: humans tend to overestimate their own driving ability, and the majority of crashes involve human error such as speeding, distraction or fatigue.

Could AVs instead help us align our own driving to be safer and more reliable? After all, technology such as lane-keeping assist and adaptive cruise control is already helping us drive more safely in level one AVs.

Human alignment … for humans or AI?

As these vehicles become more common on our roads, it’s clear that helping humans drive AVs responsibly is increasingly important.

Our ability to make effective decisions and drive safely in collaboration with AV technology is paramount.

Concerningly, research shows humans have a tendency to over-rely on automated systems, such as AVs, and this automation bias is a hard habit to break. We tend to perceive technology as infallible.

“Death by GPS” is now a widely-used expression because of our inclination to blindly follow navigation systems – even when there is incontrovertible evidence that the technology is wrong. (You may recall the case of the tourists who drove into a bay in Queensland after trying to “drive” to North Stradbroke Island.)

What the AV trolley problem reveals is that the technology can be just as fallible as humans (perhaps more so, given its disembodied awareness of the world), but possibly for different reasons.

Read more: Will self-driving cars solve the problem of traffic congestion?

The dystopian scenario where AI “takes over” may not be as dramatic as we’re led to believe. A greater threat to AV safety may be a quiet but very real readiness of humans to simply hand over control to the AI.

Our uncritical engagement with AI is impacting the way we think, and dulling our senses, including our sense of direction. What all this means is our driving skills are likely to suffer as we become increasingly complacent in the face of technology.

While the future may include level five AVs, the present still relies on human decision-making, and our very human capability of scepticism.

Exposing drivers to AV failures can counter automation bias. Combined with demands for greater transparency in AI systems’ decision-making, this could give AVs the power to augment, even enhance, human-led road safety.

Originally published under Creative Commons by 360info™.

This article was first published on Monash Lens. Read the original article.
