Source: MIL-OSI Submissions
Source: University of Canterbury
Would you trust your life to an autonomous vehicle? Do you understand how it will respond in dangerous situations? Are you willing to get in without knowing the risks?
With an increasing number of autonomous vehicles on our roads, there are questions to consider. Unlike human drivers, these machines have no moral intuition; every ‘decision’ is a calculated process.
“There’s a certain amount of risk involved when you step into an autonomous vehicle,” says Associate Professor Christoph Bartneck of the Human Interface Technology Lab New Zealand (HIT Lab NZ) in UC’s College of Engineering. “These machines are not perfect; they will fail and they will hurt people. People have already died in incidents involving autonomous vehicles as a consequence of how they were programmed.”
He explains that the manufacturers of autonomous vehicles have a responsibility to clearly outline the risks and uncertainties of using this technology.
“Communicating risk and uncertainty is one of the most challenging science communication tasks because it’s based on advanced mathematical concepts, which people often struggle to understand,” he says.
“Doctors have to explain to patients that particular treatments have a certain probability of succeeding or failing, and that those treatments come with a certain risk of side-effects. A lot of research has been done in the medical space around communicating risk, so they’re much further along than we are in Human Robot Interaction,” says his co-researcher, Professor Moltchanova.
In their latest research, Professor Moltchanova and Associate Professor Bartneck investigated using different phrases and words to communicate risk and uncertainty surrounding autonomous vehicles.
Participants were presented with a random series of sentences, each containing a word or phrase such as ‘probably’, ‘likely’, or ‘almost no chance’ to describe how likely a situation was to happen. They were then asked to choose, on the scales provided, the percentage probability they believed corresponded to that word.
“We wanted to see whether there was a correlation between words and numbers. For example, if we say that something is very likely, do people think there’s a 50, 70 or 90 percent probability that it will happen?” Professor Moltchanova says.
Generally, the research showed there is a reasonable correlation between percentages and words.
“At each end of the scale there was a strong correlation; participants matched ‘highly likely’ with 100 percent and ‘almost no chance’ with 0 percent. In the middle, around 60 to 80 percent, things were less clear, with words and phrases being used interchangeably,” she says.
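To make this kind of word-to-number analysis concrete, here is a minimal Python sketch; the phrases, the hypothetical responses and the simple spread measure are illustrative assumptions, not data or code from the study.

```python
# Illustrative sketch only: hypothetical participants map verbal probability
# phrases to percentage estimates, in the spirit of the study described above.
# All phrases and numbers below are invented for demonstration.
from statistics import mean, median

responses = {
    "almost no chance": [0, 1, 2, 5],
    "unlikely":         [15, 20, 25, 30],
    "probably":         [55, 60, 70, 80],
    "very likely":      [70, 80, 90, 95],
    "highly likely":    [95, 98, 100, 100],
}

for phrase, estimates in responses.items():
    # The spread (max - min, in percentage points) is a crude indicator of how
    # consistently a phrase is interpreted: small at the extremes, larger in the middle.
    spread = max(estimates) - min(estimates)
    print(f"{phrase:>17}: mean={mean(estimates):5.1f}%  "
          f"median={median(estimates):5.1f}%  spread={spread} pp")
```

A pattern like this, with tight agreement at the extremes and a larger spread for mid-range phrases, mirrors the result described above.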
Associate Professor Bartneck says ideally both words and numbers should be used when communicating uncertainty around autonomous vehicles to make the information as accessible as possible.
He says their research also showed that people confuse the probability of an event occurring with the certainty of a statement about that event, and that it is important to distinguish between risk and uncertainty in public communication: risk focuses on negative events, while uncertainty also includes positive outcomes.
“We need to tell people what the probability of an event happening is and how certain we are about that. These are two different things, but people struggle to separate them,” Associate Professor Bartneck says.
“For example, if I say ‘I am very certain that the car is not going to crash’ I’m saying that the probability of a crash is very low, but my certainty or confidence in this statement is very high.”
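As a minimal numerical illustration of that point, the sketch below assumes a Beta-distribution model, which is not taken from the article or the underlying study, to contrast two cases with the same low crash probability but very different certainty about that figure.

```python
# Illustrative sketch only: the same low probability of a crash can come with
# very different levels of certainty. Beta distributions are assumed here purely
# for demonstration; they are not the study's model.
from scipy.stats import beta

scenarios = {
    # (alpha, beta) chosen so the mean crash probability is 2% in both cases,
    # but the second is backed by far more evidence, i.e. higher certainty.
    "low certainty":  (2, 98),
    "high certainty": (200, 9800),
}

for label, (a, b) in scenarios.items():
    dist = beta(a, b)
    low, high = dist.interval(0.95)  # 95% interval for the crash probability itself
    print(f"{label:>14}: mean crash probability = {dist.mean():.3f}, "
          f"95% interval = [{low:.4f}, {high:.4f}]")
```

Both cases say “a crash is very unlikely” (around 2 percent), but the much wider interval in the first case signals far less confidence in that statement.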
The researchers found negative and double-negative expressions of uncertainty should be avoided to help people understand the difference between probability and certainty.
They highlight that autonomous vehicles are not perfect, and that until an autonomous vehicle performs at least as well as a human driver, those who wish to use one must be aware of, and agree to, the risks involved.
“It will never be perfect, but we must do what we can to communicate uncertainty about technology such as autonomous vehicles as best as we can, because the consequences of not understanding are so dangerous,” Associate Professor Bartneck says.
Professor Moltchanova and Associate Professor Bartneck’s research article is available online.