written by

William Webster

Researcher, Avoncourt Partners GmbH

Culture Blog - Mar 11, 2018

Can we trust AI decision-making?

In May 2016, Nvidia demonstrated a self-driving car, built on its NVIDIA DRIVE PX platform, that used deep learning to drive itself on public roads while blending in with ordinary traffic. It stopped at red lights, used its turn signal, and reacted well to unexpected situations, showing solid decision-making. It had observed human driving and taught itself the rules of the road, using algorithms that humans can’t fully follow. Such machines can weigh a large number of variables independently and “learn” which ones matter for the task they are trying to achieve.
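To give a feel for what “learning by observing human driving” means, here is a deliberately toy sketch in Python. It is not Nvidia’s system: the sensor features, shapes, and the simple least-squares model are all invented for illustration. The point is only that the machine fits its own mapping from example situations to the actions a human took, rather than following rules anyone wrote down.

```python
import numpy as np

# Toy "behavioral cloning": fit a model that maps sensor readings recorded while
# a human drives to the steering commands the human issued at the same moments.
# All features and shapes below are made up for illustration.
rng = np.random.default_rng(0)
human_sensor_logs = rng.normal(size=(10_000, 8))   # 8 hypothetical sensor features per frame
true_policy = rng.normal(size=8)                   # the (unknown) human driving "rule"
human_steering = human_sensor_logs @ true_policy   # steering angle the human chose

# Least-squares fit: the machine "learns the rules of driving" from examples alone,
# without anyone writing those rules down explicitly.
learned_policy, *_ = np.linalg.lstsq(human_sensor_logs, human_steering, rcond=None)

# The learned policy now produces steering decisions for frames it has never seen,
# and nothing in the fitted numbers explains *why* it steers the way it does.
new_frame = rng.normal(size=8)
print("predicted steering:", new_frame @ learned_policy)
```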


Amazing decision-making by a machine, right?

Unless, that is, the machines in question are self-driving battle tanks or fighter jets that can wreak destruction on civilians. Or unless the self-driving cars stop at green lights or steer themselves into trees.

AI algorithms hard to follow

Engineers are having a tough time understanding the highly complex algorithms behind AI decision-making. And what we don’t fully understand, we should be slow to trust. This is why the public remains so skeptical.

Knowing how AI reasons is indispensable if the technology is to become a common and useful part of our lives. Ruslan Salakhutdinov, director of AI research at Apple and an associate professor at Carnegie Mellon University, sees the inability to explain AI’s thinking as the bottleneck in the relationship between humans and intelligent machines. If we can better understand AI systems, we can know why they recommend and do the things they recommend and do. “It’s going to introduce trust,” he says.

Robots needing human trust


There are many things humans do that they cannot explain, even to themselves. We cannot account for human behavior in every detail, and it may likewise not be possible for AI to explain everything it does. “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Jeff Clune, of the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.” Yet it is important that AI robots’ decision-making be consistent with our ethical judgments.

Daniel Dennett, philosopher and cognitive scientist at Tufts University, agrees. “I think by all means if we’re going to use these things [AI robots] and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible,” he says. “If it can’t do better than us at explaining what it’s doing, then don’t trust it.”

A way to gain trust in AI

Still, there is another technology in development that may help us understand AI decision-making processes: blockchain. Until that technology matures, AI decision-making needs to undergo human auditing. That means a great deal of data and work, so we need something that can better help us track why a robot made a particular decision. If decisions are recorded on a blockchain, datapoint by datapoint, they become far simpler to audit, with confidence that the record has not been tampered with between the moment it was written and the start of the audit.
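A minimal sketch of the idea, in Python, might look like the following. This is not a real blockchain and not any particular product: it is a hypothetical append-only log in which each recorded decision carries a hash linking it to the previous entry, so an auditor can later detect whether any entry was altered after the fact. The class name, fields, and example decisions are all invented for illustration.

```python
import hashlib
import json
import time

def _hash_block(block: dict) -> str:
    """Deterministically hash a block's contents with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

class DecisionLedger:
    """Append-only, hash-chained log of AI decisions (hypothetical example)."""

    def __init__(self):
        self.chain = []

    def record(self, inputs: dict, decision: str) -> dict:
        """Record one decision datapoint, linking it to the previous entry."""
        block = {
            "timestamp": time.time(),
            "inputs": inputs,
            "decision": decision,
            "prev_hash": self.chain[-1]["hash"] if self.chain else None,
        }
        block["hash"] = _hash_block({k: v for k, v in block.items() if k != "hash"})
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        """Re-hash every entry; any tampering breaks the chain."""
        for i, block in enumerate(self.chain):
            body = {k: v for k, v in block.items() if k != "hash"}
            if block["hash"] != _hash_block(body):
                return False
            expected_prev = self.chain[i - 1]["hash"] if i > 0 else None
            if block["prev_hash"] != expected_prev:
                return False
        return True

# Example: an auditor replays the log and checks its integrity.
ledger = DecisionLedger()
ledger.record({"light": "red", "speed_kmh": 42}, "brake")
ledger.record({"light": "green", "speed_kmh": 0}, "accelerate")
print(ledger.verify())  # True unless any entry was altered after being recorded
```

In a real deployment the entries would be written to a distributed ledger rather than a single in-memory list, which is what would give auditors confidence that no single party could rewrite the history.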

No matter how clearly we can see that AI offers huge advantages to people, if it isn’t trusted by the public, then its success and usefulness will ultimately be limited. Recording the decision-making process on a blockchain could be a step towards achieving the level of transparency and insight into robot minds that we need in order to gain public trust.