One of the challenges of using artificial intelligence solutions in the enterprise is that the technology operates in what is commonly referred to as a black box. Often, artificial intelligence (AI) applications employ neural networks that produce results using algorithms too complex for humans to interpret. In other instances, AI vendors will not reveal how their AI works. In either case, when conventional AI produces a decision, human end users don’t know how it arrived at its conclusions.
This black box can pose a significant obstacle. Even though a computer processes the information and makes the recommendation, the computer does not have the final say. That responsibility falls to a human decision maker, who is held accountable for any negative consequences. In many current use cases of AI, this isn’t a major concern, as the potential fallout from a “wrong” decision is likely very low.
However, as AI applications have expanded, machines are being tasked with making decisions where millions of dollars — or even human health and safety — are on the line. In highly regulated, high-risk/high-value industries, there’s simply too much at stake to trust the decisions of a machine at face value, with no understanding of its reasoning or the risks associated with its recommendations. These enterprises are increasingly demanding explainable AI (XAI).
The AI industry has taken notice. XAI was the subject of a symposium at the 2017 Conference on Neural Information Processing Systems (NIPS), and DARPA has invested in a research project to explore explainability.
Cognitive, bio-inspired AI solutions that employ human-like reasoning and problem-solving let users look inside the black box. In contrast to conventional AI approaches, cognitive AI solutions pursue knowledge using symbolic logic on top of numerical data processing techniques like machine learning, neural networks and deep learning.
The neural networks employed by conventional AI must be trained on data, but they don’t have to understand it the way humans do. They “see” data as a series of numbers, label those numbers based on how they were trained and solve problems using pattern recognition. When presented with data, a neural net asks itself whether it has seen similar data before and, if so, how that data was labeled previously.
In contrast, cognitive AI is based on concepts. A concept can be described at the strict relational level, or natural language components can be added that allow the AI to explain itself. A cognitive AI says to itself: “I have been educated to understand this kind of problem. You’re presenting me with a set of features, so I need to manipulate those features relative to my education.”
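To make the contrast concrete, a symbolic layer might apply explicit, inspectable rules to features that a numerical model has already extracted. This is a toy sketch; the rule names, feature names and thresholds are invented for illustration:

```python
# Toy symbolic layer: explicit rules applied to features extracted by a
# numerical model. Each rule is named, so the reasoning stays inspectable.
rules = [
    # (rule name, condition on features, conclusion)
    ("high-vibration", lambda f: f["vibration"] > 0.8, "possible mechanical fault"),
    ("overheating",    lambda f: f["temp_c"] > 90,     "thermal stress"),
]

def infer(features):
    """Apply each rule and record which ones fired."""
    return [(name, conclusion)
            for name, condition, conclusion in rules
            if condition(features)]

# Because each fired rule names itself, the system can report *why* it
# reached a conclusion, not just the conclusion.
print(infer({"vibration": 0.9, "temp_c": 75}))
```

A pure neural net would output only a score; here, the trace of fired rules is itself the explanation.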
Cognitive systems do not do away with neural nets, but they do interpret the outputs of neural nets and provide a narrative annotation. Decisions made by cognitive AI are delivered in clear audit trails that can be understood by humans and queried for more detail. These audit trails explain the reasoning behind the AI’s recommendations, along with the evidence, risk, confidence and uncertainty.
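As an illustration, one entry in such an audit trail might be represented as a structure like the following. The schema and field values are hypothetical, not drawn from any particular product:

```python
from dataclasses import dataclass

@dataclass
class AuditEntry:
    """One step in a cognitive AI's reasoning chain (hypothetical schema)."""
    conclusion: str    # what the system inferred at this step
    evidence: list     # data points or prior conclusions relied on
    confidence: float  # how strongly the evidence supports the conclusion
    risk: str          # estimated cost of acting on a wrong conclusion
    narrative: str     # human-readable explanation of the step

trail = [
    AuditEntry(
        conclusion="Bearing wear likely",
        evidence=["vibration spike at 2.3 kHz", "temperature trend +4 C/day"],
        confidence=0.87,
        risk="high: unplanned line shutdown",
        narrative="Vibration and temperature patterns match a known bearing-wear profile.",
    ),
]

# A human reviewer can query the trail for the reasoning behind a recommendation.
for step in trail:
    print(f"{step.conclusion} (confidence {step.confidence:.0%}): {step.narrative}")
```

Each entry carries its own evidence and confidence, so the trail can be audited step by step rather than accepted as a single opaque answer.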
Explainability can mean different things to different people, depending on who requires the explanation. Generally speaking, the higher the stakes, the more explainability is required. Explanations can be very detailed, showing the individual pieces of data and decision points used to derive the answer, or they can take the form of summary reports written for the end user. A robust cognitive AI system can automatically adjust the depth and detail of its explanations based on who is viewing the information and the context in which it will be used.
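A minimal sketch of that audience-based adjustment might look like this (the audience labels and trail contents are invented for illustration):

```python
def explain(trail, audience):
    """Tailor an explanation to the viewer (illustrative sketch).

    An 'executive' gets a one-line summary; an 'engineer' gets every
    decision point along with the data behind it.
    """
    if audience == "executive":
        return trail[0]["summary"]
    # Full depth: every step with its supporting evidence.
    return "\n".join(
        f"{step['summary']} -- evidence: {', '.join(step['evidence'])}"
        for step in trail
    )

trail = [
    {"summary": "Pump output will drop 12% this week",
     "evidence": ["flow-rate trend", "seal pressure reading"]},
    {"summary": "Seal degradation detected",
     "evidence": ["pressure drop of 0.4 bar over 48 h"]},
]

print(explain(trail, "executive"))  # one-line answer
print(explain(trail, "engineer"))   # full decision points and data
```

The same underlying trail serves both audiences; only the rendering depth changes.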
In most cases, the easiest way for humans to visualize decision processes is through decision trees, with the top of the tree containing the least amount of information and the bottom containing the most. With this in mind, explainability can generally be categorized as either top-down or bottom-up.
The top-down approach is for end users who are not interested in the nitty-gritty details; they just want to know whether an answer is correct. For example, a cognitive AI monitoring factory equipment might predict what that equipment will produce in its current condition. More technical users can then look at the detail, determine the cause of an issue and hand it off to engineers to fix. The bottom-up approach is useful to the engineers who must diagnose and fix the problem. These users can query the cognitive AI, go all the way to the bottom of the decision tree and examine the details that explain the conclusion at the top.
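The two traversal styles can be sketched over a simple explanation tree. The node contents here are invented; the point is only that the root holds the terse answer and the leaves hold the supporting detail:

```python
class Node:
    """A node in an explanation tree: terse at the top, detailed at the leaves."""
    def __init__(self, summary, children=None):
        self.summary = summary
        self.children = children or []

def top_down(node):
    """End-user view: just the answer at the root."""
    return node.summary

def bottom_up(node, depth=0):
    """Engineer view: walk the whole tree down to the supporting details."""
    lines = ["  " * depth + node.summary]
    for child in node.children:
        lines.extend(bottom_up(child, depth + 1))
    return lines

tree = Node("Output will fall below target", [
    Node("Motor running hot", [
        Node("Coolant flow 30% below spec"),
        Node("Ambient temperature normal"),
    ]),
])

print(top_down(tree))               # the root-level answer
print("\n".join(bottom_up(tree)))   # full detail for diagnosis
```

A non-technical user stops at the root; an engineer follows the branches down until the evidence is specific enough to act on.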
Explainable AI begins with people. AI engineers can work with subject matter experts and learn about their domains, studying their work from an algorithm/process/detective perspective. What the engineers learn is encoded into a knowledge base that enables the cognitive AI to verify its recommendations and explain its reasoning in a way that humans can understand.
A cognitive AI is future-proof. Although governments have been slow to regulate AI, legislatures are catching up. The European Union’s General Data Protection Regulation (GDPR), a data governance and privacy law that went into effect in May 2018, grants consumers the right to know when automated decisions are being made about them, the right to have these decisions explained and the right to opt out of automated decision-making completely. Enterprises that adopt XAI now will be prepared for future compliance mandates.
AI is not supposed to replace human decision making; it is supposed to help humans make better decisions. If people do not trust the decision-making capabilities of an AI system, these systems will never achieve wide adoption. For humans to trust AI, systems must not lock their secrets inside a black box. XAI provides that transparency.