14 February 2020
Originally Posted: 07 February 2020 for InsideBigData


In this special guest feature, AJ Abdallat, CEO of Beyond Limits, takes a look at the tech industry’s hype cycle, in particular how it often falls short of expectations when it comes to AI. Beyond Limits is a full-stack artificial intelligence engineering company creating advanced software solutions that go beyond conventional AI. Founded in 2014, Beyond Limits is transforming proven technologies from Caltech and NASA’s Jet Propulsion Laboratory into advanced AI solutions, hardened to industrial strength and put to work for forward-looking companies on Earth.

Despite what we see in science fiction, artificial intelligence (AI) is not likely to produce sentient machines that will take over Earth, subordinate human beings, or change the hierarchy of the planet’s food chain. Nor will it be humanity’s savior.

AI essentially refers to the ability of machines to perform tasks that usually require human reasoning. The concept of artificial intelligence has existed for more than 60 years, and modern AI systems are revolutionizing how people live and work. However, conventional AI solutions do not use the technology to its fullest potential.


Decisions are usually made inside “black boxes”

Conventional AI solutions operate inside “black boxes,” unable to explain or substantiate their reasoning or decisions. These solutions depend on intricate neural networks that are too complex for people to understand. Companies that rely primarily on conventional AI approaches are in a quandary: they don’t know how or why the system reaches its conclusions, and most AI firms are unwilling, or unable, to divulge the inner workings of their technology.

However, these “smart” systems aren’t generally all that smart. They can process very large, complex data sets, but they cannot employ human-like reasoning or problem-solving. They “see” data as a series of numbers, label those numbers based on how they were trained, and depend on recognition to solve problems. When presented with data, a conventional AI system asks itself whether it has seen the information before and, if so, how it labeled that data last time. It cannot diagnose or solve problems in real time unless it can communicate with human operators.
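To make that “recognition, not reasoning” point concrete, here is a minimal, illustrative sketch (not drawn from any particular product) of a system that simply reuses the label of the closest example it has already seen. It returns an answer, but nothing a person could interrogate afterward.

```python
# Minimal sketch of recognition-only labeling: the system checks whether an
# input resembles something it was trained on and reuses that label, with no
# reasoning it can explain. All data and names here are hypothetical.

import math

# Hypothetical training data: feature vectors and the labels assigned to them.
TRAINED_EXAMPLES = [
    ((0.9, 0.1), "normal"),
    ((0.2, 0.8), "anomaly"),
]

def black_box_label(features):
    """Return the label of the closest previously seen example.

    The caller gets a label and nothing else: no evidence, no confidence,
    no audit trail explaining why this answer was produced.
    """
    closest_label, _ = min(
        ((label, math.dist(features, seen)) for seen, label in TRAINED_EXAMPLES),
        key=lambda pair: pair[1],
    )
    return closest_label

print(black_box_label((0.85, 0.15)))  # -> "normal", with no explanation attached
```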

There are scenarios where AI users may be less concerned about the reasoning behind a decision because the consequences of a negative outcome are minimal, such as algorithms that recommend items based on consumers’ purchasing or viewing history. However, trusting the decisions of black-box AI is extremely problematic in high-value, high-risk industries such as finance, healthcare, and energy, where machines may be tasked with making recommendations on which millions of dollars, or the safety and well-being of humans, hang in the balance.


Imperfect edge conditions complicate matters

Enterprises are increasingly deploying AI systems to monitor IoT devices in far-flung environments where humans are not always present and internet connectivity is spotty at best: think highway cams, drones that survey farmland, or oil rig infrastructure in the middle of the ocean. One-quarter of organizations with established IoT strategies are also investing in AI. In settings like these, a black-box system that cannot explain its decisions, and cannot reach a human operator to check them, becomes even harder to trust.


Cognitive AI solves these problems

Cognitive AI solutions solve these issues by employing human-like problem-solving and reasoning skills that let users see inside the black box. They do not replace the complex neural networks used by conventional solutions; instead, they interpret their outputs and use natural-language declarations to provide an annotated narrative that humans can understand. Cognitive AI systems understand how they solve problems and are also aware of the context that makes the information relevant. So instead of being asked to trust a machine’s conclusions implicitly, human users of cognitive AI can obtain audit trails that substantiate the system’s recommendations with evidence, risk assessment, certainty, and uncertainty.
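As a rough illustration of that idea, the sketch below wraps a raw model score in a structured recommendation that carries evidence, a risk level, certainty, and an ordered audit trail a person can review. The class, field names, and threshold are assumptions for illustration, not Beyond Limits’ actual software.

```python
# Hedged sketch: a cognitive layer that does not replace the underlying model,
# but interprets its output and returns a human-readable, auditable record.

from dataclasses import dataclass, field

@dataclass
class ExplainedRecommendation:
    recommendation: str
    certainty: float                                  # confidence, 0.0 to 1.0
    risk: str                                         # e.g. "low" or "high"
    evidence: list = field(default_factory=list)      # supporting facts
    audit_trail: list = field(default_factory=list)   # ordered reasoning steps

def interpret(model_output: float, sensor_readings: dict) -> ExplainedRecommendation:
    """Wrap a raw model score in an annotated narrative a person can review."""
    trail = [f"Model output score: {model_output:.2f}"]
    evidence = [f"{name} = {value}" for name, value in sensor_readings.items()]
    if model_output > 0.7:  # hypothetical threshold for flagging a problem
        trail.append("Score above 0.7 threshold: flagging for maintenance review.")
        return ExplainedRecommendation("schedule maintenance", model_output, "high",
                                       evidence, trail)
    trail.append("Score within normal range: no action recommended.")
    return ExplainedRecommendation("no action", 1 - model_output, "low", evidence, trail)

# Example: the caller sees not just a recommendation, but why it was made.
result = interpret(0.82, {"vibration_mm_s": 7.1, "temperature_c": 68})
print(result.recommendation, result.audit_trail)
```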

The level of “explainability” generated by an AI system is based on its use case. In general, the higher the stakes, the more explainability is needed. A robust cognitive AI system should have the autonomy to adjust the depth of its explanations based on who is viewing the information and in what context.

Audit trails in the form of decision trees are one of the most helpful ways to illustrate the reasoning behind a cognitive AI system’s recommendations. The top of a tree represents the minimum amount of information needed to explain a decision, while the bottom denotes explanations that go into the greatest detail. For this reason, explainability is classified into two categories: top-down and bottom-up.

The top-down approach is for end users who don’t require intricate details, only a ‘positive’ or ‘negative’ point of reference on whether an answer is correct. For example, a manager may suspect that a panel on a solar farm isn’t working properly and simply needs to know its status; a cognitive AI system could generate a prediction of how much energy the panel will produce in its current condition.

On the other hand, a bottom-up approach would be more useful for engineers dispatched to fix the problem. These users could query the cognitive AI system at any point along its decision tree and obtain detailed information and suggestions to remedy the problem.
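The sketch below is a hypothetical illustration of how one decision tree could serve both audiences: a top-down query returns only the root-level conclusion for the manager, while a bottom-up query walks the full trail of supporting detail for the engineer. The solar-panel readings and node names are invented for illustration.

```python
# Illustrative decision-tree audit trail: the same tree answers a manager's
# top-down question with a one-line status, or walks an engineer bottom-up
# through every supporting step. Not a vendor implementation.

class DecisionNode:
    def __init__(self, summary, detail, children=None):
        self.summary = summary          # short, top-of-tree statement
        self.detail = detail            # fine-grained explanation
        self.children = children or []  # deeper supporting reasoning

    def top_down(self):
        """Minimum information: just the conclusion at the root."""
        return self.summary

    def bottom_up(self, depth=0):
        """Full trail: every node's detail, down to the deepest reasoning."""
        lines = ["  " * depth + self.detail]
        for child in self.children:
            lines.extend(child.bottom_up(depth + 1))
        return lines

panel_decision = DecisionNode(
    "Panel 12 is underperforming; expected output is about 60% of rated capacity.",
    "Conclusion: predicted output 1.2 kWh vs. 2.0 kWh rated.",
    [
        DecisionNode("Irradiance normal",
                     "Irradiance sensor reads normal, so weather is not the cause."),
        DecisionNode("Voltage low",
                     "String voltage is 18% below neighboring panels."),
        DecisionNode("Likely diode fault",
                     "Voltage drop pattern matches a degraded bypass diode."),
    ],
)

print(panel_decision.top_down())              # what the manager sees
print("\n".join(panel_decision.bottom_up()))  # what the engineer queries
```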

If AI is to live up to its promise of transforming society, human users must be comfortable with the idea of trusting machine-generated decisions. Cognitive, explainable AI makes this possible. It breaks down organizational silos and bridges the gap between IT personnel and non-technical executive decision-makers, enabling optimal effectiveness in governance, compliance, risk management, and quality assurance, while improving accountability.