In Arthur Conan Doyle's short story Silver Blaze, Sherlock Holmes uncovers the theft of a famous racehorse by quickly grasping the significance of a single detail: no one heard the dog bark on the night of the theft. Since the dog was kept in the stables, the natural inference was that the thief must have been someone the dog knew.
This type of reasoning, which seeks the simplest and most likely explanation for a set of observations, is known as abductive reasoning, and it is the type of reasoning humans use most often. In fact, it comes so naturally to us, the conclusions so immediate and so often correct, that it's mistaken for intuition. Abductive reasoning holds up even when the available details are incomplete or misleading, making it ideal for real-world situations. It's the type of reasoning AI systems must be imbued with before trusted autonomy can be achieved: the point where we trust AI systems to perform complex tasks that require flexibility and quick decision making, even with incomplete or inaccurate data input, and not harm us in the process.
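To make the idea concrete, here is a minimal sketch of abductive inference in Python. The scenario, hypothesis names and scoring scheme (explain as many observations as possible, break ties by prior plausibility) are illustrative assumptions of mine, not a description of any particular platform's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    prior: float        # plausibility before seeing any evidence
    explains: set[str]  # observations this hypothesis accounts for

def best_explanation(observations: set[str], hypotheses: list[Hypothesis]) -> Hypothesis:
    """Pick the hypothesis that explains the most observations, breaking ties by prior."""
    return max(hypotheses, key=lambda h: (len(observations & h.explains), h.prior))

# The Silver Blaze situation, loosely encoded.
observed = {"horse missing", "dog did not bark"}
candidates = [
    Hypothesis("unknown intruder", prior=0.6, explains={"horse missing"}),
    Hypothesis("someone the dog knew", prior=0.3, explains={"horse missing", "dog did not bark"}),
]

print(best_explanation(observed, candidates).name)  # -> "someone the dog knew"
```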
When we hear about AI, it's often accompanied by words like deep/machine learning and big data. The prevailing weakness here is twofold: there must be enough good data, and there must be enough (heavy and expensive) infrastructure to process all that data. It's one thing to win a game of chess or Go against a world champion, but it's entirely another when you're trying to land a rover on Mars. That's the leap between an AI system that's memorized a game with a finite set of rules and an AI system that is trusted to make quick decisions when the field of possibilities is endless and impossible to enumerate.
It's the leap that some AI companies are attempting to make. Already, the more cutting-edge AI researchers and companies are producing bio-inspired AI platforms with human-like reasoning and problem-solving abilities. Researchers like those in the Jet Propulsion Laboratory's space exploration program (some of my company's AI platform is based on research and technology from NASA/JPL) have been developing this type of AI platform for a while now. In space, where an additional ounce can determine viability and cost millions in R&D, fast, light and agile systems must sense, diagnose, predict and respond in situations with countless unknown factors and many possible outcomes.
AI technology that combines inductive, deductive and abductive capabilities makes this type of human-like reasoning possible. It allows a platform to rapidly analyze a complex situation and come up with a solution much as a human would, even in the presence of missing, misleading or distracting information. What sets these AI platforms apart is that they can serve as a guardian angel of sorts, working in tandem with traditional AI systems to add an additional layer of supervision. An AI platform can be programmed to learn even abstract rules and concepts, such as morality and ethics, and adhere to these rules no matter the scenario presented.
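As a rough illustration of that supervisory layer, the sketch below shows a hypothetical setup in which a conventional planner proposes actions and a separate guardian vets each one against hard rules before it is executed. The rule names, action format and thresholds are invented for illustration and are not drawn from any real system.

```python
from typing import Callable

Action = dict                    # e.g. {"type": "change_trajectory", "risk_to_humans": 0.4}
Rule = Callable[[Action], bool]  # returns True if the action is acceptable

def never_endanger_humans(action: Action) -> bool:
    return action.get("risk_to_humans", 0.0) < 0.01

def stay_within_mission_bounds(action: Action) -> bool:
    return action.get("within_mission_bounds", True)

class Guardian:
    """Vets every proposed action against hard rules, regardless of the scenario."""
    def __init__(self, rules: list[Rule]):
        self.rules = rules

    def approve(self, action: Action) -> bool:
        return all(rule(action) for rule in self.rules)

guardian = Guardian([never_endanger_humans, stay_within_mission_bounds])

proposed = {"type": "change_trajectory", "risk_to_humans": 0.4}
if not guardian.approve(proposed):
    print("Action rejected: escalate to a human operator.")
```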
This function becomes more and more important as AI systems invariably take on more tasks in our society, often with minimal supervision. Luminaries, including Elon Musk and Stephen Hawking, have expressed concern about the possible risk to humans should AI devices grow too far ahead of the users who control them. Facebook recently had to temporarily shut down one of its AI labs when chatbots began communicating in a language unintelligible to researchers.
Additionally, AI systems are at risk of hacking and of corrupt or incorrect data affecting their proper functioning. Take, for example, the Germanwings pilot who locked his co-pilot out of the cockpit and used the autopilot system to crash into a mountain. An AI system with trusted autonomy needs to be sophisticated enough to potentially override such a command, even if the correct override codes are entered. Much like a human flight control officer would immediately flag a request to deviate from the set flight trajectory as suspicious, even without knowing what was going on onboard, a guardian AI should be able to analyze the situation, play out various scenarios and determine whether such a command is suspicious, all within seconds. And if it is suspicious, the AI could refuse to change the autopilot trajectory, hopefully avoiding a potential tragedy.
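Sketched in code, that kind of check might look like the following. This is a toy model under stated assumptions: the candidate explanations for a trajectory change, the clearance figures and the notion of a declared emergency are hypothetical simplifications, not how any certified avionics or guardian system actually works.

```python
def evaluate_trajectory_change(requested_altitude_ft: float,
                               terrain_elevation_ft: float,
                               declared_emergency: bool) -> bool:
    """Return True to accept the change, False to refuse it as suspicious."""
    # Scenario 1: a declared emergency descent that still clears the terrain.
    if declared_emergency and requested_altitude_ft > terrain_elevation_ft + 1000:
        return True

    # Scenario 2: a routine adjustment that keeps a comfortable safety margin.
    if requested_altitude_ft > terrain_elevation_ft + 2000:
        return True

    # No benign scenario explains the command, so treat it as suspicious and refuse.
    return False

# A descent command below the surrounding terrain, with no declared emergency.
accept = evaluate_trajectory_change(requested_altitude_ft=4000,
                                    terrain_elevation_ft=6000,
                                    declared_emergency=False)
print("accept" if accept else "refuse and alert ground control")
```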
AI with trusted autonomy parallels Isaac Asimov's classic Three Laws of Robotics, but is flexible enough to be modified to fit whatever industry and application it serves. While Terminator-level AI threats are still very much in the realm of science fiction, it's comforting to know that the creators of the next generation of advanced AI are keeping in mind that human-like reasoning should be coupled with a directive to protect human life and comfort.