12 May 2020
Glendale, CA
Alyssa Cotrina, Beyond Limits


Technology can push the boundaries of what reasoning the human mind will or won’t accept. A “system” has no inherent morals beyond those its human engineers synthesize and program into it. Artificial intelligence technologies are no different, and the stakes are arguably higher when the discussion turns to moral competency within the minds of machines. When the main purpose of a burgeoning technology is to emulate human reasoning, those innovating such solutions shouldn’t take lightly the responsibility of deciding which points of view will be represented.


Human nature (and nurture) dictates that we grow up forming opinions, along with opinion’s more dangerous cousin, bias. However, most humans have also evolved to maintain a personal set of ethics that goes along with those opinions and biases. Machines have not achieved such a feat. The hypothetical scenario in which they do is termed Artificial General Intelligence, also known as AGI or Strong AI, the concept behind the most detrimental AI cliché you often see in sci-fi movies like The Terminator, The Matrix, and Ex Machina.


AGI refers to AI that has evolved to the point where its intelligence is equal to or greater than a human’s. Fortunately, experts are not generally concerned about a scenario in which our creations suddenly decide the time has come to destroy their human counterparts.


What Are the Concerns?

More pressing issues emerge when the teams developing artificial intelligence solutions are homogenous and insular. Humanity encompasses a great many perspectives and backgrounds from all over the planet, with varying experiences, upbringings, and knowledge bases. It’s vital for teams working on AI solutions to represent that reality, both amongst themselves and within the technology itself. Artificial intelligence must be as diverse and inclusive as the humans innovating it claim to be, so that it represents all of us as it evolves, not just the biases of a select few with exclusive access to an elite education. Diverse collaboration amongst developers of AI (and of all technology) is imperative.


Other concerns revolve around how we choose to evolve artificial intelligence and how we decide to use it. The hope is that AI will be used for beneficial purposes: to help solve the big issues plaguing human life on this planet. We cannot guarantee that is what humans will do with the technology. However, moves have been made in the industry to keep us on the right track and ensure that AI does more to heal than to harm. Numerous agencies, committees, coalitions, and expert groups have been formed to do exactly that. One example is the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, which in 2016 published Ethically Aligned Design, a document that outlines guiding standards for ethically developing and administering AI solutions.


“AI will most certainly make its way into the majority of industries,” says Meghan Sharp, Beyond Limits COO. “But what we really want to ensure is that AI is doing good for the planet. This technology is a mighty tool; let’s wield it to help solve our most important problems. Let’s use this incredibly powerful tool to address sustainability and climate change; let’s use it to more effectively determine how exactly we’re consuming our most vital resources. AI for good.”


Societal Impact & Human Lives at Stake

Entrusting human lives (and livelihoods) to artificial intelligence carries critical stakes in situations like these:

+  Healthcare solutions must take into account patients’ varying predispositions, backgrounds, and genetics.

+  Autonomous/self-driving car algorithms must be built so that the safety of the human lives inside is at the forefront of every critical maneuver.

+  Financial/lender qualifying data solutions must look outside the scope of “typical” or “acceptable” demographics when determining applicant approvals (see the fairness-audit sketch after this list).

+  Educational opportunity/admission acceptance tools must not default to overgeneralized statistical data when considering student submissions from across the globe.

+  Judicial court case/sentencing verdict systems must look past skewed statistical data that can unfairly consign minorities to serving longer sentences.

+  Surveillance applications must not internalize gathered data too broadly or apply too narrow a scope when observing subjects. They also must not take liberties with how much information is collected or infringe on subjects’ privacy.
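
To make the lending example above concrete, here is a minimal sketch of the kind of audit such a requirement implies: comparing approval rates across demographic groups, a simple demographic-parity check. The approve() model and the applicant records below are hypothetical, invented purely for illustration; no real qualification system is shown here.

```python
# A minimal, hypothetical fairness audit: compare approval rates across
# demographic groups (demographic parity). The approve() model and the
# applicant records are invented for illustration only.

from collections import defaultdict

def approve(applicant):
    # Stand-in for a trained lending model's yes/no decision.
    return applicant["income"] > 40_000 and applicant["credit_score"] > 650

def approval_rates_by_group(applicants, group_key="demographic"):
    """Return the approval rate for each demographic group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for applicant in applicants:
        group = applicant[group_key]
        totals[group] += 1
        approved[group] += int(approve(applicant))
    return {group: approved[group] / totals[group] for group in totals}

applicants = [
    {"demographic": "group_a", "income": 52_000, "credit_score": 700},
    {"demographic": "group_a", "income": 38_000, "credit_score": 710},
    {"demographic": "group_b", "income": 61_000, "credit_score": 680},
    {"demographic": "group_b", "income": 45_000, "credit_score": 640},
]

rates = approval_rates_by_group(applicants)
gap = max(rates.values()) - min(rates.values())
print(rates)                            # {'group_a': 0.5, 'group_b': 0.5}
print(f"approval-rate gap: {gap:.2f}")  # a large gap flags the model for review
```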


“I believe that AI is going to be to my children what the internet was for me: completely disruptive, changing our entire world,” says Sharp. “AI will fundamentally alter the way industries operate today, as well as how we communicate and engage with one another and the world around us. It’s so critical to have the brightest and most diverse minds working on that problem together.”


The Transparency of XAI Can Help

Trust, accessibility, and accountability are key. There is an urgent need for artificial intelligence that provides an inside look into its own reasoning process. Solutions built solely on machine learning and deep learning generally provide no insight into how they arrive at their recommendations, making them black boxes. Beyond Limits Cognitive AI solutions break through the opaque walls of that box with a pioneering Explainable AI (XAI) glass box.


In this way, humans are always in the loop. People retain control over the system, with full visibility and insight into the decision-making process behind its recommendations via detailed, interactive audit trails. Human trust in AI grows when users can access every step of the process, hold the system accountable, and intervene immediately if its “moral compass” starts slipping onto rocky terrain.
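
Beyond Limits’ production engine is proprietary, but a minimal sketch can illustrate the glass-box idea: every rule the system fires is recorded in a human-readable audit trail that ships with the recommendation. The sensor fields, thresholds, and rules below are hypothetical, assumed only for illustration.

```python
# A minimal sketch of a glass-box recommendation carrying its own audit
# trail. The sensor fields, thresholds, and rules are hypothetical
# illustrations of the concept, not Beyond Limits' actual engine.

def recommend(reading):
    trail = []  # human-readable record of every reasoning step

    if reading["pressure_psi"] > 900:
        trail.append(f"pressure {reading['pressure_psi']} psi exceeds the 900 psi limit")
        decision = "shut down pump"
    elif reading["vibration_mm_s"] > 7.1:
        trail.append(f"vibration {reading['vibration_mm_s']} mm/s exceeds the 7.1 mm/s threshold")
        decision = "schedule maintenance"
    else:
        trail.append("all readings within normal operating bounds")
        decision = "continue operation"

    # The trail ships with the decision, so an operator can inspect the
    # reasoning, override the recommendation, or correct a faulty rule.
    return {"decision": decision, "audit_trail": trail}

result = recommend({"pressure_psi": 940, "vibration_mm_s": 3.2})
print(result["decision"])         # -> shut down pump
for step in result["audit_trail"]:
    print(" -", step)             # -> why the system decided that
```

Because each step is recorded as it happens, a human can audit or overrule the outcome rather than taking a black-box answer on faith.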


AI has no innate cognizance of “ethical conduct” beyond what humans can attempt to codify into the technology. This is a problematic notion at best, because mankind finds it difficult to agree on almost anything, including the five ethical principles of AI: transparency, justice and fairness, non-maleficence, responsibility, and privacy. When it comes down to conscientious policy creation, it can be a challenge to get people to align on the exact definition of even those basic ideals. So, instead of fearing the hypothetical repercussions of artificial intelligence “taking over,” it would be more beneficial to view AI as an opportunity for humans of varying backgrounds to finally come together, find common ground, and discuss how we want humanity’s greatest creation to represent us.