“Any sufficiently advanced technology is indistinguishable from magic” is one of three adages by the British science fiction writer Arthur C. Clarke. He wrote about the future, future societies, and how they might evolve through technological advancement. Artificial Intelligence (AI) is a technological advancement that is already reshaping multiple aspects of society, both personal and public. While we are advanced enough not to call AI magic, we often don’t truly understand how it arrives at a specific conclusion.
If you’re a fan of science fiction, you will likely be familiar with HAL 9000 from the epic film “2001: A Space Odyssey,” in which Clarke explores the challenges that arise when man builds machines whose inner workings he doesn’t fully understand. In the story, HAL, an intelligent computer whose role is to maintain a spaceship while most of the crew is in suspended animation, malfunctions due to a conflict in his orders. A crew member starts to distrust HAL and eventually manages to turn him off. Although AI does not yet control spaceships traveling to other planets, it often operates as a black box: the AI is fed an input and produces an output, but what happens inside the black box cannot be seen. Black-box machine learning (ML) is a very common approach to decision making across multiple domains. However, the lack of proper explanation of black-box ML models is already causing problems in healthcare, criminal justice, recruitment, and other fields. One common example of black-box ML is deep learning.
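To make the black-box idea concrete, here is a minimal sketch using scikit-learn and synthetic data (everything in it is illustrative): a small neural network returns an answer and a confidence score, but nothing about how it got there.

```python
# A minimal sketch of the black-box pattern: a deep model maps input to output,
# but offers no human-readable reason for its answer. Synthetic data for illustration.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, y)

sample = X[:1]
print(model.predict(sample))        # an answer, e.g. [1]
print(model.predict_proba(sample))  # a confidence score, e.g. [[0.03 0.97]]
# What we do not get: which inputs drove the decision, or why.
```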
AI commonly uses ML models to make decisions. Still, an ML model is always a simplified representation of reality and does not model reality correctly all the time; HAL, for example, couldn’t handle conflicting instructions. A good model interprets reality correctly “most of the time,” so you can imagine what a bad model might produce. As part of ML model evaluation, we want to know how often the model correctly identifies a specific occurrence, and that percentage determines whether the model is good or bad. Even excellent models may not identify everything accurately, which leads to these questions: What if your problem falls into the set of incorrect identifications? How will you know, and if you don’t know, how will you be able to override the AI’s decision?
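As a rough illustration of that evaluation question, here is a minimal sketch, again using scikit-learn and synthetic data: the headline accuracy looks reassuring, yet a slice of cases is still misidentified, and the score alone cannot tell you whether your case is one of them.

```python
# A minimal sketch: accuracy reports how often the model is right overall,
# but says nothing about whether a specific case falls into the misclassified remainder.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
predictions = model.predict(X_test)

print(f"Accuracy: {accuracy_score(y_test, predictions):.0%}")  # "good" by the usual yardstick
misidentified = np.flatnonzero(predictions != y_test)
print(f"{len(misidentified)} of {len(y_test)} cases were misidentified")
# If your problem is one of those cases, nothing in the score tells you so.
```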
Explainable AI (XAI) proposes that any ML model be explainable and offer an interpretation of its decision making. XAI also promotes the use of simpler models that are inherently interpretable. We realize that keeping the human in the loop is important when building ML/AI applications, so we decided to build a CoPilot around the concept of XAI for our cloud-based network management platform, ExtremeCloud IQ.
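To show what an inherently interpretable model looks like, here is a minimal sketch using a shallow decision tree; the feature names are illustrative placeholders, not taken from any product. Unlike the deep network above, its entire decision process can be printed as rules a person can read and audit.

```python
# A minimal sketch of the "inherently interpretable" idea XAI favors: a shallow
# decision tree whose reasoning can be printed and read, unlike a deep network.
# Feature names are hypothetical placeholders for illustration only.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
feature_names = ["signal_strength", "retry_rate", "channel_utilization", "client_count"]

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The model's entire decision process, as readable rules a human can audit:
print(export_text(model, feature_names=feature_names))
```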
In our space, this means that any ML/AI that helps with network troubleshooting or makes recommendations regarding your enterprise network should offer clear evidence supporting the decision. As the domain expert, the network administrator can override those decisions in any mission-critical environment. When ML/AI decisions are auditable, it removes ambiguity around accountability and helps build trust between the network administrator and the ML/AI.
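Here is a purely hypothetical sketch (not the CoPilot API) of how an auditable recommendation might be structured: the suggestion carries its supporting evidence and a confidence score, and the network administrator can record an override, keeping the human in the loop.

```python
# A purely hypothetical sketch of an auditable, overridable recommendation:
# the suggestion travels with its readable evidence, and the administrator
# has the final say. All names and values are illustrative.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str                       # what the ML/AI suggests
    evidence: list[str]               # the readable facts the suggestion rests on
    confidence: float                 # model confidence, 0.0 to 1.0
    overridden_by: str | None = None  # set when a network admin rejects it

    def override(self, admin: str) -> None:
        """Record that a domain expert rejected this recommendation (audit trail)."""
        self.overridden_by = admin

rec = Recommendation(
    action="Move AP-12 from channel 6 to channel 11",
    evidence=["Co-channel interference above 40% for 3 hours",
              "Neighboring AP-07 also on channel 6"],
    confidence=0.87,
)
rec.override(admin="jane.doe")  # mission-critical environment: the human decides
```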
Every enterprise network needs a copilot. ExtremeCloud™ IQ CoPilot provides explainable ML/AI that builds trust through readable output showing how insights were derived, enabling you to automate operations, enhance security, and enrich user experiences with confidence. To learn more, visit https://extremeengldev.wpengine.com/latam/copilot/.