Consider a production line in which workers run heavy, potentially dangerous equipment to manufacture steel tubing. Company executives hire a team of machine learning (ML) practitioners to develop an artificial intelligence (AI) model that can assist the frontline workers in making safe decisions, with the hope that this model will revolutionize their business by improving worker efficiency and safety. After an expensive development process, the executives unveil their complex, high-accuracy model to the production line, expecting to see their investment pay off. Instead, they see extremely limited adoption by their workers. What went wrong?
This hypothetical example, adapted from a real-world case study in McKinsey’s The State of AI in 2020, demonstrates the crucial role that explainability plays in the world of AI. While the model in the example may have been safe and accurate, the target users did not trust the AI system because they did not know how it made decisions. End-users deserve to understand the underlying decision-making processes of the systems they are expected to employ, especially in high-stakes situations. Perhaps unsurprisingly, McKinsey found that improving the explainability of systems led to increased technology adoption.
Explainable artificial intelligence (XAI) is a powerful tool for answering critical how and why questions about AI systems, and it can be used to address rising ethical and legal concerns. As a result, AI researchers have identified XAI as a necessary feature of trustworthy AI, and explainability has experienced a recent surge in attention. However, despite the growing interest in XAI research and the demand for explainability across disparate domains, XAI still suffers from a number of limitations. This blog post presents an introduction to the current state of XAI, including the strengths and weaknesses of this practice.
The Basics of Explainable AI
Despite the prevalence of explainability research, exact definitions surrounding explainable AI are not yet consolidated. For the purposes of this blog post, explainable AI refers to the
set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms.
This definition captures the broad range of explanation types and audiences, and acknowledges that explainability techniques can be applied to an existing system rather than always being built in.
Leaders in academia, industry, and government have been studying the benefits of explainability and developing algorithms for a wide range of contexts. In the healthcare domain, for instance, researchers have identified explainability as a requirement for AI clinical decision support systems because the ability to interpret system outputs facilitates shared decision-making between medical professionals and patients and provides much-needed system transparency. In finance, explanations of AI systems are used to meet regulatory requirements and equip analysts with the information needed to audit high-risk decisions.
Explanations can vary greatly in form based on context and intent. Figure 1 below shows both human-language and heat-map explanations of model actions. The ML model in this example detects hip fractures in frontal pelvic x-rays and is designed for use by doctors. The Original report presents a “ground-truth” report from a doctor based on the x-ray on the far left. The Generated report consists of an explanation of the model’s diagnosis and a heat-map showing the regions of the x-ray that influenced that diagnosis, giving doctors an explanation they can easily understand and vet.
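To make the heat-map idea concrete, here is a minimal sketch of occlusion-based saliency, one common way to produce explanations like the heat-map in Figure 1: mask one region of the input at a time and record how much the model's confidence drops. The `model` object and its `predict_proba()` interface below are hypothetical stand-ins for illustration, not the actual system from the figure.

```python
# Minimal occlusion-saliency sketch (assumes a hypothetical `model` with a
# batch-oriented predict_proba() interface and a NumPy image array).
import numpy as np

def occlusion_heatmap(model, image, target_class, patch=16, stride=8):
    """Score each image region by how much masking it lowers model confidence."""
    h, w = image.shape[:2]
    baseline = model.predict_proba(image[np.newaxis])[0, target_class]
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heatmap = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = image.mean()  # mask the region
            prob = model.predict_proba(occluded[np.newaxis])[0, target_class]
            heatmap[i, j] = baseline - prob  # large drop => important region
    return heatmap
```

Regions whose occlusion causes the largest drop in predicted probability are the ones the model relied on most, which is exactly the kind of evidence a radiologist can vet against the original x-ray.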
Techniques for creating explainable AI have been developed and applied across all steps of the ML lifecycle. Methods exist for analyzing the data used to develop models (pre-modeling), incorporating interpretability into the architecture of a system (explainable modeling), and producing post-hoc explanations of system behavior (post-modeling).
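As one concrete illustration of a post-modeling technique, the sketch below applies permutation feature importance, a model-agnostic, post-hoc method available in scikit-learn: it explains a fitted model by measuring how much shuffling each feature degrades held-out performance. The dataset and model here are arbitrary choices made for illustration.

```python
# Post-hoc explanation via permutation feature importance (scikit-learn).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in accuracy.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five features the model depends on most.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.4f}")
```

Because the method needs only predictions and labels, it works for any fitted model, which is what makes it a post-hoc technique rather than one baked into the model's architecture.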
Conclusion
AI is fast becoming a reality in today’s world, with innovation accelerating across the field. Explainable AI is an extension of this trend and will play a vital role in how we use AI in the future. We are still at a nascent stage of understanding explainable AI, and we are sure to witness many more applications of this technology in diverse fields in the coming years.