Researching interpretability is important for ethical, reputational and legal reasons, where it’s necessary to figure out how an automated system made a particular decision. But the importance of interpretability depends on the specific AI application. Interpretability matters in the medical field, less so for captioning images.
Zoubin Ghahramani, Chief Scientist at Uber
Advances in Artificial Intelligence (AI) are encouraging organizations to use AI technologies for deep insights, predictive analysis, and autonomous systems. One of the key concerns around the adoption of AI-based technologies is the lack of a deeper understanding of how the models make predictions or decisions, as many of them are considered ‘black boxes’. Organizations are therefore shifting their focus to Explainable AI (XAI), a suite of machine learning techniques that help make complex AI models understandable in a systematic manner, without affecting the performance of the original models. There are multiple reasons for the interest in Explainable AI: minimizing reputational and financial risks arising from adverse decisions; complying with regulatory requirements through verifiability and transparency; and confidently deploying AI to reduce human dependence in sensitive areas. While not all applications require ‘explainability’ of AI models, many business-critical applications need it to meet pertinent regulatory requirements and to build trust with customers. For example, using AI techniques in mortgage lending or hiring decisions requires some degree of transparency to ensure fairness.
Many organizations are trying to create XAI solutions. For example, the US Defense Advanced Research Projects Agency (DARPA) has set up an Explainable AI Program that ‘aims to create a suite of machine learning techniques’ that:
- Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and
- Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.
DARPA defines Explainable AI (XAI) as
systems with the ability to explain their rationale for decisions, characterize the strengths and weaknesses of their decision-making process, and convey an understanding of how they will behave in the future.
IBM has announced an open-source toolkit that may help interpret and explain machine learning models, which it calls ‘AI Explainability 360’. Researchers at the University of Washington have developed a technique called LIME that helps explain individual predictions in an “interpretable and faithful manner.” Our industry peers such as JPMorgan Chase, Lazard, and Wells Fargo are exploring and building XAI capabilities. According to The Wall Street Journal, AI experts at Goldman Sachs Group Inc. and Morgan Stanley say that though AI could be useful in fraud detection and algorithmic trading, there are still many limitations to the technology as it exists today.
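To make the idea of a post-hoc explainer such as LIME more concrete, the sketch below shows how it is typically applied to a tabular classifier. The dataset, model, and parameter choices here are illustrative assumptions, not something prescribed by the projects mentioned above.

```python
# Minimal sketch: explaining one prediction of a "black box" model with LIME.
# Assumes the open-source `lime` and scikit-learn packages; dataset and
# hyperparameters are placeholders chosen only for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Train an opaque model whose individual predictions we want to explain.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# LIME perturbs the chosen instance, queries the model on the perturbations,
# and fits a simple local surrogate whose weights approximate the black box
# in the neighborhood of that instance.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)

# Per-feature contributions for this single prediction, e.g.
# [("worst radius <= 13.11", 0.21), ...]
print(explanation.as_list())
```

The output is a short list of human-readable feature conditions and their weights for one prediction, which is the kind of local, model-agnostic explanation these toolkits aim to provide; toolkits such as AI Explainability 360 bundle several complementary methods of this sort.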
DARPA says,
“new machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future. The strategy for achieving that goal is to develop new or modified machine-learning techniques that will produce more explainable models. These models will be combined with state-of-the-art human-computer interface techniques capable of translating models into understandable and useful explanation dialogues for the end-user. Our strategy is to pursue a variety of techniques in order to generate a portfolio of methods that will provide future developers with a range of design options covering the performance-versus-explainability trade space.”