Artificial intelligence (AI) has become the “hot topic” of computer science in recent years. Created as a way to mimic human intelligence, the concept has blossomed into a massive interdisciplinary field with many different methods and uses. Today, AI models are commonly built with frameworks such as PyTorch or TensorFlow, combining mathematical and machine learning techniques to make predictions from data. This may sound broad, but AI has been used in everything from detecting bias in social media posts to monitoring the vital signs of babies in utero [1]. The field also has a great societal impact: the global AI economy is currently worth 62.35 billion USD and is projected to grow at a rate of 40% each year [2].
One major complaint about AI is its “black box” nature. This refers to an artificial intelligence system whose internal operations are invisible to the user. While this design enables highly complex operations, it makes it impossible for users or data scientists to interpret the process the system undergoes. The problem most commonly arises in neural network models, where an input is passed through many hidden layers before an output is produced. These hidden layers perform many mathematical operations to produce the output, but studying their structure gives little insight into the structure of the function being approximated. In most cases, programmers are not sure how the models generated the answers, even if they are sure of the answers themselves [3]. However, with growing privacy concerns and laws that require transparency in how a user’s data is being used, these black box procedures no longer suffice [4]. This is where explainable AI comes into the picture.
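To make this concrete, here is a minimal sketch, assuming PyTorch (one of the frameworks mentioned above); the architecture, dimensions, and input are hypothetical. The point is that the prediction is easy to obtain, while the learned parameters remain opaque matrices of numbers that say little about the function being approximated.

```python
# A minimal, hypothetical sketch of the "black box" problem in PyTorch.
import torch
import torch.nn as nn

# A small feed-forward network with two hidden layers (made-up sizes).
model = nn.Sequential(
    nn.Linear(4, 16),   # input -> hidden layer 1
    nn.ReLU(),
    nn.Linear(16, 16),  # hidden layer 1 -> hidden layer 2
    nn.ReLU(),
    nn.Linear(16, 1),   # hidden layer 2 -> output
)

x = torch.randn(1, 4)   # one hypothetical input with 4 features
prediction = model(x)   # the output is easy to obtain...
print(prediction)

# ...but the parameters are just opaque arrays of numbers; inspecting
# their shapes (or values) does not explain why the model predicted this.
for name, param in model.named_parameters():
    print(name, tuple(param.shape))
```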
Explainable artificial intelligence (XAI) is a set of processes and methods that allows people to understand the results produced by machine learning algorithms [5]. XAI helps characterize model accuracy, fairness, transparency, and outcomes in AI-powered decision-making. Many online services claim to provide platforms for creating XAI, with tech giants such as Google and IBM getting involved in the discussion. The goal of these companies is to make the “black box” problem of traditional AI more transparent and give users the power to understand the algorithm and its results. As a consequence, this can help reduce the bias and discrimination often seen in current AI.
So how does XAI operate exactly? What does it mean to be “explainable?” Essentially, the model should not only give predictions but also an explanation of why those predictions were made [5]. Several ideas have been proposed on how best to tackle this issue. One method is to include visualizations of the model that depict its discriminative features and shed light on the data being fed in. Another is to understand the model by visualizing its neural network layers, the source of ambiguity in most AI models. A third is to focus on user psychology and behavior, pairing behavior models with statistical learning to generate appropriate explanations along the way [5]. These methods can also be combined and provided simultaneously (a simple illustration of the first method appears below), but there is widespread agreement that post-prediction justification is not enough for explainability. The goals of XAI should be embedded into the AI model at the core design stage rather than attached at the end.
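As one concrete illustration of the first method (depicting which features a model relies on), the following is a minimal sketch using permutation importance on a made-up dataset; the data, model, and feature count are all hypothetical, and this is just one of many attribution techniques rather than the approach of any specific platform discussed here.

```python
# A minimal, hypothetical sketch of permutation importance: shuffle one
# feature at a time and measure how much the model's accuracy drops.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                   # 3 hypothetical features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # feature 2 is irrelevant

model = LogisticRegression().fit(X, y)
baseline = accuracy_score(y, model.predict(X))

# A large accuracy drop after shuffling a feature means the model
# relies heavily on it; a near-zero drop means it is mostly ignored.
for i in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, i] = rng.permutation(X_perm[:, i])
    drop = baseline - accuracy_score(y, model.predict(X_perm))
    print(f"feature {i}: importance ~ {drop:.3f}")
```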
As previously mentioned, some services give people the proper toolkit to create explainable AI models. One platform that offers this service is Arthur AI. It gives users the ability to track their models in a single pane, explore models and understand their decisions, and quickly diagnose and fix problems. Such a platform allows people with limited machine learning experience to create and interpret models in an unbiased way. Arthur AI allows users to import any kind of machine learning model and provides an interface that visualizes the data being processed and the solutions the model produces. The explainability comes in where users can be alerted to problems as well as shown why their models are making particular decisions [6]. This is just one of many such services, with Google starting its own explainability service called “XAI,” which similarly addresses these concerns over bias and comprehension in AI models.
Other relevant figures in this conversation include the data analytics company FICO, Google, and UC Berkeley, who are partnering to empower people to seek solutions in XAI by creating the Explainable Machine Learning Challenge. FICO, a San Jose-based company known for creating a credit scoring methodology (the “FICO” score), is a leader in machine learning technology through its Decision Management Suite, which many companies use to automate decision-making processes crucial for credit risk evaluation and for processing extremely large amounts of data [7]. With this challenge, FICO hopes to expand into the research area of algorithmic explainability: given a real-world financial dataset, participants are tasked with creating accurate and explainable machine learning models that describe the functionality of the trained model [7]. Such challenges, and the widespread involvement of data scientists, college students, and programmers, indicate that this area of research will keep growing in the years to come.
Credit scoring and loan-related companies like FICO are the main leaders in the movement toward explainable AI. Suppose you applied for a loan and were rejected; an XAI service would be able to provide a justification such as “your loan application was rejected due to lack of sufficient income proof” [5]. Behind the scenes, the ML model works with weighted mathematical models and statistical learning to make that statement without bias or judgment, especially where income or economic status is concerned (a simplified sketch of this reason-code logic appears below). Decision processes like these are crucial to the financial world, and thus financial companies are pioneering this field [5].
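The following is a minimal sketch, under heavy assumptions, of how such a justification could be derived from a simple linear scoring model: each feature contributes its weight times its normalized deviation from the population average, and the most negative contribution becomes the stated reason. The features, weights, and averages here are all hypothetical, not FICO’s actual methodology.

```python
# A hypothetical "reason code" computation for a linear credit model.
import numpy as np

features = ["income", "credit_history_len", "debt_ratio"]
weights = np.array([0.8, 0.5, -0.6])     # hypothetical trained weights
means = np.array([55_000, 10.0, 0.35])   # hypothetical population averages
scales = np.array([20_000, 5.0, 0.15])   # hypothetical normalization scales

applicant = np.array([28_000, 12.0, 0.50])  # a made-up rejected applicant

# Each feature's contribution to the score: weight * normalized deviation.
contributions = weights * (applicant - means) / scales

# The most negative contribution is reported as the main rejection reason.
reason = features[int(np.argmin(contributions))]
print(dict(zip(features, contributions.round(2))))
print(f"Main factor lowering the score: {reason}")
```

Run on this made-up applicant, the largest negative contribution comes from income, yielding exactly the kind of “insufficient income” justification described above.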
Artificial intelligence has surged in popularity in the past few decades and will most likely continue to do so. Although it is equipped with limitless possibilities for social good, traditional AI’s problems with security and transparency make XAI an excellent solution to many of these issues. Additionally, XAI empowers people who may not be fluent in machine learning technology to understand AI models and their results. As we move into a new digital age and an era of transformative social justice, explainable AI will continue to grow as companies and individuals utilize it for unbiased, comprehensible models.
Works Cited
[1] “Introductory Guide to Artificial Intelligence,” Towards Data Science, May 22, 2018. [Online]. Available: https://towardsdatascience.com/introductory-guide-to-artificial-intelligence-11fc04cea042
[2] “Artificial Intelligence Market Size, Share & Trends Analysis Report By Solution, By Technology (Deep Learning, Machine Learning, Natural Language Processing, Machine Vision), By End Use, By Region, And Segment Forecasts, 2021 – 2028,” Grand View Research. [Online]. Available: https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market
[3] A. Woodie, “Opening Up Black Boxes with Explainable AI,” Datanami, May 30, 2018. [Online]. Available: https://www.datanami.com/2018/05/30/opening-up-black-boxes-with-explainable-ai/
[4] General Data Protection Regulation, 2016. [Online]. Available: https://gdpr.eu/
[5] T. Sarkar, “Google’s New ‘Explainable AI’ (xAI) Service,” Towards Data Science, Nov. 25, 2018. [Online]. Available: https://towardsdatascience.com/googles-new-explainable-ai-xai-service-83a7bc823773
[6] “Arthur AI.” [Online]. Available: https://trust.arthur.ai/explainable-ai
[7] “Explainable Machine Learning Challenge,” FICO Community. [Online]. Available: https://community.fico.com/s/explainable-machine-learning-challenge