Summary
In this code pattern, learn how to use the AI Explainability 360 (AIX360) Toolkit to demystify the decisions made by a machine learning model, gaining better insight and explainability. This not only helps policymakers and data scientists develop trusted, explainable AI applications, but also improves transparency for everyone. To demonstrate the toolkit, we apply the AIX360 algorithms to the existing fraud detection code pattern.
Description
Imagine a scenario in which you visit a bank to take out a $1M loan. The loan officer uses an AI-powered system that predicts or recommends whether you are eligible for a loan and how large that loan can be. In this example, the AI system recommends that you are not eligible. You might then have a few questions to think about:
- Will you as a customer be satisfied with the service?
- Would you want justification for the decision made by the AI system?
- Should the loan officer double-check the decision made by the AI system, and would you want them to know the underlying mechanism of the AI model?
- Should the bank completely trust and rely on the AI-powered system?
You might agree that it’s not enough to just make predictions. Sometimes, you must have a deep understanding of why the decision was made. There are many reasons why you need to understand the underlying mechanism of the machine learning models. These include:
- Human readability
- Bias reduction
- Justifiability
- Interpretability
- Fostering trust and confidence in AI systems
In this code pattern, we demonstrate how the three explainability algorithms work:
- The Contrastive Explanations Method (CEM) algorithm, which is available in the AI Explainability 360 Toolkit.
- The ProtoDash algorithm from AI Explainability 360 works with an existing predictive model to show how the current customer compares to others who have similar profiles and repayment records, alongside the model's prediction for that customer. This helps evaluate the applicant's risk. Based on the model's prediction and the explanation of how it reached that recommendation, the loan officer can make a more informed decision.
- The Generalized Linear Rule Model (GLRM) algorithm in the AI Explainability 360 Toolkit gives a data scientist an enhanced level of explainability for deciding whether the model can be deployed.
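To give a feel for the prototype idea behind ProtoDash, here is a heavily simplified, stdlib-only Python sketch that greedily picks the past customer profiles most similar to an applicant. The real ProtoDash algorithm in AIX360 additionally learns non-negative importance weights for the selected prototypes by optimizing a set-function objective; the names and data below are hypothetical.

```python
import math

def gaussian_kernel(a, b, sigma=1.0):
    # Gaussian similarity between two numeric feature vectors.
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-d2 / (2 * sigma ** 2))

def select_prototypes(query, candidates, m):
    # Rank every candidate by similarity to the query and keep the top m.
    # ProtoDash proper also fits non-negative prototype weights; this
    # sketch keeps only the selection idea.
    scored = [(gaussian_kernel(query, c), i) for i, c in enumerate(candidates)]
    scored.sort(reverse=True)
    return [i for _, i in scored[:m]]

# Toy loan profiles: (income in $10k, years of repayment history) -- made up.
applicant = (5.0, 3.0)
past_customers = [(5.2, 2.9), (1.0, 0.5), (4.9, 3.1), (9.0, 10.0)]
print(select_prototypes(applicant, past_customers, 2))  # -> [2, 0]
```

A loan officer could read the output as "these two past customers are the most comparable profiles," then inspect their repayment outcomes next to the model's recommendation.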
Flow
- Log in to IBM Watson® Studio powered by Spark, initiate IBM Cloud Object Storage, and create a project.
- Upload the .csv data file to IBM Cloud Object Storage.
- Load the data file into the Watson Studio notebook.
- Install the AI Explainability 360 Toolkit and the Adversarial Robustness Toolbox in the Watson Studio notebook.
- Get visualizations for the explainability and interpretability of the AI model for three different types of users.
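As a minimal illustration of the data-loading step, the following stdlib-only Python sketch parses a small in-memory .csv into rows of dicts, the way the notebook would after the data asset is inserted. The column names and values are hypothetical; in Watson Studio the file would come from IBM Cloud Object Storage.

```python
import csv
import io

# Hypothetical fraud-detection sample standing in for the uploaded .csv file.
sample = io.StringIO(
    "amount,merchant,is_fraud\n"
    "120.50,grocery,0\n"
    "9800.00,electronics,1\n"
)

# csv.DictReader yields one dict per data row, keyed by the header line.
rows = list(csv.DictReader(sample))
print(len(rows))            # -> 2
print(rows[1]["is_fraud"])  # -> 1
```

Note that `DictReader` returns every value as a string, so numeric columns would still need to be cast before model training.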
Instructions
Find the detailed steps in the README file. These steps explain how to:
- Create an account with IBM Cloud.
- Create a new Watson Studio project.
- Add the data.
- Create the notebook.
- Insert the data as a DataFrame.
- Run the notebook.
- Analyze the results.
This code pattern is part of the AI 360 Toolkit: AI models explained use case series, which helps stakeholders and developers fully understand the AI model lifecycle and make informed decisions.
Source: https://developer.ibm.com/patterns/analyzing-fraud-prediction-ai-models/