XAI – eXplainable AI


XAI is a Machine Learning library designed with AI explainability at its core. It contains various tools that enable analysis and evaluation of both data and models. The XAI library is maintained by The Institute for Ethical AI & ML and was developed based on the 8 principles for Responsible Machine Learning.

We see the challenge of explainability as more than just an algorithmic challenge; it requires combining data science best practices with domain-specific knowledge. The XAI library is designed to empower machine learning engineers and relevant domain experts to analyse the end-to-end solution and identify discrepancies that may result in sub-optimal performance relative to the objectives required. More broadly, the XAI library is designed around the 3 steps of explainable machine learning: 1) data analysis, 2) model evaluation, and 3) production monitoring.
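As a rough illustration of that 3-step structure (and not of the XAI library's own API), the Python sketch below walks through a data-analysis check, per-group model evaluation, and a simple production-monitoring drift check. The dataset, column names (age, gender, label), and file paths are assumed for the example.

```python
# Sketch of the 3 steps of explainable ML: data analysis, model evaluation,
# and production monitoring. Dataset and column names are placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("training_data.csv")  # hypothetical training dataset

# 1) Data analysis: inspect class balance across a protected attribute.
print(df.groupby("gender")["label"].value_counts(normalize=True))

# 2) Model evaluation: check performance per subgroup, not just overall.
X = pd.get_dummies(df.drop(columns=["label"]))
y = df["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]  # assumes a binary label
groups = df.loc[X_test.index, "gender"]
for group in groups.unique():
    mask = (groups == group).values
    if y_test[mask].nunique() > 1:  # AUC needs both classes in the subgroup
        print(group, roc_auc_score(y_test[mask], scores[mask]))

# 3) Production monitoring: compare live feature statistics against training.
live = pd.read_csv("recent_predictions.csv")  # hypothetical log of served requests
drift = (live["age"].mean() - df["age"].mean()) / df["age"].std()
print(f"age drift (z-score of means): {drift:.2f}")
```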

Features

AI Explanations: Receive a score explaining how much each factor contributed to the model's predictions, in AutoML Tables, inside your notebook, or via the Vertex AI Prediction API. Read the score explanation documentation.
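The sketch below shows, in hedged form, what requesting those attribution scores from a deployed endpoint might look like with the google-cloud-aiplatform SDK; the project, location, endpoint ID, and input features are placeholders.

```python
# Sketch: request per-feature attribution scores from a Vertex AI endpoint
# that was deployed with explanations enabled. IDs and features are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")
endpoint = aiplatform.Endpoint(
    "projects/my-project/locations/us-central1/endpoints/1234567890"
)

response = endpoint.explain(instances=[{"age": 42, "income": 55000}])
for explanation in response.explanations:
    for attribution in explanation.attributions:
        # feature_attributions maps each input feature to its contribution score
        print(attribution.feature_attributions)
```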

What-If Tool: Investigate model performance for a range of features in your dataset, optimization strategies, and even manipulations to individual datapoint values using the What-If Tool integrated with Vertex AI.
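For a notebook-based illustration (using the generic witwidget integration with a custom predict function rather than the managed Vertex AI integration), the sketch below opens a tiny hand-built dataset in the What-If Tool; all feature names, values, and the predict function are assumed placeholders.

```python
# Sketch: open a small tabular dataset in the What-If Tool inside a notebook.
# The examples and the predict function are placeholders for a real model.
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def make_example(age, income, label):
    # Pack one row of tabular data into a tf.train.Example proto.
    return tf.train.Example(features=tf.train.Features(feature={
        "age": tf.train.Feature(int64_list=tf.train.Int64List(value=[age])),
        "income": tf.train.Feature(float_list=tf.train.FloatList(value=[income])),
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }))

examples = [make_example(34, 48000.0, 0), make_example(52, 91000.0, 1)]

def predict_fn(examples_batch):
    # Placeholder classifier: return [P(class 0), P(class 1)] per example.
    return [[0.3, 0.7] for _ in examples_batch]

config = WitConfigBuilder(examples).set_custom_predict_fn(predict_fn)
WitWidget(config, height=600)
```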

Continuous evaluation: Sample predictions from trained machine learning models deployed to Vertex AI and provide ground truth labels for the prediction inputs using the continuous evaluation capability. Data Labeling Service compares model predictions with the ground truth labels to help you improve model performance.
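The managed service handles the sampling and comparison for you; purely as a conceptual illustration (not the Vertex AI or Data Labeling Service API), the sketch below joins sampled predictions with ground-truth labels and tracks accuracy per evaluation window, with file and column names assumed.

```python
# Conceptual sketch of continuous evaluation: join sampled predictions with
# ground-truth labels and track a metric per time window. Names are placeholders.
import pandas as pd
from sklearn.metrics import accuracy_score

predictions = pd.read_csv("sampled_predictions.csv")   # request_id, predicted_label, timestamp
ground_truth = pd.read_csv("ground_truth_labels.csv")  # request_id, true_label

joined = predictions.merge(ground_truth, on="request_id")
joined["week"] = pd.to_datetime(joined["timestamp"]).dt.to_period("W")

# Per-window accuracy: a sustained drop signals the model may need retraining.
for week, window in joined.groupby("week"):
    print(week, accuracy_score(window["true_label"], window["predicted_label"]))
```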

Official website

Tutorial and documentation
