IBM AI Explainability 360 

The AI Explainability 360 toolkit, an LF AI Foundation incubation project, is an open-source library that supports the interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations, along with proxy explainability metrics. There is no single […]
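
As a taste of the package's API, here is a minimal sketch using AIX360's ProtodashExplainer to pick prototypical rows of a dataset. The explain() signature (source set X, target set Y, number of prototypes m) is recalled from the aix360 documentation, so treat it as an assumption and verify against your installed version.

```python
# Hedged sketch: prototype-based explanation with AIX360's Protodash.
# The explain(X, Y, m=...) signature is assumed from the aix360 docs.
import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer

rng = np.random.default_rng(0)
X = rng.random((200, 10))  # dataset to be summarized

explainer = ProtodashExplainer()
# Select m=5 prototypes of X, drawn from X itself, with importance weights.
weights, proto_idx, _ = explainer.explain(X, X, m=5)
print("Prototype row indices:", proto_idx)
print("Weights:", weights)
```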

GEBI

Global Explanations for Bias Identification. With our proposed method, we identified four distinct clusters. Each cluster reveals unique visual characteristics of the analyzed data set, related to skin tone and skin lesions, but also to the presence of unwanted artifacts. The first and the second cluster seem to group images based […]
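
GEBI's own pipeline is not reproduced here; purely as a generic illustration of the underlying idea (clustering per-image embedding or explanation vectors so that groups sharing visual traits or artifacts become visible), a minimal scikit-learn sketch with hypothetical embeddings:

```python
# Generic illustration (not GEBI's implementation): cluster per-image
# embedding vectors, then inspect each cluster for shared traits/artifacts.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 64))  # hypothetical per-image embeddings

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(embeddings)
for c in range(4):
    members = np.flatnonzero(kmeans.labels_ == c)
    print(f"Cluster {c}: {len(members)} images, e.g. indices {members[:5]}")
```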

FairML  

FairML is a Python toolbox for auditing machine learning models for bias.

Features

Predictive models are increasingly being deployed to determine access to services such as credit, insurance, and employment. Despite societal gains in efficiency and productivity through the deployment of these models, potential systemic flaws have not been fully addressed, particularly […]
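
A minimal sketch of the audit workflow, following the usage pattern shown in FairML's README (audit_model takes a predict callable and a DataFrame); the synthetic column names here are made up for illustration:

```python
# Hedged sketch of a FairML audit on synthetic tabular data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from fairml import audit_model

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(size=500),
    "debt":   rng.normal(size=500),
    "gender": rng.integers(0, 2, size=500),  # sensitive attribute
})
y = (X["income"] - X["debt"] + 0.5 * X["gender"] > 0).astype(int)

clf = LogisticRegression().fit(X, y)

# audit_model perturbs each column and measures the effect on predictions,
# surfacing dependence on sensitive attributes (direct or via proxies).
importances, _ = audit_model(clf.predict, X)
print(importances)
```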

Fairlearn

Fairlearn is a Python package that empowers developers of artificial intelligence (AI) systems to assess their system’s fairness and mitigate any observed unfairness issues. Fairlearn contains mitigation algorithms as well as metrics for model assessment. Besides the source code, this repository also contains Jupyter notebooks with examples of Fairlearn usage.

Features

Allocation harms. These […]
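
A minimal sketch of the assessment side: Fairlearn's MetricFrame disaggregates any metric by a sensitive feature, and demographic_parity_difference gives a single summary number. The data below is a toy example.

```python
# Fairlearn assessment sketch: per-group accuracy and a fairness metric.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
y_pred = np.array([0, 1, 0, 0, 1, 1, 1, 0])
sex    = np.array(["F", "F", "F", "M", "M", "M", "M", "F"])

# Disaggregated view: accuracy computed separately for each group.
mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                 sensitive_features=sex)
print(mf.by_group)

# Single number: difference in selection rates between groups.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=sex))
```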

FACETS 

The Facets project contains two visualizations for understanding and analyzing machine learning datasets: Facets Overview and Facets Dive. The visualizations are implemented as Polymer web components, backed by TypeScript code, and can be easily embedded into Jupyter notebooks or webpages. Live demos of the visualizations can be found on the Facets project description page. […]
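
A sketch of embedding Facets Overview in a Jupyter notebook, adapted from the pattern in the project README; the class and method names (GenericFeatureStatisticsGenerator, ProtoFromDataFrames) and the script/import URLs are recalled from memory and may need adjusting:

```python
# Hedged sketch: render Facets Overview for a DataFrame inside Jupyter.
import base64
import pandas as pd
from IPython.display import HTML, display
from facets_overview.generic_feature_statistics_generator import (
    GenericFeatureStatisticsGenerator,
)

df = pd.DataFrame({"age": [23, 45, 31], "income": [40_000, 88_000, 52_000]})

# Build the feature-statistics protobuf and base64-encode it for the widget.
proto = GenericFeatureStatisticsGenerator().ProtoFromDataFrames(
    [{"name": "train", "table": df}]
)
proto_str = base64.b64encode(proto.SerializeToString()).decode("utf-8")

HTML_TEMPLATE = """
<script src="https://cdnjs.cloudflare.com/ajax/libs/webcomponentsjs/1.3.3/webcomponents-lite.js"></script>
<link rel="import" href="https://raw.githubusercontent.com/PAIR-code/facets/1.0.0/facets-dist/facets-jupyter.html">
<facets-overview proto-input="{proto}"></facets-overview>
"""
display(HTML(HTML_TEMPLATE.format(proto=proto_str)))
```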

ELI5 

ELI5 is a Python library that lets you visualize and debug various machine learning models through a unified API. It has built-in support for several ML frameworks and provides a way to explain black-box models.

Features

ELI5 is a Python package which helps to debug machine learning classifiers and explain their predictions. It provides support […]
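
A minimal sketch of the debugging flow on a scikit-learn text classifier: explain_weights for a global view of the model and explain_prediction for a single document, rendered as text here (in a notebook, eli5.show_weights displays HTML instead).

```python
# ELI5 sketch: global weights and a single-prediction explanation
# for a scikit-learn text classifier.
import eli5
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["good movie", "bad movie", "great film", "terrible film"]
labels = [1, 0, 1, 0]

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(docs), labels)

# Global view: per-token weights of the linear model.
print(eli5.format_as_text(eli5.explain_weights(clf, vec=vec)))
# Local view: how each token contributes to one prediction.
print(eli5.format_as_text(eli5.explain_prediction(clf, "good film", vec=vec)))
```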

DeepVis Toolbox 

This is the code required to run the Deep Visualization Toolbox, as well as to generate the neuron-by-neuron visualizations using regularized optimization. The toolbox and methods are described casually here and more formally in this paper: Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding neural networks through deep visualization. Presented […]

DeepLIFT

This version of DeepLIFT has been tested with Keras 2.2.4 and TensorFlow 1.14.0. See this FAQ question for information on other implementations of DeepLIFT that may work with different versions of TensorFlow/PyTorch, as well as a wider range of architectures. See the tags for older versions. This repository implements the methods in “Learning Important […]
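
A sketch of the scoring workflow under the Keras 2.2.4 / TensorFlow 1.14 constraint stated above; the function names (convert_model_from_saved_files, get_target_contribs_func) are recalled from the repository README and should be verified, and "model.h5" is a placeholder path:

```python
# Hedged sketch of DeepLIFT contribution scoring, following the README
# of kundajelab/deeplift (assumes Keras 2.2.4 / TensorFlow 1.14).
import numpy as np
import deeplift
from deeplift.conversion import kerasapi_conversion as kc

# Convert a saved Keras model (placeholder path) into a DeepLIFT model.
deeplift_model = kc.convert_model_from_saved_files(
    "model.h5",
    nonlinear_mxts_mode=deeplift.layers.NonlinearMxtsMode.DeepLIFT_GenomicsDefault)

# Compile a function scoring contributions of layer 0's inputs
# to the pre-activation output of the final layer.
contribs_func = deeplift_model.get_target_contribs_func(
    find_scores_layer_idx=0, target_layer_idx=-2)

X = np.random.rand(16, 100, 4)  # hypothetical input batch
scores = np.array(contribs_func(task_idx=0, input_data_list=[X],
                                batch_size=8, progress_update=1000))
print(scores.shape)
```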

ContrastiveExplanation (Foil Trees) 

Contrastive Explanation provides an explanation for why an instance had the current outcome (fact) rather than a targeted outcome of interest (foil). These counterfactual explanations limit the explanation to the features relevant to distinguishing fact from foil, thereby disregarding irrelevant features. The idea of contrastive explanations is captured in the Python package ContrastiveExplanation.

Features […]
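
A sketch following the iris example in the ContrastiveExplanation README; the API names (DomainMapperTabular, ContrastiveExplanation, explain_instance_domain) are recalled from that README, so verify against the repository:

```python
# Hedged sketch of a foil-tree explanation on iris, adapted from the
# ContrastiveExplanation README (API names recalled, verify locally).
from sklearn import datasets, ensemble, model_selection
import contrastive_explanation as ce

data = datasets.load_iris()
x_train, x_test, y_train, y_test = model_selection.train_test_split(
    data.data, data.target, random_state=0)

model = ensemble.RandomForestClassifier(random_state=0).fit(x_train, y_train)

dm = ce.domain_mappers.DomainMapperTabular(
    x_train, feature_names=data.feature_names,
    contrast_names=data.target_names)
exp = ce.ContrastiveExplanation(dm)

# "Why this class (fact) and not another (foil)?" for one test instance.
print(exp.explain_instance_domain(model.predict_proba, x_test[0]))
```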

Captum 

Captum (“comprehension” in Latin) is an open-source, extensible library for model interpretability built on PyTorch. With the increase in model complexity and the resulting lack of transparency, model interpretability methods have become increasingly important. Model understanding is both an active area of research and an area of focus for practical applications […]
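
A minimal sketch with Integrated Gradients, one of Captum's attribution methods; the toy model is arbitrary, the point is the attribute() call:

```python
# Captum sketch: attribute a model's output back to its input features.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Toy model: the architecture is arbitrary for illustration.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

inputs = torch.randn(3, 4, requires_grad=True)
ig = IntegratedGradients(model)

# Attribute the class-1 output to the input features; delta estimates
# how closely the attributions satisfy the completeness axiom.
attributions, delta = ig.attribute(inputs, target=1,
                                   return_convergence_delta=True)
print(attributions)
print("Convergence delta:", delta)
```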
