Lucid is a collection of infrastructure and tools for research in neural network interpretability. Lucid is research code, not production code. We provide no guarantee it will work for your use case. Lucid is maintained by volunteers who are unable to provide significant technical support. Features Lucid provides dozens of models …
LOFO Importance
LOFO (Leave One Feature Out) Importance calculates the importance of a set of features, for a chosen model and metric, by iteratively removing each feature from the set and evaluating the model's performance under a validation scheme of choice. …
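The procedure described above reduces to: score the model with all features, re-score it with each feature left out, and report the performance drop as that feature's importance. A minimal numpy-only sketch of the idea (this illustrates the technique, not the lofo package's actual API; the linear model, data, and fold count are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# y depends strongly on feature 0, weakly on feature 1, not at all on feature 2
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

def cv_mse(X, y, k=5):
    """Mean squared error of a least-squares fit under k-fold cross-validation."""
    idx = np.arange(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        errs.append(np.mean((X[fold] @ w - y[fold]) ** 2))
    return float(np.mean(errs))

baseline = cv_mse(X, y)
importance = {}
for j in range(X.shape[1]):
    X_drop = np.delete(X, j, axis=1)           # leave feature j out
    importance[j] = cv_mse(X_drop, y) - baseline  # error increase without it
```

Unlike single-pass permutation importance, each candidate feature set is re-fitted, so correlated features are re-weighted rather than double-counted.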
LIME
This project is about explaining what machine learning classifiers (or models) are doing. At the moment, we support explaining individual predictions for text classifiers or classifiers that act on tables (numpy arrays of numerical or categorical data) or images, with a package called lime (short for local interpretable model-agnostic explanations). Features …
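The core idea behind lime — fit a simple, interpretable surrogate to the black-box model in the neighbourhood of one prediction — can be sketched in plain numpy (this illustrates the technique, not the lime package's API; the black-box function, sample count, and kernel width are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# black-box model to explain: a nonlinear function of two features
def black_box(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2

x0 = np.array([0.3, 1.0])  # the instance whose prediction we explain

# 1. sample perturbations around the instance and query the black box
Z = x0 + rng.normal(scale=0.3, size=(500, 2))
y = black_box(Z)

# 2. weight samples by proximity to x0 (exponential kernel)
d2 = ((Z - x0) ** 2).sum(axis=1)
wts = np.exp(-d2 / (2 * 0.3 ** 2))

# 3. fit a weighted linear surrogate; its coefficients are the explanation
A = np.hstack([Z - x0, np.ones((len(Z), 1))])  # centred features + intercept
W = np.sqrt(wts)[:, None]
coef, *_ = np.linalg.lstsq(A * W, y * W[:, 0], rcond=None)
# coef[0], coef[1] approximate the black box's local sensitivities at x0
```

The surrogate's coefficients recover the local slopes of the black box (here roughly cos(0.3) and 2·x0[1]), which is what lime reports as per-feature contributions.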
Lightly
Lightly is a computer vision framework for self-supervised learning. Features modular framework support for multi-gpu training using PyTorch Lightning easy …
L2X
Code for replicating the experiments in the paper Learning to Explain: An Information-Theoretic Perspective on Model Interpretation at ICML 2018, by Jianbo Chen, Mitchell Stern, Martin J. Wainwright, Michael I. Jordan. Features The code for L2X runs with Python and requires TensorFlow version 1.2.1 or higher and Keras version …
keras-vis
keras-vis is a high-level toolkit for visualizing and debugging your trained keras neural net models. Currently supported visualizations include: Activation maximization Saliency maps Class activation maps All visualizations support N-dimensional image inputs by default, i.e., they generalize to N-dim image inputs to your model. Features Currently supported visualizations include: Activation maximization …
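Activation maximization, the first visualization listed, is just gradient ascent on the input: start from a blank input and repeatedly nudge it in the direction that most increases a chosen unit's activation. A toy numpy sketch with an analytic score function (this is not keras-vis's API; a real model would supply the gradient via backpropagation, and the template, step size, and iteration count here are made up):

```python
import numpy as np

# toy "class score": s(x) = -||x - t||^2, maximal when the input matches template t
t = np.array([1.0, -2.0, 0.5])

def score(x):
    return -np.sum((x - t) ** 2)

def grad(x):
    return -2.0 * (x - t)  # analytic gradient; a real net would use backprop

x = np.zeros_like(t)       # start from a blank input
for _ in range(200):
    x = x + 0.1 * grad(x)  # gradient ascent on the INPUT, weights stay fixed
# x converges toward t, the input that maximizes the score
```

With an image classifier the same loop, run on pixels, produces the "ideal" input for a class; keras-vis adds regularizers so the result looks like a natural image rather than noise.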
InterpretML
InterpretML is an open-source package that incorporates state-of-the-art machine learning interpretability techniques under one roof. With this package, you can train interpretable glassbox models and explain blackbox systems. InterpretML helps you understand your model’s global behavior, or understand the reasons behind individual predictions. Features Interpretability is essential for: Model debugging – …
Integrated-Gradients
Integrated Gradients (IG) computes the gradient of the model's prediction output with respect to its input features and requires no modification to the original deep neural network. IG can be applied to any differentiable model, whether over image, text, or structured data. Features Sensitivity: To calculate the Sensitivity, we establish a Baseline image as a …
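Concretely, IG attributes the prediction to each input dimension as (x − x′) times the average gradient along the straight-line path from a baseline x′ to the input x. A numpy sketch with a toy sigmoid model and its analytic gradient (the model, baseline, and step count are illustrative; in practice the gradients come from backprop through the network):

```python
import numpy as np

# toy differentiable model: f(x) = sigmoid(w · x)
w = np.array([1.5, -2.0, 0.5])

def f(x):
    return 1.0 / (1.0 + np.exp(-x @ w))

def grad_f(x):
    s = f(x)
    return s * (1 - s) * w  # analytic gradient of the sigmoid output

def integrated_gradients(x, baseline, grad, steps=200):
    """Midpoint-rule approximation of (x - x') * integral_0^1 grad f(x' + a(x - x')) da."""
    alphas = (np.arange(steps) + 0.5) / steps
    diff = x - baseline
    grads = np.array([grad(baseline + a * diff) for a in alphas])
    return diff * grads.mean(axis=0)

x = np.array([1.0, 0.5, 2.0])
baseline = np.zeros(3)  # the "neutral" reference input, e.g. an all-black image
ig = integrated_gradients(x, baseline, grad_f)
```

A useful sanity check is the completeness axiom: the attributions sum to f(x) − f(baseline), which also explains why the choice of baseline matters.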
iNNvestigate
In recent years, neural networks have furthered the state of the art in many domains, e.g., object detection and speech recognition. Despite this success, neural networks are typically still treated as black boxes: their internal workings are not fully understood and the basis for their predictions is unclear. In the …
IBM AI Fairness 360
AI Fairness 360, an LF AI incubation project, is an extensible open source toolkit that can help users examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle. Features 10 state-of-the-art bias mitigation algorithms 70 Fairness Metrics Industrial Applications Official website Link Tutorial and documentation …