Lucid

Lucid is a collection of infrastructure and tools for research in neural network interpretability. Lucid is research code, not production code. We provide no guarantee it will work for your use case. Lucid is maintained by volunteers who are unable to provide significant technical support.

Features: Lucid provides dozens of models for visualization without […]
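
Below is a minimal sketch of Lucid's feature-visualization quickstart. Lucid targets TensorFlow 1.x, and the objective string ("mixed4a_pre_relu:476", a channel of InceptionV1) is just an illustrative choice:

```python
# A minimal sketch of feature visualization with Lucid (TensorFlow 1.x era).
import lucid.modelzoo.vision_models as models
from lucid.optvis import render

# Load a pretrained model from Lucid's model zoo
model = models.InceptionV1()
model.load_graphdef()

# Optimize an input image to maximally activate one channel;
# the layer/channel objective here is just an example.
_ = render.render_vis(model, "mixed4a_pre_relu:476")
```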

LOFO Importance  

LOFO (Leave One Feature Out) Importance calculates the importance of a set of features, for a model and evaluation metric of choice, by iteratively removing each feature from the set and re-evaluating the model's performance under a validation scheme of choice.

Features: LOFO has […]
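
Below is a minimal sketch of computing LOFO importances, assuming the `lofo` package; the data file, target column, and scoring metric are illustrative:

```python
# A minimal LOFO importance sketch; file and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import KFold
from lofo import LOFOImportance, Dataset

df = pd.read_csv("train.csv")                       # hypothetical dataset
features = [c for c in df.columns if c != "target"]

dataset = Dataset(df=df, target="target", features=features)
cv = KFold(n_splits=4, shuffle=True, random_state=0)

# Score the full feature set, then re-score with each feature left out;
# a feature's importance is the resulting drop in the metric.
lofo_imp = LOFOImportance(dataset, cv=cv, scoring="roc_auc")
importance_df = lofo_imp.get_importance()
print(importance_df.head())
```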

LIME

This project is about explaining what machine learning classifiers (or models) are doing. At the moment, we support explaining individual predictions for text classifiers or classifiers that act on tables (numpy arrays of numerical or categorical data) or images, with a package called lime (short for local interpretable model-agnostic explanations).

Features: Intuitively, an explanation […]
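
A minimal sketch of explaining a single tabular prediction with `lime`; the dataset and classifier are illustrative:

```python
# Explain one prediction of a tabular classifier with LIME.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Fit a local, interpretable linear model around one instance
exp = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=4)
print(exp.as_list())  # (feature condition, weight) pairs
```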

Lightly  

Lightly is a computer vision framework for self-supervised learning.

Features:
- modular framework
- support for multi-GPU training using PyTorch Lightning
- easy to use and written in […]
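
A minimal SimCLR-style sketch using Lightly's building blocks; module paths vary across Lightly releases, so treat this as an assumption about a recent version:

```python
# Contrastive pretraining sketch with Lightly's loss and projection head.
import torch
import torchvision
from lightly.loss import NTXentLoss
from lightly.models.modules import SimCLRProjectionHead

backbone = torchvision.models.resnet18()
backbone.fc = torch.nn.Identity()            # keep the 512-d features

projection_head = SimCLRProjectionHead(512, 512, 128)
criterion = NTXentLoss()

# Two augmented views of the same batch (random placeholders here)
x0 = torch.randn(8, 3, 224, 224)
x1 = torch.randn(8, 3, 224, 224)
z0 = projection_head(backbone(x0))
z1 = projection_head(backbone(x1))
loss = criterion(z0, z1)                     # contrastive loss between views
loss.backward()
```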

L2X  

Code for replicating the experiments in the paper Learning to Explain: An Information-Theoretic Perspective on Model Interpretation (ICML 2018), by Jianbo Chen, Mitchell Stern, Martin J. Wainwright, and Michael I. Jordan.

Features: The code for L2X runs with Python and requires TensorFlow 1.2.1 or higher and Keras 2.0 or higher. […]

keras-vis

keras-vis is a high-level toolkit for visualizing and debugging your trained Keras neural net models. Currently supported visualizations include:

- Activation maximization
- Saliency maps
- Class activation maps

All visualizations by default support N-dimensional image inputs, i.e., they generalize to N-dim image inputs to your model.
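
Since keras-vis targets older Keras releases, here is a from-scratch sketch of the gradient saliency-map idea it implements (not the keras-vis API itself):

```python
# Gradient saliency: how strongly each input pixel affects a class score.
import tensorflow as tf

def saliency_map(model, image, class_idx):
    """Return |d score / d pixel|, reduced over color channels."""
    image = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(image)
        score = model(image)[0, class_idx]
    grads = tape.gradient(score, image)
    return tf.reduce_max(tf.abs(grads), axis=-1)[0]
```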

InterpretML

InterpretML is an open-source package that incorporates state-of-the-art machine learning interpretability techniques under one roof. With this package, you can train interpretable glassbox models and explain blackbox systems. InterpretML helps you understand your model's global behavior, or understand the reasons behind individual predictions.

Features: Interpretability is essential for:
- Model debugging – Why did my […]
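
A minimal sketch of the glassbox workflow with InterpretML's Explainable Boosting Machine; the dataset is illustrative, and `show()` renders its dashboard in a notebook environment:

```python
# Train a glassbox model and inspect global and local explanations.
from sklearn.datasets import load_breast_cancer
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

data = load_breast_cancer()
ebm = ExplainableBoostingClassifier()
ebm.fit(data.data, data.target)

show(ebm.explain_global())                               # per-feature behavior
show(ebm.explain_local(data.data[:5], data.target[:5]))  # individual predictions
```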

Integrated-Gradients 

Integrated Gradients (IG) computes the gradients of the model's prediction output with respect to its input features and requires no modification to the original deep neural network. IG can be applied to any differentiable model, whether it operates on images, text, or structured data.

Features: Sensitivity: To calculate the sensitivity, we establish a baseline image as a starting point. We then […]
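
A minimal sketch of the Integrated Gradients formula itself, written with TensorFlow rather than this repository's code: attributions are (input - baseline) times the average gradient along the straight-line path from the baseline to the input:

```python
# Integrated Gradients: average gradients along the baseline-to-input path.
import tensorflow as tf

def integrated_gradients(model, x, baseline, class_idx, steps=50):
    alphas = tf.linspace(0.0, 1.0, steps + 1)[:, None, None, None]
    # Interpolated images between the baseline and the input
    path = baseline[None] + alphas * (x - baseline)[None]
    with tf.GradientTape() as tape:
        tape.watch(path)
        scores = model(path)[:, class_idx]
    grads = tape.gradient(scores, path)
    avg_grads = tf.reduce_mean(grads, axis=0)   # Riemann approximation
    return (x - baseline) * avg_grads           # one attribution per pixel
```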

iNNvestigate 

In recent years, neural networks have furthered the state of the art in many domains, e.g., object detection and speech recognition. Despite this success, neural networks are typically still treated as black boxes: their internal workings are not fully understood, and the basis for their predictions is unclear. In an attempt to understand […]
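
A minimal sketch of iNNvestigate's analyzer API; the model is illustrative, and analyzer availability depends on the installed iNNvestigate/Keras versions:

```python
# Create an analyzer for a Keras model and attribute one input.
import numpy as np
import tensorflow as tf
import innvestigate

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),   # no softmax: analyzers expect raw scores
])

analyzer = innvestigate.create_analyzer("lrp.z", model)  # LRP-Z, one of many
relevance = analyzer.analyze(np.random.rand(1, 28, 28).astype("float32"))
```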

IBM AI Fairness 360  

AI Fairness 360, an LF AI incubation project, is an extensible open source toolkit that can help users examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle.

Features:
- 10 state-of-the-art bias mitigation algorithms
- 70 fairness metrics
- industrial applications
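
A minimal sketch of computing dataset-level fairness metrics with AIF360; the toy dataframe, column names, and group definitions are illustrative:

```python
# Measure group fairness of a labeled dataset with AIF360.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "sex":   [0, 0, 1, 1, 1, 0],
    "label": [0, 1, 1, 1, 0, 0],
})
dataset = BinaryLabelDataset(df=df, label_names=["label"],
                             protected_attribute_names=["sex"])

metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=[{"sex": 0}],
                                  privileged_groups=[{"sex": 1}])
print(metric.disparate_impact())               # ratio of favorable-outcome rates
print(metric.statistical_parity_difference())  # difference of those rates
```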
