XAI – eXplainable AI

XAI is a Machine Learning library designed with AI explainability at its core. XAI contains various tools that enable analysis and evaluation of data and models. The XAI library is maintained by The Institute for Ethical AI & ML, and it was developed based on the 8 principles for Responsible Machine […]

woe 

Tools for WoE transformation, mostly used in scorecard models for credit rating. Features: split tree with IV criterion; rich and plentiful model evaluation methods; unified format, easy to output; storage of the IV tree for follow-up use. Official website: Link. Tutorial and documentation: Click here to view.
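The WoE and IV computations that such scorecard tools perform can be sketched in a few lines of pandas. This is a minimal illustration, not the woe package's API; the data and function name are made up:

```python
import numpy as np
import pandas as pd

# Toy credit data: one binned feature and a binary label (1 = bad/default).
df = pd.DataFrame({
    "income_bin": ["low", "low", "low", "mid", "mid", "mid", "high", "high", "high"],
    "bad":        [1,     1,     0,     1,     0,     0,     0,      0,      1],
})

def woe_iv(df, feature, target):
    """Weight of Evidence per bin and total Information Value.

    WoE_i = ln( (good_i / good_total) / (bad_i / bad_total) )
    IV    = sum_i (good%_i - bad%_i) * WoE_i
    """
    stats = df.groupby(feature)[target].agg(bad="sum", total="count")
    stats["good"] = stats["total"] - stats["bad"]
    good_pct = stats["good"] / stats["good"].sum()
    bad_pct = stats["bad"] / stats["bad"].sum()
    woe = np.log(good_pct / bad_pct)
    iv = ((good_pct - bad_pct) * woe).sum()
    return woe, iv

woe, iv = woe_iv(df, "income_bin", "bad")
```

A negative WoE for a bin means it concentrates more bads than goods; IV summarizes the whole feature's predictive power (rules of thumb treat IV above roughly 0.3 as strong).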

TreeInterpreter 

Package for interpreting scikit-learn’s decision tree and random forest predictions. Allows decomposing each prediction into bias and feature contribution components as described in http://blog.datadive.net/interpreting-random-forests/. For a dataset with n features, each prediction on the dataset is decomposed as prediction = bias + feature_1_contribution + … + feature_n_contribution. Features: DecisionTreeRegressor, DecisionTreeClassifier, ExtraTreeRegressor, ExtraTreeClassifier, RandomForestRegressor, RandomForestClassifier, ExtraTreesRegressor, ExtraTreesClassifier. Official website: Link. Tutorial and […]
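The additive decomposition can be reproduced by walking a fitted tree's decision path: the bias is the root node's mean value, and each split attributes the change in node value to the feature it splits on. The sketch below does this for a single sklearn DecisionTreeRegressor; it is an illustration of the idea, not TreeInterpreter's own code (which also handles forests and classifiers):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

def decompose(model, X):
    """Return (bias, contributions) so that for every row:
    prediction = bias + contributions.sum()."""
    t = model.tree_
    bias = t.value[0].squeeze()          # mean target at the root
    contribs = np.zeros_like(X, dtype=float)
    for i, x in enumerate(X):
        node = 0
        while t.children_left[node] != -1:   # -1 marks a leaf
            feat = t.feature[node]
            child = (t.children_left[node] if x[feat] <= t.threshold[node]
                     else t.children_right[node])
            # Credit the value change at this split to the split feature.
            contribs[i, feat] += t.value[child].squeeze() - t.value[node].squeeze()
            node = child
    return bias, contribs

X, y = make_regression(n_samples=200, n_features=4, random_state=0)
model = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
bias, contribs = decompose(model, X[:5])
pred = model.predict(X[:5])
```

The decomposition is exact: bias plus the per-feature contributions reconstructs each prediction.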

themis-ml  

themis-ml defines discrimination as the preference (bias) for or against a set of social groups that results in the unfair treatment of its members with respect to some outcome. It defines fairness as the inverse of discrimination, and in the context of a machine learning algorithm, this is measured by the degree to which […]
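One common way to quantify such group discrimination is the mean difference between favourable-outcome rates across groups. The snippet below is a minimal numpy illustration of that metric, not themis-ml's API; the data is made up:

```python
import numpy as np

# Binary outcomes y (1 = favourable outcome) and protected-group
# membership s (1 = member of the protected group). Illustrative data.
y = np.array([1, 1, 0, 1, 0, 0, 1, 0])
s = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Mean difference: P(y=1 | s=0) - P(y=1 | s=1).
# 0 indicates parity; positive values favour the s=0 group.
mean_diff = y[s == 0].mean() - y[s == 1].mean()
```

A model (or dataset) whose mean difference is far from zero treats the two groups unequally with respect to the outcome, which is the kind of measurement the library's fairness metrics formalize.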

Themis 

Themis is an open-source high-level cryptographic services library for securing data during authentication, storage, messaging, network exchange, etc. Themis solves 90% of typical data protection use cases that are common for most apps. Themis helps to build both simple and complex cryptographic features easily, quickly, and securely. Themis allows developers to focus on the […]

TensorFlow’s Model Analysis

TensorFlow Model Analysis (TFMA) is a library for evaluating TensorFlow models. It allows users to evaluate their models on large amounts of data in a distributed manner, using the same metrics defined in their trainer. These metrics can be computed over different slices of data and visualized in Jupyter notebooks. Features: TensorFlow Model Analysis […]
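The core idea of slice-based evaluation is to compute the same metric overall and per subgroup so that underperforming slices stand out. The pandas sketch below mimics that idea with accuracy on a toy table; it illustrates the concept only and is not TFMA's API (which works on TFRecords with Apache Beam):

```python
import pandas as pd

# Toy evaluation table: a categorical slice column, true labels,
# and model predictions. All values are illustrative.
df = pd.DataFrame({
    "country": ["US", "US", "UK", "UK", "UK"],
    "label":   [1, 0, 1, 1, 0],
    "pred":    [1, 0, 0, 1, 0],
})

# Accuracy computed overall and per slice.
overall = (df["label"] == df["pred"]).mean()
per_slice = (df.assign(correct=df["label"] == df["pred"])
               .groupby("country")["correct"].mean())
```

Here the overall accuracy hides that one slice is served noticeably worse than the other, which is exactly what slicing is meant to surface.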

TensorFlow’s Lucid

Lucid is a collection of infrastructure and tools for research in neural network interpretability. We’re not currently supporting TensorFlow 2! If you’d like to use Lucid in Colab, which defaults to TensorFlow 2, add this magic to a cell before you import TensorFlow: %tensorflow_version 1.x. Lucid is research code, not production code. We provide […]

TensorFlow’s CleverHans

This repository contains the source code for CleverHans, a Python library to benchmark machine learning systems’ vulnerability to adversarial examples. You can learn more about such vulnerabilities on the accompanying blog. The CleverHans library is under continual development, always welcoming contributions of the latest attacks and defenses. In particular, we always welcome help towards […]
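To make "adversarial examples" concrete: one of the classic attacks CleverHans benchmarks is the Fast Gradient Sign Method (FGSM), which nudges the input in the direction that most increases the loss. The numpy sketch below applies FGSM to a fixed logistic-regression model; the weights and input are illustrative and this is not CleverHans' own implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A fixed logistic-regression "model" (illustrative weights).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

x = np.array([0.2, -0.4, 1.0])  # clean input
y = 1                            # its true label

# For logistic regression with cross-entropy loss, the gradient of the
# loss w.r.t. the input x is (sigmoid(w.x + b) - y) * w.
grad_x = (sigmoid(w @ x + b) - y) * w

# FGSM: take a step of size eps in the sign of that gradient.
eps = 0.3
x_adv = x + eps * np.sign(grad_x)

p_clean = sigmoid(w @ x + b)      # confidence on the clean input
p_adv = sigmoid(w @ x_adv + b)    # confidence after the perturbation
```

Even this bounded, per-feature perturbation measurably drops the model's confidence in the true class, which is the vulnerability such benchmarks quantify.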

TensorBoard’s What-If Tool

The What-If Tool (WIT) provides an easy-to-use interface for expanding understanding of black-box classification and regression ML models. With the plugin, you can perform inference on a large set of examples and immediately visualize the results in a variety of ways. Additionally, examples can be edited manually or programmatically and re-run through the model […]

Snitch AI

Automated scientific validation for your ML models in a few clicks. Expert validation of your models so you can deploy with confidence. Maximize the ROI of your AI investments. Snitch AI makes ML model validation simple. Features: Data Drift, Feature Bias, Sensitivity to extreme noise, Sensitivity to random noise, Over-fitting*, Labeling Errors*, Data Leakage*, Under-fitting*, Model Simplification*, Feature Discrimination/Pruning*. Official website: Link. Tutorial […]
