Model interpretability

Skater is a unified framework for model interpretation across all forms of models, helping one build the interpretable machine learning systems often needed for real-world use cases (we are actively working toward enabling faithful interpretability for all forms of models). It is an open-source Python library designed to demystify the learned structures of a black-box model both globally (inference on the basis of a complete data set) and locally (inference about an individual prediction).

The project was started as a research idea to find ways to enable better interpretability (preferably human interpretability) for predictive “black boxes”, both for researchers and practitioners. The project is still in its beta phase.


1. Post hoc interpretation: given a black-box model trained to solve a supervised learning problem (X → Y, where X is the input and Y is the output), post hoc interpretation can be thought of as a function g that takes input data D and a predictive model f, and returns a visual or textual representation that helps in understanding the inner workings of the model, or why a certain outcome is more favorable than another. This could also be called inspecting the black box, or reverse engineering.
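As a minimal sketch of the idea (not Skater's own API): the explanation function g below computes a hand-rolled permutation importance for an arbitrary black-box predictor f over data D, returning a per-feature summary of the model's behavior. The model, data, and function names are all hypothetical illustrations.

```python
import random

def accuracy(model, X, y):
    """Fraction of examples the black-box model predicts correctly."""
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def g(model, X, y, seed=0):
    """Post hoc explanation: drop in accuracy when each feature is shuffled.

    A larger drop means the model relies more heavily on that feature.
    """
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    importances = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        rng.shuffle(col)  # break the feature's relationship with the target
        X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
        importances.append(base - accuracy(model, X_perm, y))
    return importances

# Hypothetical black box: predicts 1 when feature 0 exceeds 0.5; feature 1 is noise.
black_box = lambda x: int(x[0] > 0.5)
X = [[i / 10, (7 * i) % 10 / 10] for i in range(10)]
y = [int(x[0] > 0.5) for x in X]

scores = g(black_box, X, y)  # feature 1 gets importance 0.0: the model ignores it
```

The same scheme works for any predictor, which is the point of post hoc interpretation: g never inspects the model's internals, only its input-output behavior.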

2. Natively interpretable models: given a supervised learning problem, the predictive model (the explanatory function) has a transparent design and is interpretable both globally and locally without any further explanation.
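A linear model is the classic example of this second category: its learned coefficients are the global explanation, and each prediction decomposes into per-feature contributions that serve as the local explanation. The sketch below uses made-up weights and feature names purely for illustration.

```python
# Hypothetical linear model: price = bias + sum(weight_f * x_f).
weights = {"sqft": 150.0, "bedrooms": 10000.0}
bias = 50000.0

def predict(x):
    return bias + sum(weights[f] * x[f] for f in weights)

def local_explanation(x):
    """Per-feature contribution to a single prediction; no extra
    explanation machinery is needed, the model explains itself."""
    return {f: weights[f] * x[f] for f in weights}

house = {"sqft": 1000, "bedrooms": 3}
price = predict(house)                 # 230000.0
contrib = local_explanation(house)     # {'sqft': 150000.0, 'bedrooms': 30000.0}
```

Here the global view (the weights themselves) and the local view (one prediction's additive breakdown) come for free from the model's transparent structure.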

Official website

Tutorial and documentation
