Model interpretability

An anchor explanation is a rule that sufficiently “anchors” the prediction locally – such that changes to the rest of the feature values of the instance do not matter. In other words, for instances on which the anchor holds, the prediction is (almost) always the same.
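The idea can be illustrated with a toy sketch (the classifier and the rule below are hypothetical, chosen only for demonstration): if a rule fixes the features that drive the prediction, perturbing the remaining features should almost never change the output.

```python
import numpy as np

# Hypothetical toy classifier: predicts 1 whenever feature 0 > 0.5,
# regardless of the other features. The rule "feature 0 > 0.5" is
# therefore a perfect anchor for this model.
def predict(X):
    return (X[:, 0] > 0.5).astype(int)

rng = np.random.default_rng(0)
instance = np.array([0.9, 0.2, 0.7])
anchor_features = [0]  # feature indices held fixed by the anchor rule

# Estimate the anchor's precision: perturb the non-anchored features and
# check how often the prediction stays the same as on the original instance.
samples = rng.random((1000, 3))
samples[:, anchor_features] = instance[anchor_features]
precision = np.mean(predict(samples) == predict(instance[None, :])[0])
print(precision)  # 1.0 here, since the anchor fully determines the output
```

In practice the anchor algorithm searches for the shortest such rule whose estimated precision exceeds a chosen threshold.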

At the moment, we support explaining individual predictions for text classifiers and for classifiers that act on tabular data (numpy arrays of numerical or categorical features). If there is enough interest, I can include code and examples for images.

The anchor method can explain any black-box classifier with two or more classes. All we require is that the classifier implements a function that takes in raw text or a numpy array and outputs an integer prediction.
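A minimal sketch of such a prediction function, assuming a scikit-learn-style model with `predict_proba` (the `DummyModel` class and its names are illustrative, not from any specific library):

```python
import numpy as np

# Hypothetical model exposing a scikit-learn-style predict_proba.
class DummyModel:
    def predict_proba(self, X):
        p = 1 / (1 + np.exp(-X.sum(axis=1)))  # toy logistic score
        return np.column_stack([1 - p, p])

model = DummyModel()

# The explainer only needs a function mapping a numpy array to integer labels:
def predict_fn(X):
    return np.argmax(model.predict_proba(X), axis=1)

print(predict_fn(np.array([[2.0, 1.0], [-3.0, 0.5]])))  # [1 0]
```

Any model, regardless of framework, can be explained this way as long as it is wrapped in a function with this shape.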


* Compared to SHAP, the computation time is lower.

* In a previous article, I used both SHAP and Anchors to explain predictions. You may also consider using multiple model interpreters together.

* Labels can only be integers, meaning you cannot pass the exact class name, only its encoded category.

Official website

Tutorial and documentation
