Model interpretability

SHAP (SHapley Additive exPlanations) is a game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions (see papers for details and citations).
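The classic Shapley value mentioned above can be computed exactly for small games: each player's value is the weighted average of their marginal contribution over all subsets of the other players. The sketch below is a minimal, self-contained illustration of that formula, not the SHAP library itself; the toy value function `v` (an additive two-feature "model") is a hypothetical example.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values for a cooperative game.

    players: list of player identifiers
    v: value function mapping a frozenset of players to a number
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                S = frozenset(subset)
                # Weight |S|! (n - |S| - 1)! / n! from the Shapley formula
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                # Marginal contribution of player i to coalition S
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

# Hypothetical toy "model": prediction = 2*x1 + 3*x2, with absent
# features contributing 0. v(S) is the output using only features in S.
weights = {"x1": 2.0, "x2": 3.0}
v = lambda S: sum(weights[p] for p in S)
print(shapley_values(["x1", "x2"], v))  # each feature is credited its own additive contribution
```

Because the game here is purely additive, each feature's Shapley value equals its standalone contribution, and the values sum to the full model output (the efficiency property that SHAP relies on).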

SHAP is useful whenever individual predictions need to be explained. For example:

- A model says a bank shouldn't loan someone money, and the bank is legally required to explain the basis for each loan rejection.
- A healthcare provider wants to identify what factors are driving each patient's risk of some disease so they can directly address those risk factors with targeted health interventions.
