InterpretML: Model interpretability

InterpretML is an open-source package that incorporates state-of-the-art machine learning interpretability techniques under one roof. With this package, you can train interpretable glassbox models and explain blackbox systems. InterpretML helps you understand your model's global behavior as well as the reasons behind individual predictions.
Interpretability is essential for:

Model debugging – Why did my model make this mistake?
Feature engineering – How can I improve my model?
Detecting fairness issues – Does my model discriminate?
Human-AI cooperation – How can I understand and trust the model’s decisions?
Regulatory compliance – Does my model satisfy legal requirements?
High-risk applications – Healthcare, finance, judicial, …

