
themis-ml defines discrimination as a preference (bias) for or against a set of social groups that results in the unfair treatment of their members with respect to some outcome.

It defines fairness as the inverse of discrimination. In the context of a machine learning algorithm, fairness is measured by the degree to which the algorithm’s predictions favor one social group over another with respect to an outcome of socioeconomic, political, or legal importance, e.g. the denial or approval of a loan application.

Whether an algorithm is “fair” depends on how we define fairness. For example, if we define fairness as statistical parity, a fair algorithm is one in which the proportion of approved loans among minority applicants equals the proportion of approved loans among white applicants.
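As a rough illustration of that definition, statistical parity can be checked by comparing positive-decision rates across groups. This is a plain-Python sketch on made-up data, not themis-ml’s API; the function name and group labels are mine.

```python
# Toy illustration of statistical parity (not themis-ml's API).
# `approved` holds binary loan decisions; `group` marks a protected attribute.
approved = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
group = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def approval_rate(decisions, groups, g):
    """Proportion of positive decisions within group g."""
    members = [d for d, grp in zip(decisions, groups) if grp == g]
    return sum(members) / len(members)

# Statistical parity difference: the gap in approval rates between groups.
# A value of 0.0 means both groups are approved at equal rates.
spd = approval_rate(approved, group, "a") - approval_rate(approved, group, "b")
print(spd)
```

The mean difference metric listed below is this same quantity computed on group membership directly; the normalized variant rescales it by the maximum gap the base rates allow.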


Here are a few of the discrimination discovery and fairness-aware techniques that this library implements.

Measuring Discrimination
- Mean difference
- Normalized mean difference
- Situation Test Score

Mitigating Discrimination
- Relabelling (Massaging)
- Model Estimation
  - Additive Counterfactually Fair Estimator
  - Prejudice Remover Regularized Estimator
- Reject Option Classification
- Discrimination-aware Ensemble Classification
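To give a flavor of one mitigation technique, here is a minimal sketch of relabelling (“massaging”) in plain Python. The idea, due to Kamiran and Calders, is to flip the labels of borderline training examples — promoting the highest-ranked negatives in the disadvantaged group and demoting the lowest-ranked positives in the advantaged group — until the gap in positive rates closes. The data, helper names, and use of a raw ranker score are my assumptions, not themis-ml’s implementation.

```python
# Sketch of relabelling ("massaging"), not themis-ml's actual code.
# Each row: (ranker_score, label, group); group "d" is disadvantaged.
data = [
    (0.9, 1, "a"), (0.8, 1, "a"), (0.6, 1, "a"), (0.4, 0, "a"),
    (0.7, 0, "d"), (0.5, 0, "d"), (0.3, 0, "d"), (0.2, 1, "d"),
]

def positive_rate(rows, g):
    """Proportion of positive labels within group g."""
    labels = [label for _, label, grp in rows if grp == g]
    return sum(labels) / len(labels)

def massage(rows, adv="a", dis="d"):
    """Flip one promotion/demotion pair at a time until the
    positive-rate gap between the groups is closed."""
    rows = [list(r) for r in rows]  # work on a mutable copy
    while positive_rate(rows, adv) - positive_rate(rows, dis) > 0:
        # Promote the highest-scored negative in the disadvantaged group.
        promo = max((r for r in rows if r[2] == dis and r[1] == 0),
                    key=lambda r: r[0], default=None)
        # Demote the lowest-scored positive in the advantaged group.
        demo = min((r for r in rows if r[2] == adv and r[1] == 1),
                   key=lambda r: r[0], default=None)
        if promo is None or demo is None:
            break  # no more candidate pairs to flip
        promo[1], demo[1] = 1, 0
    return rows

balanced = massage(data)
```

A classifier trained on the massaged labels then sees equal base rates across groups, at the cost of deliberately mislabelling a few borderline examples.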

Official website

Tutorial and documentation
