Anchor

An anchor explanation is a rule that sufficiently “anchors” the prediction locally – such that changes to the rest of the feature values of the instance do not matter. In other words, for instances on which the anchor holds, the prediction is (almost) always the same. At the moment, we support explaining individual predictions […]
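
To make this concrete, here is a minimal sketch of computing an anchor for a tabular model with Alibi's AnchorTabular explainer. The dataset, classifier, and the 0.95 precision threshold are illustrative choices, not part of the original text:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

# Train any classifier; Alibi only needs access to its prediction function.
data = load_iris()
X, y = data.data, data.target
clf = RandomForestClassifier(random_state=0).fit(X, y)

# The explainer treats the model as a black box via this predict function.
predict_fn = lambda x: clf.predict(x)

explainer = AnchorTabular(predict_fn, feature_names=data.feature_names)
explainer.fit(X, disc_perc=(25, 50, 75))  # discretize numerical features into quartiles

explanation = explainer.explain(X[0], threshold=0.95)
print('Anchor:   ', ' AND '.join(explanation.anchor))  # the rule that "anchors" the prediction
print('Precision:', explanation.precision)             # share of perturbed instances keeping the same prediction
print('Coverage: ', explanation.coverage)              # share of the perturbation space the rule applies to
```

The anchor comes back as a set of feature conditions; as long as an instance satisfies them, the model's prediction is (almost) always the same, regardless of the remaining feature values.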

Alibi 

Alibi is designed to help explain the predictions of machine learning models and gauge the confidence of those predictions. The library aims to support the widest possible range of models using black-box methods. The open-source project's goal is to increase its capabilities for inspecting the performance of models with respect to concept drift […]
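
As a rough illustration of gauging prediction confidence, the sketch below uses Alibi's TrustScore; the scikit-learn model, dataset, and default parameters are assumptions made for this example:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from alibi.confidence import TrustScore

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
preds = clf.predict(X_test)

# Fit the trust scorer on the training data, then score the test predictions:
# a higher score means the instance lies closer to training points of the
# predicted class than to those of any other class.
ts = TrustScore()
ts.fit(X_train, y_train, classes=3)
score, closest_class = ts.score(X_test, preds)
print(score[:5], closest_class[:5])
```

Low trust scores flag predictions worth a second look, which complements the explanation methods described above.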
