1. Post-hoc interpretation: Given a black-box model trained to solve a supervised learning problem (X → Y, where X is the input and Y is the output), post-hoc interpretation can be thought of as a function g that takes the data D and the predictive model f as inputs. The function g returns a visual or textual representation that helps in understanding the inner workings of the model, or why one outcome is favored over another. This is also referred to as inspecting the black box, or reverse engineering (see the first sketch after this list).
2. Natively interpretable models: Given a supervised learning problem, the predictive model (which doubles as the explanator function) has a transparent design and is interpretable both globally and locally without any further explanation (see the second sketch after this list).
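
As a concrete illustration of the first definition, here is a minimal sketch of a post-hoc explanation function g(D, model), using permutation feature importance as one possible inspection technique. The function name g, the iris dataset, and the random forest black box are illustrative assumptions, not part of the definition above:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def g(model, X, y, n_repeats=10, seed=0):
    """Post-hoc explanator: returns a textual representation of how much
    the black box relies on each feature (permutation importance)."""
    rng = np.random.default_rng(seed)
    baseline = model.score(X, y)
    report = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # destroy feature j's signal
            drops.append(baseline - model.score(X_perm, y))
        report.append((j, float(np.mean(drops))))
    return "\n".join(f"feature {j}: importance {d:.3f}" for j, d in report)

# Train an arbitrary black box, then explain it post hoc.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(g(black_box, X_te, y_te))
```

Note that g never looks inside the random forest; it only queries predictions, which is what makes the technique model-agnostic.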
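For the second definition, a shallow decision tree is one common example of a transparent design. The sketch below (dataset and depth are illustrative choices) shows how the fitted model doubles as its own explanation, both globally (the full rule set) and locally (the decision path for a single sample):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Global interpretation: the complete set of if/then rules the model uses.
print(export_text(tree, feature_names=load_iris().feature_names))

# Local interpretation: the exact path of nodes taken for one prediction.
node_indicator = tree.decision_path(X[:1])
print("nodes visited for sample 0:", node_indicator.indices.tolist())
```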