LOFO (Leave One Feature Out) Importance calculates the importance of a set of features for a model of choice, using a metric and validation scheme of choice: it iteratively removes each feature from the set, re-evaluates the model's performance, and reports the resulting change in the chosen metric.
LOFO has several advantages compared to other importance types:
It does not favor granular features
It generalises well to unseen test sets
It is model agnostic
It gives negative importance to features that hurt performance upon inclusion
It can group features, which is especially useful for high-dimensional feature sets such as TF-IDF or one-hot encoded features.
It can automatically group highly correlated features to avoid underestimating their importance.
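The core idea above can be sketched without any dedicated library: score the model with all features, then drop one feature at a time and record how much the score changes. This is a minimal, hand-rolled illustration using scikit-learn on synthetic data (the dataset, model, and `roc_auc` metric are arbitrary choices for the example, not prescribed by LOFO):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic binary classification data: 5 features, 3 informative
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           n_redundant=0, random_state=0)

model = LogisticRegression(max_iter=1000)
cv, scoring = 5, "roc_auc"

# Baseline: cross-validated score with the full feature set
base_score = cross_val_score(model, X, y, cv=cv, scoring=scoring).mean()

# Leave one feature out at a time and measure the drop in score.
# A positive importance means the feature helps; a negative one
# means the model does better without it.
importances = {}
for i in range(X.shape[1]):
    X_without = np.delete(X, i, axis=1)
    score = cross_val_score(model, X_without, y, cv=cv, scoring=scoring).mean()
    importances[f"feature_{i}"] = base_score - score

for name, imp in sorted(importances.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {imp:+.4f}")
```

Grouped importance works the same way, except that a whole group of columns (e.g. all TF-IDF columns) is dropped together in each iteration.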