interpretml / interpret

Fit interpretable models. Explain blackbox machine learning.
https://interpret.ml/docs
MIT License

EBM Classifier Global Feature Importance x Random Forest Classifier with Morris Sensitivity Analysis #533

Open gatihe opened 3 weeks ago

gatihe commented 3 weeks ago

I'm trying to use InterpretML to identify the most relevant features for a classification problem.

After fitting two different classifiers (an EBM classifier and a random forest classifier) to the same data and getting similar scores, I used InterpretML's explanation functionality to identify the most relevant features in each model.

EBM feature importance uses the weighted mean absolute score, while the random forest is explained with Morris sensitivity analysis. Even though the two models perform very similarly, they rank different features as most relevant.
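Roughly, the setup looks like this (a simplified sketch assuming `X` is a pandas DataFrame of features and `y` the binary labels; the exact `MorrisSensitivity` constructor arguments vary across interpret versions):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from interpret.blackbox import MorrisSensitivity

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Glassbox model: the global explanation is read directly off the fitted model.
ebm = ExplainableBoostingClassifier().fit(X_train, y_train)
show(ebm.explain_global())  # ranks terms by weighted mean absolute score

# Blackbox model: global importance is approximated with Morris sensitivity.
rf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
msa = MorrisSensitivity(rf, X_train)
show(msa.explain_global())
```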

This raises some questions: with such similar performance, why do the two models rank features so differently, and is there a way to tell which ranking to trust?

Best regards

paulbkoch commented 3 weeks ago

Hi @gatihe -- The models tend to "think" differently, and if the performances are similar, it's difficult to choose which model is a better representation of the underlying generative function. At least, I'm not aware of a way to do this. Perhaps @richcaruana has more thoughts on it.

The main benefit you get from using an EBM is that the EBM's global explanations are an exact and complete representation of the model itself, so you aren't getting the approximate explanation that a black box model like a random forest requires. However, EBMs make no guarantees about how well they match the underlying generative function. If the only thing you need is a feature importance metric, then I don't think the exactness of the explanation is a critical aspect.
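To make the "exact and complete" point concrete, here's a small check (assuming a fitted binary `ebm` on an interpret version that exposes `eval_terms`): the per-term contributions plus the intercept reproduce the model's logit, so the global explanation is the model rather than an approximation of it.

```python
import numpy as np

# Per-sample contribution of each term, shape (n_samples, n_terms).
per_term = ebm.eval_terms(X_test)

# Summing the term contributions and adding the intercept recovers the
# model's logit -- nothing is lost in the explanation.
logits = per_term.sum(axis=1) + ebm.intercept_
assert np.allclose(logits, ebm.decision_function(X_test))
```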

There are also multiple ways to measure feature importance, which is another thing to consider in your scenario. We offer the mean absolute score and the min-max score within the interpret package, but you can also calculate other alternatives yourself, such as the change in AUC when each feature is removed. Each of these feature importance metrics tells you something different about your model and data.
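For example, a drop-column variant of the AUC metric might look like this (a sketch assuming pandas DataFrames and scikit-learn; `drop_column_auc_importance` is just an illustrative name, and retraining once per feature is expensive but model-agnostic):

```python
from sklearn.base import clone
from sklearn.metrics import roc_auc_score

def drop_column_auc_importance(model, X_train, y_train, X_test, y_test):
    """Change in test AUC when each column is dropped and the model refit."""
    base = clone(model).fit(X_train, y_train)
    base_auc = roc_auc_score(y_test, base.predict_proba(X_test)[:, 1])
    importances = {}
    for col in X_train.columns:
        reduced = clone(model).fit(X_train.drop(columns=col), y_train)
        auc = roc_auc_score(
            y_test, reduced.predict_proba(X_test.drop(columns=col))[:, 1]
        )
        importances[col] = base_auc - auc  # larger drop => more important
    return importances
```

(For the two built-in EBM metrics, recent interpret versions also expose them directly via `ebm.term_importances("avg_weight")` and `ebm.term_importances("min_max")`.)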