interpretml / interpret

Fit interpretable models. Explain blackbox machine learning.
https://interpret.ml/docs
MIT License
6.22k stars 726 forks

Question about global explanation and local explanation #476

Open JWKKWJ123 opened 1 year ago

JWKKWJ123 commented 1 year ago

Hi all, The show(ebm_global) and show(ebm_local) functions display the global feature importance and local (subject-wise) prediction plots very well, and I like the plots a lot. But I still need to output the global feature importances and local predictions so I can draw plots that suit my needs. Are there functions that output the feature importances and local predictions? By the way, I am confused by the local explanation. Is the number in the red box the output of each feature? Does the contribution to the prediction in the blue box range between 0 and 1? Is the contribution to the prediction related to both the global feature importance and the local prediction?

[attached image: local_explanation]

paulbkoch commented 1 year ago

Hi @JWKKWJ123 --

Global feature importances can be obtained via the term_importances function (terms include both individual features and pairs): https://interpret.ml/docs/ExplainableBoostingClassifier.html#interpret.glassbox.ExplainableBoostingClassifier.term_importances
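
For reference, a minimal sketch of that call on a toy model (term_importances is the documented method linked above; the surrounding scaffolding uses standard interpret/sklearn pieces, but treat the details as assumptions about your installed version):

```python
# Sketch: fit a small EBM and print its global term importances.
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
ebm = ExplainableBoostingClassifier().fit(X, y)

importances = ebm.term_importances()  # one value per term (features and pairs)
for name, value in zip(ebm.term_names_, importances):
    print(f"{name}: {value:.4f}")
```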

Local per-feature score contributions can be obtained with the predict_and_contrib function: https://interpret.ml/docs/ExplainableBoostingClassifier.html#interpret.glassbox.ExplainableBoostingClassifier.predict_and_contrib
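
Continuing the sketch above (predict_and_contrib is the documented call at the link; newer interpret releases expose the same information through eval_terms, so check which your version provides):

```python
# Sketch: per-sample, per-term score contributions for the toy model above.
predictions, contributions = ebm.predict_and_contrib(X)
print(contributions.shape)  # (n_samples, n_terms): each term's contribution
```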

The number in the red box is the value that you assigned to that feature for this sample and passed in via the X parameter to explain_local. Given that all the numbers in your example are between 0 and 1, I suspect you're scaling them.

The contribution in the blue box does not range between 0 and 1. For classification the score contributions are in logits, so a +1 contribution from a single feature would be fairly significant, and it appears that, at least for this model and this particular sample, no feature contributes at that level.
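
To make the logit scale concrete, a sketch using the arrays from above (the intercept_ handling and the returned scale of the contributions are assumptions; for binary classifiers intercept_ may be a length-1 array):

```python
# Sketch: a binary EBM's probability is the sigmoid of the summed logits.
import numpy as np

logit = float(np.ravel(ebm.intercept_)[0]) + contributions[0].sum()
probability = 1.0 / (1.0 + np.exp(-logit))
print(probability)  # should roughly match ebm.predict_proba(X[:1])[0, 1]
```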

JWKKWJ123 commented 12 months ago

Hi Paul, Thank you so much! I hadn't noticed such a comprehensive tutorial for this package before. I did indeed apply a sigmoid activation to the features (range [0, 1]). We applied the EBM to the diagnosis of dementia from brain MRI and put it on arXiv: https://arxiv.org/abs/2308.07778

JWKKWJ123 commented 11 months ago

Hi Paul, By reading the tutorial I found that I can use the interpret.preserve() function to save the global explanation to an HTML file. However, I am wondering whether I can save the local explanation of each subject to an HTML/PNG/JPG file? As far as I know, interpret.preserve() can't do that.
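
For reference, a minimal sketch of the preserve workflow mentioned here (assuming `ebm` is an already-fitted ExplainableBoostingClassifier, and that preserve accepts a file_name argument as in the docs of that era):

```python
# Sketch: save a global explanation to HTML with interpret.preserve.
from interpret import preserve

ebm_global = ebm.explain_global()
preserve(ebm_global, file_name="global_explanation.html")
```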

Harsha-Nori commented 11 months ago

Hey @JWKKWJ123, I left some instructions on doing custom image exports here: https://github.com/interpretml/interpret/issues/161#issuecomment-689906508

You can use any of the plotly-supported image export formats via the kaleido library (which I think includes PNG, HTML, PDF, SVG, etc.)
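
A hedged sketch of that approach (assuming the fitted `ebm` and feature matrix `X` from the earlier sketches, and that the explanation's visualize() returns a plotly figure, which is what the linked comment relies on):

```python
# Sketch: export one sample's local explanation via kaleido.
# Requires `pip install kaleido` for the image formats.
ebm_local = ebm.explain_local(X[:5])
fig = ebm_local.visualize(0)    # plotly Figure for the first sample
fig.write_image("local_0.png")  # kaleido handles PNG/PDF/SVG
fig.write_html("local_0.html")  # HTML export comes from plotly itself
```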

JWKKWJ123 commented 11 months ago

Hi Harsha, Thank you very much! Now I can output the global and local explanations in HTML/figure format. I found that I can do it in a local environment, but not in Google Colab; it seems that kaleido is incompatible with Google Colab (I can't figure out the reason).

sarmedwahab commented 11 months ago

Does anyone know how to set the label size in EBM explanation plots? I am using them for research, but the plot labels are very small, making them unreadable in a document at 100% resolution.

JWKKWJ123 commented 11 months ago

> Does anyone know how to set the label size in EBM explanation plots? I am using them for research, but the plot labels are very small, making them unreadable in a document at 100% resolution.

Actually I have the same question; the features in my experiment have long names. Currently I use ebm.term_importances() (after training the EBM) to output the feature importances and then draw the plots myself with the seaborn package. I also want to ask how to set the font and size of labels in EBM explanation plots?
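
A sketch of that seaborn workaround, with explicit control over label sizes (assuming the fitted `ebm` from the sketches above; the plotting choices here are illustrative, not part of interpret):

```python
# Sketch: custom term-importance plot with seaborn, with readable labels.
import matplotlib.pyplot as plt
import seaborn as sns

names = ebm.term_names_
importances = ebm.term_importances()

fig, ax = plt.subplots(figsize=(8, 6))
sns.barplot(x=importances, y=names, ax=ax)       # horizontal bars fit long names
ax.set_xlabel("Mean absolute score", fontsize=14)
ax.tick_params(axis="both", labelsize=12)        # enlarge tick labels
fig.tight_layout()
fig.savefig("term_importances.png", dpi=300)     # high dpi for print documents
```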

sarmedwahab commented 11 months ago

I actually reached out to some researchers in my field who have used EBM plots and had good-resolution plots in their articles; they referred me to the plotly and matplotlib API docs.
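
One concrete route along those lines (a sketch of the plotly approach, not an official interpret API, assuming the fitted `ebm` from the sketches above): the explanation's visualize() returns a plotly figure whose fonts and export resolution can be adjusted before saving.

```python
# Sketch: enlarge fonts on an EBM plot and export at higher resolution.
# Requires kaleido for write_image.
fig = ebm.explain_global().visualize(0)  # plotly Figure for the first term
fig.update_layout(font=dict(size=18))    # larger axis/tick/legend fonts
fig.write_image("term0.png", scale=3)    # scale > 1 increases pixel resolution
```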