shap / shap

A game theoretic approach to explain the output of any machine learning model.
https://shap.readthedocs.io
MIT License

Do the following SHAP explanations make sense for a well-performing MLP-based image classifier? #2619

Closed · fangzhouli closed this issue 2 weeks ago

fangzhouli commented 2 years ago

Hi,

I am experimenting with some ideas and came up with the following scenario. I used a multilayer perceptron (MLP) for MNIST image classification. The model performs reasonably well, with 92% accuracy on both the training set (N=60,000) and the test set (N=10,000). The explanations look like the following.

My question: although an MLP is not the best model for image classification, given that the model performs relatively well, do the following explanations look normal?

Explanations: [image]

Other information:
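For context, here is a minimal sketch of this kind of setup. The original post includes no code, so the choices below are assumptions: scikit-learn's `MLPClassifier`, `shap.KernelExplainer` with a `shap.kmeans` background, and the small 8x8 digits dataset as a lightweight stand-in for full MNIST.

```python
# Minimal sketch of an MLP classifier explained with SHAP.
# Assumptions: sklearn's 8x8 digits dataset stands in for MNIST,
# and KernelExplainer stands in for whatever explainer the poster used.
import numpy as np
import shap
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel intensities to [0, 1]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# Summarize the background data with k-means to keep KernelExplainer tractable.
background = shap.kmeans(X_train, 20)
explainer = shap.KernelExplainer(clf.predict_proba, background)

# Explain a handful of test images (KernelExplainer is slow, so keep it small).
sv = explainer.shap_values(X_test[:5])

# Older shap versions return a list of per-class arrays; newer ones return a
# single (n_samples, n_features, n_classes) array. Normalize to a list.
if isinstance(sv, np.ndarray) and sv.ndim == 3:
    sv = [sv[..., i] for i in range(sv.shape[-1])]

# Reshape flat 64-pixel attributions back into 8x8 images for plotting.
shap.image_plot(
    [s.reshape(-1, 8, 8, 1) for s in sv],
    X_test[:5].reshape(-1, 8, 8, 1),
)
```

KernelExplainer is model-agnostic but slow; for an image model built in PyTorch or TensorFlow, `shap.DeepExplainer` or `shap.GradientExplainer` would be the more usual choice.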

github-actions[bot] commented 3 months ago

This issue has been inactive for two years, so it's been automatically marked as 'stale'.

We value your input! If this issue is still relevant, please leave a comment below. This will remove the 'stale' label and keep it open.

If there's no activity in the next 90 days the issue will be closed.

github-actions[bot] commented 2 weeks ago

This issue has been automatically closed due to lack of recent activity.

Your input is important to us! Please feel free to open a new issue if the problem persists or becomes relevant again.