inouye-lab / ShapleyExplanationNetworks

Implementation of the paper "Shapley Explanation Networks"
MIT License

Is it possible to explain a black-box model with Shapley networks? #3

Open unnir opened 2 years ago

unnir commented 2 years ago

Is it possible to explain a black-box model with Shapley networks? Suppose I have a trained CNN model and I want to get feature attributions for a single image from the ImageNet dataset.

RuiWang1998 commented 2 years ago

Hi,

Thanks for your question. ShapNet is an intrinsic explanation model: by design, it explains its own predictions, and only its own predictions.

However, you could train a ShapNet that mimics the behavior of your model of choice. This could work in theory, but we don't know how well it would perform in practice. A rough sketch of the idea is below.
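For concreteness, here is a minimal sketch of that distillation setup in PyTorch. The `black_box` and `surrogate` modules are placeholder stand-ins for illustration only; in practice, `surrogate` would be a ShapNet built from this repo's modules (the exact constructor may differ), and `x` would come from real training images.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder for your trained black-box CNN (frozen teacher).
black_box = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
for p in black_box.parameters():
    p.requires_grad_(False)

# Placeholder for the ShapNet surrogate (student); replace with an
# actual ShapNet so its intrinsic attributions carry over.
surrogate = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

# Distill: train the surrogate to match the teacher's soft predictions
# on inputs drawn from the data distribution of interest.
for step in range(1000):
    x = torch.randn(64, 3, 32, 32)  # replace with real images
    with torch.no_grad():
        teacher_logits = black_box(x)
    student_logits = surrogate(x)
    loss = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Once the surrogate matches the teacher closely enough, its intrinsic Shapley attributions serve as approximate explanations of the black box, with the caveat above that we don't know how faithful they would be.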

Best.

unnir commented 2 years ago

Thank you for your answer! Also, I like the paper :+1: