pytorch / captum

Model interpretability and understanding for PyTorch
https://captum.ai
BSD 3-Clause "New" or "Revised" License

Layer Attribution for Denoising #1121

Open dajtmullaj opened 1 year ago

dajtmullaj commented 1 year ago

Hello! I really love captum and thank you for creating such a tool. I am currently interested in computing a Layer Attribution (using Conductance) for a UNet. However, I use the model for a denoising task, so the output is an image with no relation to class labels. I saw an issue regarding a segmentation task where the model output was wrapped with a sum. Would the same approach work in my case (interpreting the "class" as the sum of the output pixels)?

aobo-y commented 11 months ago

@dajtmullaj sorry for our late reply.

Captum does not require your model to be a classification model. You can pass in any custom forward_func whose inputs are the features you want to explain and whose output is a tensor representing the target you want to attribute with respect to.

For your case, if you want to explain the whole image, you should use a sum, so the attribution would explain the sum of all the pixels of the output image. You can also select a specific pixel, or sum over a segment of pixels, if you aim to explain that single pixel or segment. You can even return a loss computed from the output image. It is entirely up to you what you want to explain (see the sketch below).
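
A minimal sketch of this idea, not from the thread: `TinyDenoiser` is a hypothetical stand-in for the user's UNet, and the wrapper reduces the denoised output to one scalar per example (the sum of all output pixels) so that `LayerConductance` can attribute it to a chosen layer.

```python
import torch
import torch.nn as nn
from captum.attr import LayerConductance

# Hypothetical stand-in for the user's UNet; any image-to-image model works here.
class TinyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(3, 16, 3, padding=1)
        self.dec = nn.Conv2d(16, 3, 3, padding=1)

    def forward(self, x):
        return self.dec(torch.relu(self.enc(x)))

model = TinyDenoiser().eval()

# Wrap the model so the output is a single scalar per example:
# here the sum over all output pixels, i.e. "explain the whole image".
def forward_func(x):
    out = model(x)                 # (N, C, H, W) denoised image
    return out.sum(dim=(1, 2, 3))  # (N,) scalar target per example

# To explain a single output pixel instead, index it before reducing, e.g.:
# def forward_func(x):
#     return model(x)[:, 0, 10, 10]

noisy = torch.rand(1, 3, 32, 32)

# Attribute the scalar target to the activations of the encoder layer.
lc = LayerConductance(forward_func, model.enc)
attributions = lc.attribute(noisy, baselines=torch.zeros_like(noisy))
print(attributions.shape)  # matches the encoder's activation shape: (1, 16, 32, 32)
```

Since the wrapped output is already a single scalar per example, no `target` argument is needed in `attribute`; swapping the sum for a pixel index or a loss only changes what the attributions explain, not how they are computed.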