pytorch / captum

Model interpretability and understanding for PyTorch
https://captum.ai
BSD 3-Clause "New" or "Revised" License

GradientShap's `attribute` method `baselines` argument should be None #253

Closed: josejimenezluna closed this issue 4 years ago

josejimenezluna commented 4 years ago

https://github.com/pytorch/captum/blob/5231c2ec8c59f7ca48a7b421adb412ccf21c8364/captum/attr/_core/gradient_shap.py#L16-L35

According to the docs, the `baselines` parameter of GradientShap's `attribute` method is optional and, if not provided, is replaced with a zero-filled tensor of the same size as the input. However, at the moment it is a required argument.
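A minimal sketch of the mismatch, assuming a toy model (the model and shapes are illustrative, not from the issue):

```python
import torch
import torch.nn as nn
from captum.attr import GradientShap

model = nn.Sequential(nn.Linear(3, 2))  # toy model, for illustration only
gs = GradientShap(model)
inputs = torch.randn(4, 3)

# Per the docstring, omitting `baselines` should fall back to a zero tensor,
# but attribute() fails because `baselines` has no default value.
attributions = gs.attribute(inputs, target=0)  # raises TypeError
```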

NarineK commented 4 years ago

Hi @josejimenezluna, thank you for finding the issue. It is a bug in the documentation. We decided to make `baselines` a mandatory argument for GradientSHAP so that the user specifies the distribution of baselines through that variable. If we defaulted it to zero, it might not be a very meaningful distribution.
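For illustration, a sketch of the intended usage, where `baselines` carries a distribution of reference samples rather than a single zero tensor (the model and the random baseline set are assumptions, not part of the thread):

```python
import torch
import torch.nn as nn
from captum.attr import GradientShap

model = nn.Sequential(nn.Linear(3, 3), nn.ReLU(), nn.Linear(3, 2))
model.eval()

inputs = torch.randn(4, 3)

# A set of baseline samples; GradientSHAP randomly draws from this
# tensor, so it acts as the baseline *distribution* NarineK describes.
baselines = torch.randn(20, 3)

gs = GradientShap(model)
attributions = gs.attribute(inputs, baselines=baselines, target=0)
```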

NarineK commented 4 years ago

Fixed with #256