Hi Shikhar,
yes, Captum's implementation of GradientSHAP indeed relies on baselines that have a random component, and on randomly sampled points along the baseline-input line. GradientSHAP further adds white noise to the input points.
To increase the consistency across different runs, I recommend setting the optional parameter n_samples to 20 or 30 when calling .attribute().
Hope this helps.
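For illustration, a minimal sketch of such a call. The toy model, tensor shapes, and target index here are hypothetical placeholders, not from the original thread; only the n_samples parameter is the part the answer above refers to:

```python
import torch
import torch.nn as nn
from captum.attr import GradientShap

# Hypothetical toy model; any nn.Module works here.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()

inputs = torch.randn(2, 4)
# A distribution of baselines; GradientShap samples from these rows randomly.
baselines = torch.randn(10, 4)

gradient_shap = GradientShap(model)
# Raising n_samples (default 5) averages over more random samples,
# which reduces the run-to-run variance of the attributions.
attributions = gradient_shap.attribute(
    inputs,
    baselines=baselines,
    n_samples=30,
    target=0,
)
```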
Issue
A description of the case follows:
Now, GradientShap generates different attributions on every execution of:
Unlike DeepLift and DeepLiftShap, which produce the same attributions on every run of:
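The concrete calls were not preserved above; as a hedged sketch of the kind of comparison being described, with a hypothetical model, inputs, and baselines standing in for the elided code:

```python
import torch
import torch.nn as nn
from captum.attr import DeepLift, GradientShap

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()
inputs = torch.randn(2, 4)
baselines = torch.zeros(2, 4)

# DeepLift is deterministic: repeated runs yield identical attributions.
dl = DeepLift(model)
a1 = dl.attribute(inputs, baselines=baselines, target=0)
a2 = dl.attribute(inputs, baselines=baselines, target=0)
print(torch.allclose(a1, a2))  # True

# GradientShap resamples baselines and interpolation points on each call,
# so repeated runs generally differ.
gs = GradientShap(model)
b1 = gs.attribute(inputs, baselines=baselines, target=0)
b2 = gs.attribute(inputs, baselines=baselines, target=0)
print(torch.allclose(b1, b2))  # typically False
```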