🐛 Bug
GradientShap (`captum.attr.GradientShap.attribute`), which is an extension of Integrated Gradients, needs an `internal_batch_size` argument just like `IntegratedGradients`. Currently, using any large value for `n_samples` results in out-of-memory errors, because the input is stacked `n_samples` times. The same kind of issue is already fixed in `IntegratedGradients` via the `internal_batch_size` argument.
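
A minimal sketch of the problem and of the existing workaround in `IntegratedGradients`, assuming a toy linear model (the model, tensor sizes, and the `n_samples` value here are illustrative, not from the report; on a real model and GPU a large `n_samples` exhausts memory):

```python
import torch
from captum.attr import GradientShap, IntegratedGradients

# Toy model and inputs (sizes are illustrative assumptions)
net = torch.nn.Sequential(torch.nn.Linear(1000, 10))
inputs = torch.randn(64, 1000)
baselines = torch.zeros(64, 1000)

# GradientShap stacks the input batch n_samples times internally,
# so peak memory grows linearly with n_samples, and there is no
# internal_batch_size argument to chunk the expanded batch.
gs = GradientShap(net)
attrs = gs.attribute(inputs, baselines=baselines, target=0, n_samples=500)

# IntegratedGradients performs a similar expansion (n_steps copies)
# but already exposes internal_batch_size to process it in chunks.
ig = IntegratedGradients(net)
attrs = ig.attribute(inputs, baselines=baselines, target=0,
                     n_steps=500, internal_batch_size=64)
```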