IlyaTyagin opened 1 year ago
cc @RBendias
Only the methods Saliency, InputXGradient, Deconvolution, FeatureAblation, ShapleyValueSampling, IntegratedGradients, GradientShap, Occlusion, GuidedBackprop, KernelShap, and Lime work at the moment. For DeepLift we need batch support, which we are currently working on.
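For reference, a minimal sketch of selecting one of the supported methods (`CaptumExplainer` accepts the Captum attribution method by name; the method chosen here is just an example):

```python
from torch_geometric.explain import CaptumExplainer

# Any of the supported methods can be selected by its Captum class name:
algorithm = CaptumExplainer('IntegratedGradients')  # or 'Saliency', 'Occlusion', ...
```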
Got it, thanks. I'm particularly interested in checking DeepLift, because the Captum documentation claims it runs faster than IntegratedGradients while producing comparable results. I'm running large-scale explainability experiments, so runtime is crucial.
Hello,
Just to mention that I am also interested in this feature (specifically GradCAM). Do you have an idea when this could be handled?
Thank you very much
We are trying to make CaptumExplainer feature-complete by the PyG 2.3 release (March 21).
Hello! Sorry to bother you again with this, but is there any news here?
🐛 Describe the bug
When I try to use the DeepLift explainability method from Captum, I get an AssertionError related to the dimensionality of the input mask.
Code to reproduce the error is taken from the captum_explainability example.

Training part:
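A minimal training sketch, loosely following the captum_explainability example (the two-layer GCN on Cora and the hyperparameters below are assumptions, not the verbatim script):

```python
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

dataset = Planetoid(root='data/Planetoid', name='Cora')
data = dataset[0]

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(dataset.num_features, 16)
        self.conv2 = GCNConv(16, dataset.num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return F.log_softmax(self.conv2(x, edge_index), dim=-1)

model = GCN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

model.train()
for epoch in range(200):
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.nll_loss(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()
```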
Integrated Gradients works just fine:
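Roughly, the working call looks like this (a sketch reusing `model` and `data` from the training part; `index=10` is an arbitrary node):

```python
from torch_geometric.explain import Explainer, CaptumExplainer

explainer = Explainer(
    model=model,
    algorithm=CaptumExplainer('IntegratedGradients'),
    explanation_type='model',
    node_mask_type='attributes',
    edge_mask_type='object',
    model_config=dict(
        mode='multiclass_classification',
        task_level='node',
        return_type='log_probs',
    ),
)
explanation = explainer(data.x, data.edge_index, index=10)  # runs fine
```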
DeepLift part (doesn't work):
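The same setup with the method swapped to 'DeepLift' is what triggers the AssertionError (again a sketch, not the verbatim example code):

```python
from torch_geometric.explain import Explainer, CaptumExplainer

explainer = Explainer(
    model=model,
    algorithm=CaptumExplainer('DeepLift'),  # only change vs. the working call
    explanation_type='model',
    node_mask_type='attributes',
    edge_mask_type='object',
    model_config=dict(
        mode='multiclass_classification',
        task_level='node',
        return_type='log_probs',
    ),
)
explanation = explainer(data.x, data.edge_index, index=10)  # raises AssertionError
```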
Full traceback:
Environment

How you installed PyTorch and PyG (conda, pip, source): source