-
It seems that Integrated Gradients, DeepLIFT, etc. compute the gradient of the **logits** w.r.t. the input.
Why not use the probabilities after softmax? Are there any differences between the two?
Thanks
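For illustration, a minimal sketch of attributing either the logits or the post-softmax probabilities with Captum's `IntegratedGradients`, by wrapping the forward pass; the small classifier here is a placeholder, not any particular model from this thread:

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Placeholder classifier returning raw logits of shape (N, num_classes).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
model.eval()

# Attribution w.r.t. the logits: pass the model itself.
ig_logits = IntegratedGradients(model)

# Attribution w.r.t. the softmax probabilities: wrap the forward function.
def forward_probs(x):
    return torch.softmax(model(x), dim=1)

ig_probs = IntegratedGradients(forward_probs)

x = torch.randn(4, 20)
attr_logits = ig_logits.attribute(x, target=1)
attr_probs = ig_probs.attribute(x, target=1)
```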
-
Hi, I am trying to use DeepLIFT to visualise which regions of a sequence are most important for the prediction.
As a reference I used a dinucleotide-shuffled version of each normal sequence.
After calc…
-
According to the DeepLIFT paper, the convergence delta should be zero when the model is a plain DNN without any recurrent structures (such as LSTMs) and the activation function is ReLU. However, when I try…
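For reference, Captum's `DeepLift` can return the convergence delta directly; a minimal sketch with a small all-ReLU feed-forward network (the architecture is just an example, not the model from the question):

```python
import torch
import torch.nn as nn
from captum.attr import DeepLift

# Small fully connected ReLU network.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

dl = DeepLift(model)
x = torch.randn(5, 10)
baseline = torch.zeros_like(x)
attributions, delta = dl.attribute(
    x, baselines=baseline, target=0, return_convergence_delta=True
)
print(delta)  # ideally close to zero for a plain ReLU network
```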
-
Hi!
I built a Conv1d neural network to classify 1D data series. The data shape is (N, 1, 5000).
The accuracy is ~98%.
I tried to use Captum to see which portions of the data are most informative. H…
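A minimal sketch of attributing a Conv1d classifier on inputs of shape (N, 1, 5000) with Captum's `DeepLift`; the network below is a stand-in, not the poster's model:

```python
import torch
import torch.nn as nn
from captum.attr import DeepLift

class Conv1dClassifier(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(16, num_classes)

    def forward(self, x):                 # x: (N, 1, 5000)
        h = self.features(x).squeeze(-1)  # (N, 16)
        return self.fc(h)

model = Conv1dClassifier().eval()
x = torch.randn(4, 1, 5000)

dl = DeepLift(model)
# Attribution per time step, same shape as the input.
attr = dl.attribute(x, baselines=torch.zeros_like(x), target=1)
print(attr.shape)  # (4, 1, 5000)
```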
-
I am trying to run the genomics example code on a CNN. My data consists of images of protein sequences.
Here are the neural network layers after converting them to DeepLIFT:
OrderedDict([('input_3_0', ),
…
-
Hi all, I have recently run into trouble when migrating from Captum 0.1.0 to 0.2.0. When I use the attribution function, I pass a sparse tensor as an additional forward argument.
However …
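For context, a minimal sketch of how additional forward arguments are passed to a Captum attribution call; the model is hypothetical and the extra tensor is dense here, whereas the issue concerns a sparse one:

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

class TwoArgModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(10, 2)

    def forward(self, x, extra):
        # `extra` is a second tensor the model consumes but that we
        # do not want attributions for.
        return self.lin(x) + extra.sum(dim=1, keepdim=True)

model = TwoArgModel().eval()
x = torch.randn(3, 10)
extra = torch.randn(3, 5)  # dense stand-in for the sparse tensor

ig = IntegratedGradients(model)
attr = ig.attribute(x, target=0, additional_forward_args=(extra,))
```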
-
Hello, I have trained a multi-input Keras model (all inputs are DNA sequences) that I would like to analyze with deeplift. I have >200,000 examples, and I would like to run deeplift on all of t…
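As a rough sketch of scoring a large number of examples in batches, assuming some per-batch attribution callable; `attribute_fn` is a hypothetical stand-in for whatever DeepLIFT scoring call is used:

```python
import numpy as np

def attribute_in_batches(attribute_fn, inputs, batch_size=256):
    """Run a per-batch attribution callable over a large array of inputs."""
    scores = []
    for start in range(0, len(inputs), batch_size):
        batch = inputs[start:start + batch_size]
        scores.append(attribute_fn(batch))
    return np.concatenate(scores, axis=0)
```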
-
I am trying to use `LayerDeepLift` on multiple layers of a VGG16 model from `torchvision.models`. It works for all layers except the `MaxPool2d` layers.
The following (layer `23` is a `MaxPool2d` la…
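For reference, a minimal sketch of the `LayerDeepLift` setup in question; index `23` is one of the `MaxPool2d` layers in VGG16's feature stack, and weights are omitted to keep the sketch light:

```python
import torch
from torchvision.models import vgg16
from captum.attr import LayerDeepLift

model = vgg16(weights=None).eval()
layer = model.features[23]  # a MaxPool2d layer in VGG16

ldl = LayerDeepLift(model, layer)
x = torch.randn(1, 3, 224, 224)
attr = ldl.attribute(x, baselines=torch.zeros_like(x), target=0)
```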
-
Hello, I would just like a more detailed description of the baseline parameter when using DeepLIFT. Is this something that I need to generate myself, e.g. using shuffled sequences as a reference? …
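For illustration, a minimal sketch of the two usual choices for `baselines` in Captum's `DeepLift`, the all-zeros reference versus user-generated references such as shuffled sequences; the model and tensors below are placeholders:

```python
import torch
import torch.nn as nn
from captum.attr import DeepLift

# Placeholder sequence model: one-hot input of shape (N, 4, 200).
model = nn.Sequential(
    nn.Conv1d(4, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 2),
).eval()

x = torch.randn(8, 4, 200)      # stand-in for one-hot sequences
shuffled = torch.rand_like(x)   # stand-in for shuffled-sequence references

dl = DeepLift(model)
# Zero reference: used by default if `baselines` is omitted.
attr_zeros = dl.attribute(x, baselines=torch.zeros_like(x), target=0)
# User-supplied reference: one shuffled sequence per input, same shape as x.
attr_shuffled = dl.attribute(x, baselines=shuffled, target=0)
```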
-
Hi,
My model uses a bidirectional LSTM, and running an attribution method while the model is in eval mode throws the following error:
**cudnn RNN backward can only be called in training mode**
…
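A common workaround (a sketch, not necessarily the fix adopted in this thread) is to disable cuDNN for the attribution call while keeping the model in eval mode; the small bidirectional LSTM below is a placeholder:

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

class BiLSTMClassifier(nn.Module):
    def __init__(self, in_dim=8, hidden=16, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):        # x: (N, T, in_dim)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])

model = BiLSTMClassifier().cuda().eval()
ig = IntegratedGradients(model)
x = torch.randn(4, 50, 8, device="cuda")

# cuDNN's fused RNN kernels only support backward in training mode, so
# fall back to the non-cuDNN implementation just for the attribution call.
with torch.backends.cudnn.flags(enabled=False):
    attributions = ig.attribute(x, target=1)
```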