-
**Problem Description:**
The placement of the icons in the advanced search is confusing.
**Expected Behavior/Solution:**
Move the icons to the row above, or next to the relationsh…
-
Thanks for your excellent work! I have two questions about the temporal feature aggregation:
1. Does the AFFM use deformable attention, or only the original attention?
2. Does the AFFM share paramet…
-
Missing key(s) in state_dict: "lstm.lstm.weight_ih_l0_reverse", "lstm.lstm.weight_hh_l0_reverse", "lstm.lstm.bias_ih_l0_reverse", "lstm.lstm.bias_hh_l0_reverse", "output_layers.0.weight", "output_laye…
-
RuntimeError: Error(s) in loading state_dict for ConvLSTM:
Missing key(s) in state_dict: "lstm.lstm.weight_ih_l0_reverse", "lstm.lstm.weight_hh_l0_reverse", "lstm.lstm.bias_ih_l0_reverse", "l…
-
Hi,
I used concat_paired_end.pl to combine my forward and reverse files (each 16 G); both files had been trimmed with kneaddata. However, the concatenated file only has a size …
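One quick sanity check for a concatenated file that looks too small is to count FASTQ records before and after combining. A minimal sketch, assuming standard 4-line FASTQ records (the file names below are placeholders, not the actual paths):

```python
def count_fastq_reads(path):
    """Count reads in a FASTQ file, assuming the standard 4-line record format."""
    with open(path) as fh:
        return sum(1 for _ in fh) // 4

# Placeholder file names — substitute the real kneaddata outputs:
# fwd = count_fastq_reads("sample_R1_kneaddata.fastq")
# rev = count_fastq_reads("sample_R2_kneaddata.fastq")
# combined = count_fastq_reads("sample_concat.fastq")
# If combined != fwd + rev, reads were dropped during concatenation.
```

If the combined count falls short of the sum of the inputs, the problem is in the concatenation step rather than in the trimming.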
-
Hi,
For the gradients list in this function: https://github.com/jacobgil/vit-explain/blob/main/vit_grad_rollout.py#L9
do we need to reverse the gradients? Since the attention is accumulated in the fo…
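The reason the question arises: in PyTorch, backward hooks generally fire in the reverse order of the forward pass, so a gradients list collected that way is in reverse layer order relative to attentions recorded by forward hooks. A toy illustration of the pairing (the strings are stand-ins, not actual tensors or vit-explain internals):

```python
# Stand-ins for hook outputs: forward hooks append in layer order,
# backward hooks append in reverse layer order.
attentions = ["attn_layer0", "attn_layer1", "attn_layer2"]
gradients = ["grad_layer2", "grad_layer1", "grad_layer0"]

# Reversing one list aligns each attention with the gradient of the same layer.
pairs = list(zip(attentions, reversed(gradients)))
```

Whether the reversal is needed in practice depends on how the hooks in the repo store their outputs, so it is worth checking the hook registration order before assuming either way.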
-
As the title says, I'm looking for people who are willing to co-maintain this package. As of now, this package has managed to cross 2k+ downloads/week on NPM and as such, with the influx of consumers …
-
This line may reverse the weights: when computing max − attention, the positions with the maximum attention weight become zero. I also could not find any explanation in the paper. Why add this line?
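Whatever the paper's motivation, the numerical effect of a "max minus attention" transform can be shown directly: it inverts the ordering of the weights, and the most-attended position maps exactly to zero. A small numpy illustration (toy values, not the repo's tensors):

```python
import numpy as np

# Toy attention weights for three positions.
attn = np.array([0.1, 0.7, 0.2])

# Subtracting every weight from the maximum inverts the ranking:
# the position that had the largest weight now gets exactly 0.
inverted = attn.max() - attn
```

So if the intent was to emphasize high-attention positions, this line does the opposite, which is presumably what prompted the question.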
-
There seems to be a mismatch between the pre-trained model and the model defined in the code files.
RuntimeError: Error(s) in loading state_dict for ConvLSTM:
Missing key(s) in state_dict: "lstm.lstm.weight…
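The missing `*_reverse` keys are the usual signature of a checkpoint saved from a bidirectional LSTM being loaded into a unidirectional one, since only `bidirectional=True` creates the reverse-direction parameters. A minimal sketch reproducing the symptom (the layer sizes are arbitrary, not the repo's actual configuration):

```python
import torch.nn as nn

# A checkpoint saved from a bidirectional LSTM contains *_reverse parameters
# that a unidirectional LSTM does not expect.
bi = nn.LSTM(input_size=8, hidden_size=16, bidirectional=True)
uni = nn.LSTM(input_size=8, hidden_size=16, bidirectional=False)

# These are exactly the kinds of keys reported missing in the error above.
missing = set(bi.state_dict()) - set(uni.state_dict())

# Fix: build the model with the same hyperparameters used at training time
# (here, bidirectional=True) before calling load_state_dict.
fixed = nn.LSTM(input_size=8, hidden_size=16, bidirectional=True)
fixed.load_state_dict(bi.state_dict())  # loads cleanly
```

The similarly missing `output_layers.*` keys suggest the checkpoint also contains head layers that the current model definition lacks, so comparing the saved hyperparameters against the model constructor is the first thing to check.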
-
I used the command "python -m SeqSNN.entry.tsforecast ./exp/forecast/ispikformer/ispikformer_metr-la.yml", but the forecast values I got are basically small decimals, which does not corresp…
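One common cause of forecasts that come out as small decimals is that the model's outputs are still in normalized space and need to be mapped back to the original scale. A hedged sketch of undoing a z-score normalization (the values and the assumption that SeqSNN's pipeline uses this scheme are illustrative, not confirmed from the code):

```python
import numpy as np

# Toy training data on a METR-LA-like scale (traffic speeds around 60 mph).
train = np.array([55.0, 62.0, 58.0, 65.0])
mean, std = train.mean(), train.std()

# Predictions in normalized (z-score) space look like small decimals...
pred_normalized = np.array([-0.2, 0.5, 1.1])

# ...until they are inverse-transformed back to the original units.
pred = pred_normalized * std + mean
```

If the evaluation script already inverse-transforms internally, then the small decimals printed earlier in the run may simply be the normalized intermediate values rather than the final forecasts.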