-
Hi, I was looking at the attention section of the tutorial, and it says the attention vector = **tanh**( Wc [c; h] ). The attention wrappers in the seq2seq library use the attention_layer to perform an affine…
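For reference, a minimal NumPy sketch of that formula; all sizes here are made up for illustration:

```python
import numpy as np

# Hypothetical sizes, just for illustration.
ctx_dim, hid_dim, attn_dim = 8, 8, 8
c = np.random.randn(ctx_dim)      # context vector from the encoder
h = np.random.randn(hid_dim)      # current decoder hidden state
W_c = np.random.randn(attn_dim, ctx_dim + hid_dim)

# attention vector = tanh(Wc [c; h]) -- the affine map is the matmul here.
attention_vector = np.tanh(W_c @ np.concatenate([c, h]))
```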
-
I used attention here:
```
def decoding_layer(dec_input, encoder_outputs, encoder_state, source_sequence_length,
                   target_sequence_length, max_target_sequence_length,
                   …
```
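For context, here is a self-contained sketch of how a decoding layer like this can wire in attention via tf.contrib.seq2seq (TF 1.x). The layer sizes, the LuongAttention choice, and the training helper are my assumptions, not the original code:

```python
import tensorflow as tf  # TF 1.x, where tf.contrib.seq2seq is available

def decoding_layer_with_attention(dec_embed_input, encoder_outputs, encoder_state,
                                  source_sequence_length, target_sequence_length,
                                  rnn_size, vocab_size, batch_size):
    """Hypothetical sketch: wrap the decoder cell with Luong attention."""
    attention = tf.contrib.seq2seq.LuongAttention(
        num_units=rnn_size,
        memory=encoder_outputs,
        memory_sequence_length=source_sequence_length)

    cell = tf.contrib.seq2seq.AttentionWrapper(
        tf.nn.rnn_cell.LSTMCell(rnn_size), attention,
        attention_layer_size=rnn_size)

    # The wrapped cell has its own state tuple; seed it with the encoder state.
    initial_state = cell.zero_state(batch_size, tf.float32).clone(
        cell_state=encoder_state)

    helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input,
                                               target_sequence_length)
    decoder = tf.contrib.seq2seq.BasicDecoder(
        cell, helper, initial_state,
        output_layer=tf.layers.Dense(vocab_size))
    outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder)
    return outputs
```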
-
Hi,
I downloaded the model from https://huggingface.co/microsoft/codebert-base/tree/main and am using it to run inference (without fine-tuning), but I am unable to load the model file pytorch_model.bin as th…
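For what it's worth, a minimal sketch of loading CodeBERT for inference through the transformers API, which resolves the checkpoint files itself rather than opening pytorch_model.bin by hand:

```python
from transformers import AutoModel, AutoTokenizer

# Let the library download/resolve the checkpoint instead of loading
# pytorch_model.bin manually.
tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

inputs = tokenizer("def add(a, b): return a + b", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # [1, seq_len, 768]
```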
-
What I ran were the following lines, which are exactly the example on the tutorial page (NMT):
```
ad26kr@ubuntu:~/utils/seq2seq$ python -m bin.train --config_paths="
./example_configs/nmt_small…
```
-
OS: macOS Sierra version 10.12.5
TensorFlow Version: v1.2.0-rc2-21-g12f033d 1.2.0
This is related to tensorflow.contrib.seq2seq. I would like the ability to visualize the attention weights of the …
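If it helps, tf.contrib.seq2seq already records the weights when you pass alignment_history=True to the AttentionWrapper; a self-contained sketch with made-up sizes:

```python
import tensorflow as tf  # TF 1.x

batch, src_len, dec_len, units = 2, 7, 5, 16
encoder_outputs = tf.random_normal([batch, src_len, units])
dec_inputs = tf.random_normal([batch, dec_len, units])

attention = tf.contrib.seq2seq.LuongAttention(units, encoder_outputs)
# alignment_history=True stores each step's attention distribution.
cell = tf.contrib.seq2seq.AttentionWrapper(
    tf.nn.rnn_cell.LSTMCell(units), attention, alignment_history=True)

helper = tf.contrib.seq2seq.TrainingHelper(dec_inputs, tf.fill([batch], dec_len))
decoder = tf.contrib.seq2seq.BasicDecoder(
    cell, helper, cell.zero_state(batch, tf.float32))
_, final_state, _ = tf.contrib.seq2seq.dynamic_decode(decoder)

# Shape [dec_len, batch, src_len]: one distribution over source positions
# per decoder step, ready to plot as a heat map.
alignments = final_state.alignment_history.stack()
```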
-
Hi, thanks for sharing this wonderful work.
In a3c_gcn_seq2seq/net.py, in the forward(...) method of class Decoder, there is `context, attention = self.att(hidden_state, encoder_outputs, mask)`, but the context is not u…
-
Hi,
Recently, I have been researching keyphrase generation. Usually, people use a seq2seq model with attention to deal with this kind of problem. Specifically, I use the framework: https://github.com/memray/se…
-
I am seeing a problem somewhat similar to [#170](https://github.com/google/seq2seq/issues/170) but slightly different. In my case:
* I was able to train a character-level NMT model without problems. B…
-
I'm following the tutorial (https://google.github.io/seq2seq/nmt/), but I can't run the code.
Actually, I don't understand the code explained in the tutorial. Do I have to type all those arguments…
-
Any ideas on how to incorporate the attention model from http://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html ?
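In the spirit of that tutorial's AttnDecoderRNN, here is a minimal single-step sketch; the class name, sizes, and shapes below are illustrative, not the tutorial's exact code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnDecoderStep(nn.Module):
    """One decoder step with tutorial-style attention: score against a
    fixed max_length of encoder positions, then mix the encoder outputs."""

    def __init__(self, hidden_size, max_length):
        super().__init__()
        self.attn = nn.Linear(hidden_size * 2, max_length)
        self.attn_combine = nn.Linear(hidden_size * 2, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)

    def forward(self, embedded, hidden, encoder_outputs):
        # embedded: [batch, hidden], hidden: [1, batch, hidden]
        # encoder_outputs: [batch, max_length, hidden]
        weights = F.softmax(
            self.attn(torch.cat([embedded, hidden[0]], dim=1)), dim=1)
        context = torch.bmm(weights.unsqueeze(1), encoder_outputs)  # [batch, 1, hidden]
        rnn_input = F.relu(self.attn_combine(
            torch.cat([embedded.unsqueeze(1), context], dim=2)))
        output, hidden = self.gru(rnn_input, hidden)
        return output, hidden, weights

# Quick shape check with made-up sizes.
step = AttnDecoderStep(hidden_size=16, max_length=10)
out, h, w = step(torch.randn(2, 16), torch.zeros(1, 2, 16), torch.randn(2, 10, 16))
```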