-
I'm having some generation issues with NMT models trained with OpenNMT-py. These include models trained with OpenNMT-py versions from before flash attention existed, and one I'm currently training with the most recent version, which includ…
-
Hello, is the dataset used for seq2seq + attention in the paper multi-turn or single-turn?
Is the multi-turn data split into single turns?
-
When running the a1_seq2seq_attention_train.py file I encountered the error below. I would appreciate your help.
ValueError: Variable W_initial_state1 already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:
File "/…
-
Thanks for your awesome contribution. I was wondering whether I can use this to achieve visual attention. I was thinking of using the seq2seq with attention and feeding the ConvNet's flattened layer as …
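The repo doesn't cover this, but as a sanity check on the idea: a minimal sketch of attending over a ConvNet feature map, treating each spatial location as one encoder state. All shapes and names below are illustrative assumptions, not the repo's code:

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: the feature map from the ConvNet's last conv layer.
batch, channels, h, w = 1, 512, 7, 7
feat = torch.randn(batch, channels, h, w)   # ConvNet output
dec_hidden = torch.randn(batch, channels)   # decoder hidden state

# Flatten the spatial grid into a "sequence" of h*w encoder states,
# each of dimension `channels`, then score them with dot-product attention.
enc_states = feat.view(batch, channels, h * w).transpose(1, 2)    # (batch, h*w, channels)
scores = torch.bmm(enc_states, dec_hidden.unsqueeze(2))           # (batch, h*w, 1)
attn = F.softmax(scores, dim=1)                                   # weights over locations
context = torch.bmm(enc_states.transpose(1, 2), attn).squeeze(2)  # (batch, channels)
```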
-
Seq2Seq(Attention)\Seq2Seq(Attention)-Tensor.py
The shape of the input should be `[max_time, batch_size, ...]`. The `input = tf.transpose(dec_inputs, [1, 0, 2])` has already been transposed. In `tf.e…`
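For reference, a minimal sketch of the batch-major to time-major convention in question, assuming TensorFlow 1.x (the shapes below are illustrative, not the script's actual dimensions):

```python
import tensorflow as tf  # assuming TensorFlow 1.x

# dec_inputs arrives batch-major: [batch_size, max_time, input_dim]
dec_inputs = tf.placeholder(tf.float32, [None, 5, 29])  # illustrative shape

# time-major form expected by dynamic_rnn when time_major=True:
inputs = tf.transpose(dec_inputs, [1, 0, 2])  # -> [max_time, batch_size, input_dim]

cell = tf.nn.rnn_cell.BasicRNNCell(128)
outputs, state = tf.nn.dynamic_rnn(cell, inputs, time_major=True, dtype=tf.float32)
# outputs is also time-major: [max_time, batch_size, 128]
```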
-
Hi,
I'm running the project from source (master) using Python 3.5, and when I change the model from:
`tf.nn.seq2seq.embedding_rnn_seq2seq`
to
`tf.nn.seq2seq.embedding_attention_seq2seq`
o…
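The report is cut off here, but for reference: in TensorFlow 1.x the old `tf.nn.seq2seq` ops live in `tf.contrib.legacy_seq2seq`, and the attention variant shares its leading arguments with `embedding_rnn_seq2seq`. A minimal, hypothetical call sketch (all sizes are illustrative):

```python
import tensorflow as tf
from tensorflow.contrib import legacy_seq2seq  # TF 1.x home of tf.nn.seq2seq

# Illustrative sizes only.
num_steps, vocab_size = 10, 10000
encoder_inputs = [tf.placeholder(tf.int32, [None]) for _ in range(num_steps)]
decoder_inputs = [tf.placeholder(tf.int32, [None]) for _ in range(num_steps)]
cell = tf.nn.rnn_cell.GRUCell(256)

# Same leading arguments as embedding_rnn_seq2seq, so the attention
# variant can usually be swapped in directly.
outputs, state = legacy_seq2seq.embedding_attention_seq2seq(
    encoder_inputs, decoder_inputs, cell,
    num_encoder_symbols=vocab_size,
    num_decoder_symbols=vocab_size,
    embedding_size=128,
    feed_previous=False)
```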
-
Thanks for sharing! I just noticed that `Attention.get_att_weight` calculates attention in a for-loop; that looks rather slow, doesn't it?
`4-2.Seq2Seq(Attention)/Seq2Seq(Attention).ipynb`
```pyth…
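The quoted code is cut off above, but the loop in question scores each encoder step one at a time. A minimal sketch of the same dot-product scoring done in one batched matmul; shapes and names here are illustrative, not the repo's actual code, and if `get_att_score` applies a linear layer first, apply it to all encoder outputs before the matmul:

```python
import torch
import torch.nn.functional as F

# Illustrative shapes: n_step encoder outputs of size n_hidden, one decoder step.
n_step, n_hidden = 5, 128
enc_outputs = torch.randn(n_step, 1, n_hidden)   # [n_step, batch=1, n_hidden]
dec_output = torch.randn(1, 1, n_hidden)         # [1, batch=1, n_hidden]

# Score every encoder step against the decoder state in one matmul
# instead of a Python loop over n_step.
scores = torch.matmul(enc_outputs.squeeze(1),        # [n_step, n_hidden]
                      dec_output.view(n_hidden, 1))  # [n_hidden, 1]
attn_weights = F.softmax(scores.squeeze(1), dim=0).view(1, 1, -1)  # [1, 1, n_step]
```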
-
Can you please provide a C# Seq2Seq with Attention example?
Being new to CNTK and not having much to go on, it is very hard to see how one would approach this.
Thank you so much!
-