-
When I use configuration `--out_dir=./nmt_attention_model --inference_input_file=./nmt_data/train.vi --inference_output_file=./nmt_model/output_infer --inference_ref_file=./nmt_data/train.en` in Pycha…
-
Hello, I've been trying to fix a fastNLP module error.
In the latest version of the fastNLP library, the following import no longer works:
```
from fastNLP.modules.attention import AttentionLayer, Mul…
```
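Since the module path has changed, a common workaround is to define a small attention layer directly in PyTorch. Below is a minimal sketch of scaled dot-product attention as a stand-in; the class name `DotAttention` and its interface are my own assumptions, not fastNLP's API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DotAttention(nn.Module):
    """Minimal scaled dot-product attention (hypothetical stand-in for the
    removed fastNLP attention import, not fastNLP's actual API)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.scale = hidden_size ** 0.5

    def forward(self, query, keys, values, mask=None):
        # query: (batch, 1, hidden); keys/values: (batch, seq, hidden)
        scores = torch.bmm(query, keys.transpose(1, 2)) / self.scale
        if mask is not None:
            # mask: (batch, seq), True for real tokens, False for padding
            scores = scores.masked_fill(~mask.unsqueeze(1), float("-inf"))
        weights = F.softmax(scores, dim=-1)           # (batch, 1, seq)
        context = torch.bmm(weights, values)          # (batch, 1, hidden)
        return context, weights

# Quick shape check
attn = DotAttention(hidden_size=16)
q = torch.randn(2, 1, 16)
kv = torch.randn(2, 7, 16)
ctx, w = attn(q, kv, kv)
print(ctx.shape, w.shape)  # torch.Size([2, 1, 16]) torch.Size([2, 1, 7])
```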
-
# Attention Mechanism: From the Seq2Seq Model to the Transformer | Reinventing the Wheel
[https://heekangpark.github.io/nlp/attention](https://heekangpark.github.io/nlp/attention)
-
regarding this tutorial: https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html
I just have a question (which probably sounds very stupid). I am just wondering: is it necessary …
-
Hi,
whenever I run the train.py file with various parameters or paths, I get the error below. I am unable to understand the purpose of "train.txt". Please help.
Command line args:
{'--checkpoint':…
-
When training with tf.contrib.legacy_seq2seq.embedding_attention_seq2seq(), the input sentence lengths are not provided. Does this mean every pad is also fed into the encoder as a token? Is there a way to mask out the extra padding on the input side?
Also, why is the input fed to the encoder in reverse order?
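The usual fix for the padding issue is to pass the true sequence lengths and mask the attention scores at padded positions before the softmax, so pads receive zero attention weight. A minimal PyTorch sketch of this masking step (the tensors here are toy data, not the TF API):

```python
import torch
import torch.nn.functional as F

# Toy batch: 2 sentences padded to length 5; lengths[i] is the true length.
lengths = torch.tensor([5, 3])
batch, max_len = 2, 5

# Fake attention scores: one decoder step attending over encoder positions.
scores = torch.randn(batch, max_len)

# Boolean mask: True at real token positions, False at padding.
mask = torch.arange(max_len).unsqueeze(0) < lengths.unsqueeze(1)

# Set scores at padded positions to -inf so softmax assigns them weight 0.
scores = scores.masked_fill(~mask, float("-inf"))
weights = F.softmax(scores, dim=-1)

# The padded positions of the second sentence get exactly zero weight.
print(weights[1, 3:])  # tensor([0., 0.])
```

As for the reversed input: Sutskever et al. (2014) reversed the source sentence so the first source words end up closest to the first target words, which shortens the effective dependency distance and empirically eased optimization for LSTM encoders; with attention it matters much less.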
-
I tried to run the prepared config files for nmt, but it seems I can only run nmt_small. When I try to run the others I get `NotFoundError : Key model/att_seq2seq/OptimizeLoss/model/att_seq2seq/dec…
-
Hi, I'm trying to run the training code, but I keep running into an issue on line 998 of `seq2seq.py`. As far as I can tell, it's because the encoder_inputs_tensor shape is (?, ?, 512) …
-
## 0. Paper
@inproceedings{luong-etal-2015-effective,
title = "Effective Approaches to Attention-based Neural Machine Translation",
author = "Luong, Thang and
Pham, Hieu and
…
a1da4 updated
4 years ago
-
http://preview.d2l.ai.s3-website-us-west-2.amazonaws.com/d2l-en/master/chapter_recurrent-modern/seq2seq.html
http://preview.d2l.ai.s3-website-us-west-2.amazonaws.com/d2l-en/master/chapter_attention-m…