-
Hi,
I'm using the IWSLT translation dataset in torchtext. However, I ran into the following encoding errors. The code snippet is:
MAX_LEN = 100
train, val, test = datasets.IWSLT.splits(
ex…
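A minimal sketch of what is usually behind such errors (this is an illustration, not the torchtext internals): the raw IWSLT files are UTF-8, so decoding them with a platform-default or ASCII codec fails on the first non-ASCII character, while forcing UTF-8 works. The sample string below is my own, not from the dataset.

```python
# German sample with a non-ASCII character, encoded as the dataset files are.
text = "Müller geht zügig."
raw = text.encode("utf-8")

try:
    raw.decode("ascii")  # what an ascii/default codec would attempt
except UnicodeDecodeError as e:
    print("ascii decode fails:", e.reason)

print(raw.decode("utf-8"))  # forcing UTF-8 recovers the text
```

The practical fix when reading such files yourself is to pass `encoding="utf-8"` to `open(...)` rather than relying on the platform default.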
-
Hi,
I tried the IWSLT'14 machine translation example on pytorch docker image: `pytorch/pytorch:0.4.1-cuda9-cudnn7-devel`.
And I got the following error at the end of training:
```
Exception …
```
-
@nelson-liu: I incorrectly brought this up in pull #52; opening a new issue here.
When trying to load splits for IWSLT (in French, German, etc.), the loading process would fail with an ascii encoding/decodi…
-
Excuse me, I followed the Transformer example with the IWSLT'15 EN-VI data. After I ran `python transformer_main.py --run_mode=train_and_evaluate --config_model=config_model --config_data=config_iw…
-
Hi, thanks for the great work.
I've tried training an NMT model on IWSLT 14 with the interpolation algorithm (https://github.com/asyml/texar/tree/master/examples/seq2seq_exposure_bias), but while trainin…
-
I am training my NMT system. I haven't applied subword units yet (I just want to compare the results). I noticed I'm only getting symbols during training evaluation, which leads to a BLEU of 0.00.
Here is …
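When the model emits only placeholder symbols, there is no n-gram overlap with the references, so BLEU is necessarily 0.00. A quick way to confirm this (a rough diagnostic I'm sketching here, not the evaluator's actual BLEU implementation, which also uses higher-order n-grams and a brevity penalty) is clipped unigram precision:

```python
from collections import Counter

def unigram_precision(hypothesis: str, reference: str) -> float:
    """Fraction of hypothesis tokens that appear in the reference
    (counts clipped to the reference). If even this is 0, full BLEU
    with higher-order n-grams is guaranteed to be 0.00 as well."""
    hyp, ref = hypothesis.split(), reference.split()
    if not hyp:
        return 0.0
    overlap = Counter(hyp) & Counter(ref)  # clipped token counts
    return sum(overlap.values()) / len(hyp)

# Symbol-only output shares no tokens with the reference:
print(unigram_precision("<unk> <unk> <unk>", "the cat sat on the mat"))  # 0.0
# A hypothesis fully contained in the reference scores 1.0:
print(unigram_precision("the cat sat", "the cat sat on the mat"))        # 1.0
```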
-
I ran the following command from the examples stories tutorial using the pretrained checkpoints and couldn't get it to work. What is the correct command to generate from the pretrained story model? I …
-
I intend to add my model and train it on the WMT task. But I found that ParlAI can only build one dictionary and only supports English tokenization.
Could you tell me how I can do this?
-
I have a question as a beginner :)
I've tried preprocessing, training, and translation with the default dataset.
It seems like the result file (pred.txt) repeats "Es ist nicht ." or "Das ist sich…
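When almost every line of the output file is the same short sentence, it usually points at an undertrained model or degenerate decoding rather than a data problem. A quick sketch (my own diagnostic, assuming a line-per-sentence `pred.txt` as described above) to quantify the repetition:

```python
from collections import Counter

def duplication_report(lines, top=3):
    """Count how often each predicted line repeats. A few outputs
    dominating the file suggests undertraining or degenerate decoding."""
    counts = Counter(line.strip() for line in lines if line.strip())
    return counts.most_common(top)

# In-memory example; with a real run you would pass
# open("pred.txt", encoding="utf-8") instead.
preds = ["Es ist nicht .", "Es ist nicht .", "Das stimmt .", "Es ist nicht ."]
print(duplication_report(preds))
# [('Es ist nicht .', 3), ('Das stimmt .', 1)]
```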
-
My command line:
CUDA_VISIBLE_DEVICES=2,3 python train.py $TEXT/data-bin/ -a transformer_iwslt_de_en --optimizer adam --lr 0.0005 -s jp -t zh --label-smoothing 0.1 --dropout 0.3 --max-tokens 4000 -…