Looks like your model file is corrupt. Did you extract the archive you downloaded? Edit: the LSTM model is not compatible with OpenNMT-py 2.0 yet. We'll have to update it. But the Transformer one should work. Edit2: the LSTM model was just updated, you may download it again.
Thanks for answering. It seems to work after reloading the Transformer model, but the output text looks like this: Is this common?
This is because the model was trained with SentencePiece tokenization. There is an example script showing how to prepare data with SentencePiece here: https://github.com/OpenNMT/OpenNMT-py/blob/master/examples/scripts/prepare_wmt_data.sh I encourage you to search the forum to learn more.
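To make the idea concrete, here is a minimal Python sketch of the pre- and post-processing step around translation, assuming the SentencePiece model shipped with the pretrained archive is named sentencepiece.model and that the input/output file names below are placeholders (the actual names in your download may differ):

```python
import sentencepiece as spm

# Load the SentencePiece model distributed with the pretrained checkpoint
# (file name is an assumption; check what the archive actually contains).
sp = spm.SentencePieceProcessor(model_file="sentencepiece.model")

# Tokenize the source sentences into subword pieces before running translate.py
with open("en-de.txt") as src, open("en-de.sp.txt", "w") as out:
    for line in src:
        pieces = sp.encode(line.strip(), out_type=str)
        out.write(" ".join(pieces) + "\n")

# ... run translate.py with --src en-de.sp.txt, writing predictions to pred.sp.txt ...

# Detokenize the model output back into plain text
with open("pred.sp.txt") as pred, open("pred.txt", "w") as out:
    for line in pred:
        out.write(sp.decode(line.strip().split()) + "\n")
```

Without this step the model receives raw words it never saw during training, which is why the output looks like subword fragments.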
My command is as follows, and the model I use was downloaded from here:
python translate.py --model available_models/en-de.pt --src ../en-de.txt --random_sampling_topk 10 --random_sampling_temp 0.5 --beam_size 1 --gpu 6
When I use English-German - Transformer:
While I use German-English - 2-layer BiLSTM: