SkyAndCloud closed this issue 5 years ago.
Sorry for missing this issue. Surely there are other ways we can prove that our implementation is correct, for instance by winning the WMT2018 shared task on news translation for English-German:
Well, I have read your paper, and here is my question: could you provide an example of training a transformer-style language model (`lm-transformer`) on a monolingual corpus? Just a config is OK. Thanks for your great work!
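For reference, a training config along these lines might look like the sketch below. This is only an illustrative guess, not an official recipe: the option names (`type: lm-transformer`, `train-sets`, `transformer-dim-ffn`, etc.) follow Marian's usual command-line options, and the file names (`corpus.mono.en`, `vocab.en.yml`) are placeholders.

```yaml
# Hypothetical Marian config for a transformer-style language model.
# Option names assumed from Marian's CLI; file paths are placeholders.
model: model/lm.npz
type: lm-transformer          # decoder-only transformer LM
train-sets:
  - corpus.mono.en            # a single monolingual training file
vocabs:
  - vocab.en.yml
dim-emb: 512
dec-depth: 6
transformer-heads: 8
transformer-dim-ffn: 2048
mini-batch-fit: true          # fit mini-batch size to available workspace
workspace: 8000
```

It could then presumably be launched with something like `marian --config config.yml`, analogous to the translation examples in the documentation.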
Hi, thank you for the great work and the awesome documentation. I have a question after reading your transformer example, which uses the WMT2017 English-German corpus: have you also tested Marian's performance with this example on the WMT2014 English-German corpus, and does it achieve a BLEU score equivalent to the one reported in the original Transformer paper? I think this point is very important, because only then can you show that your transformer implementation is correct, which also matters for research use. Thanks!