Closed — hanyaqian closed this issue 6 years ago
Congratulations on your new position!
Regarding your question, I have no experience in multi-source MT. I want to test it in NMT-Keras (it should be easy), but unfortunately I don't have much time right now. I think the hardware requirements depend on the model: e.g., if you have 2 (or more) BLSTM encoders (one for each source language), the number of parameters will increase significantly. As for the other questions (training time, convergence of the model, etc.), I have no experience. I guess it will be harder than single-source NMT, but I don't know to what extent.
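The parameter-growth point can be illustrated with a back-of-the-envelope count. This is just an illustrative sketch, not NMT-Keras code; the function names and dimensions are made up for the example, and it only counts the recurrent encoder weights (per-language embeddings and attention would add more):

```python
def lstm_params(input_dim, hidden_dim):
    """Parameters in one LSTM layer: 4 gates, each with
    input weights, recurrent weights, and a bias vector."""
    return 4 * (input_dim * hidden_dim + hidden_dim * hidden_dim + hidden_dim)

def blstm_encoder_params(emb_dim, hidden_dim):
    """A bidirectional LSTM encoder doubles the single-direction count."""
    return 2 * lstm_params(emb_dim, hidden_dim)

def multi_source_encoder_params(n_sources, emb_dim, hidden_dim):
    """With one BLSTM encoder per source language, encoder
    parameters grow linearly in the number of sources."""
    return n_sources * blstm_encoder_params(emb_dim, hidden_dim)

# Example: 256-dim embeddings, 512 hidden units.
single = multi_source_encoder_params(1, 256, 512)  # one source language
double = multi_source_encoder_params(2, 256, 512)  # two source languages
print(single, double)  # the two-source encoder holds twice the parameters
```

So with two source languages the encoder side alone doubles in size, which is where the extra memory and training time would come from.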
I will report my experience if I finally build something with multi-source NMT. Meanwhile, comments or PRs regarding those issues are welcome :)
Thanks :)
@hanyaqian Have you implemented Chinese-English translation?
Our last contact was ten months ago, when I was looking for an internship. Now I am a full-time employee at a company I'm happy with; thank you for helping me back then. Have you ever run experiments on multi-source neural translation? Compared with standard encoder-decoder NMT, how does multi-source neural translation perform, for example in training time, hardware requirements, and so on?