Thanks for open-sourcing this. A few questions: Should the pretrained model from Phase 1 (Full-Sentence Pre-training) be used as the starting point for Phase 2 (Simultaneous Translation Fine-Tuning)? For example, should we pass the Phase 1 checkpoint via fairseq's `--restore-file` parameter when training the Phase 2 model?
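For concreteness, here is a sketch of what I have in mind (paths and task/arch names are placeholders, not from this repo; `--restore-file`, `--reset-optimizer`, `--reset-dataloader`, and `--reset-meters` are standard fairseq flags commonly used when warm-starting fine-tuning from a pretrained checkpoint):

```shell
# Hypothetical Phase 2 fine-tuning command: warm-start from the
# Phase 1 full-sentence checkpoint, resetting training state so
# fine-tuning begins fresh from the pretrained weights.
fairseq-train data-bin/ \
    --restore-file checkpoints/phase1/checkpoint_best.pt \
    --reset-optimizer --reset-dataloader --reset-meters \
    --save-dir checkpoints/phase2
```

Is this the intended workflow, or does the repo expect a different mechanism for loading the Phase 1 weights?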