microsoft / MASS

MASS: Masked Sequence to Sequence Pre-training for Language Generation
https://arxiv.org/pdf/1905.02450.pdf

sequence to sequence part of the code #86

Closed: ghost closed this issue 4 years ago

ghost commented 4 years ago

Dear authors, I was wondering if you could share the code for a seq2seq model with a BERT encoder, i.e., the code without the masking part. I would really appreciate it. Thanks!

StillKeepTry commented 4 years ago

I have resolved your question via e-mail.
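
Since the resolution went over e-mail, here is a minimal sketch of one way to set this up: a pre-trained BERT encoder paired with a randomly initialized Transformer decoder, trained as a plain seq2seq model with no masked pre-training. This assumes PyTorch and Hugging Face's `transformers` package; the class name, layer counts, and the 512-position decoder limit are illustrative choices, not the authors' e-mailed code.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertSeq2Seq(nn.Module):
    """Sketch: BERT encoder + Transformer decoder, plain seq2seq (no MASS masking)."""

    def __init__(self, bert_name="bert-base-uncased", num_decoder_layers=6):
        super().__init__()
        self.encoder = BertModel.from_pretrained(bert_name)  # pre-trained BERT encoder
        hidden = self.encoder.config.hidden_size
        vocab = self.encoder.config.vocab_size
        # Share BERT's token embedding table with the decoder input.
        self.embed = self.encoder.get_input_embeddings()
        # Learned positional embeddings for the decoder (512 positions, illustrative).
        self.pos = nn.Embedding(512, hidden)
        layer = nn.TransformerDecoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_decoder_layers)
        self.out_proj = nn.Linear(hidden, vocab)  # project to vocabulary logits

    def forward(self, src_ids, src_mask, tgt_ids):
        # Encode the source; memory has shape (batch, src_len, hidden).
        memory = self.encoder(input_ids=src_ids, attention_mask=src_mask).last_hidden_state
        positions = torch.arange(tgt_ids.size(1), device=tgt_ids.device)
        tgt = self.embed(tgt_ids) + self.pos(positions)
        # Causal mask so each target position attends only to earlier positions.
        seq_len = tgt_ids.size(1)
        causal = torch.triu(
            torch.full((seq_len, seq_len), float("-inf"), device=tgt_ids.device),
            diagonal=1,
        )
        dec = self.decoder(
            tgt, memory, tgt_mask=causal,
            memory_key_padding_mask=~src_mask.bool(),  # True = ignore padding
        )
        return self.out_proj(dec)

# Usage: teacher-forced training step on a single pair.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertSeq2Seq()
src = tokenizer(["the source sentence"], return_tensors="pt")
tgt = tokenizer(["the target sentence"], return_tensors="pt")["input_ids"]
logits = model(src["input_ids"], src["attention_mask"], tgt[:, :-1])  # shift right
loss = nn.CrossEntropyLoss(ignore_index=tokenizer.pad_token_id)(
    logits.reshape(-1, logits.size(-1)), tgt[:, 1:].reshape(-1))
```

For a ready-made version of the same pattern, `transformers` also provides `EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")`, which wires a pre-trained BERT encoder to a decoder with cross-attention for you.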