Closed: Daemon-ser closed this issue 3 years ago
They are all full-length (long) sequences; long and short sequences are not mixed. https://github.com/ymcui/Chinese-BERT-wwm#模型对比
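For context, here is a minimal sketch of how a BERT-style data pipeline decides each example's target length; the `short_seq_prob` knob (0.1 by default in Google's original create_pretraining_data.py) corresponds to the "10% short sequences" mentioned in the question, and setting it to 0 gives the all-long-sequence setup described above. The function and parameter names below are illustrative and not this repository's actual code.

```python
import random

def choose_target_length(max_seq_len=512, short_seq_prob=0.1):
    """Pick the target sequence length for one pretraining example.

    With probability `short_seq_prob`, use a shorter random length
    (BERT-style); otherwise use the full `max_seq_len`.
    Setting short_seq_prob=0 yields only full-length sequences,
    as in the answer above.
    """
    if random.random() < short_seq_prob:
        # Short example: random length between 2 and max_seq_len tokens.
        return random.randint(2, max_seq_len)
    return max_seq_len

# Roughly 10% of sampled lengths end up shorter than 512.
lengths = [choose_target_length() for _ in range(10_000)]
print(sum(1 for n in lengths if n < 512) / len(lengths))  # ≈ 0.1
```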
OK, thank you.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Closing the issue since no updates have been observed. Feel free to re-open if you need any further assistance.
Is RoBERTa's pretraining data made up entirely of 512-token long sequences, or does it include 10% short sequences as BERT does?