Waste-Wood opened this issue 4 years ago
That's OK; maybe you need to train a model of your own.
Apart from the GloVe embeddings, which I should replace with my own Chinese embeddings, are there any other files I need to replace with my own? I have also seen that Chinese training data is included in these files!
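For what it's worth, here is a minimal sketch of how one might swap in a Chinese embedding file: filter the pretrained vectors down to the tokens that actually appear in your jsonlines data, then point the config at the filtered file instead of the GloVe one. The file names (`train.chinese.jsonlines`, `cc.zh.300.vec`, ...) and the `"sentences"` jsonlines layout below are assumptions carried over from the English setup, not a script shipped with the repo.

```python
# Sketch: filter a pretrained Chinese embedding file down to the words that
# appear in the coref jsonlines data, so it can stand in for the filtered
# GloVe file used in the English configuration. All file names are placeholders.
import json

def collect_vocab(jsonlines_paths):
    """Gather every token that appears in the jsonlines documents."""
    vocab = set()
    for path in jsonlines_paths:
        with open(path, encoding="utf-8") as f:
            for line in f:
                doc = json.loads(line)
                for sentence in doc["sentences"]:
                    vocab.update(sentence)
    return vocab

def filter_embeddings(embedding_path, output_path, vocab):
    """Keep only the embedding rows whose word is in the dataset vocabulary."""
    kept = 0
    with open(embedding_path, encoding="utf-8") as src, \
         open(output_path, "w", encoding="utf-8") as dst:
        for line in src:
            word = line.split(" ", 1)[0]
            if word in vocab:
                dst.write(line)
                kept += 1
    print(f"kept {kept} embedding rows for {len(vocab)} vocabulary words")

if __name__ == "__main__":
    vocab = collect_vocab(["train.chinese.jsonlines",
                           "dev.chinese.jsonlines",
                           "test.chinese.jsonlines"])
    filter_embeddings("cc.zh.300.vec", "cc.zh.300.vec.filtered", vocab)
```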
I hope someone has trained this model on Chinese and is willing to share their GitHub link here.
I am trying to train the model on the Chinese dataset from OntoNotes 5.0, but the model does not seem to converge. If anyone has done the same work, advice would be welcome.
Thanks for your reply! I followed https://github.com/mandarjoshi90/coref for the conversion of the Chinese data; maybe you can try it.
Thanks for your advice. Yes, I am trying the bert-coref model now; the performance is not good yet, but I am still fine-tuning it. If you are training it as well, maybe we can share our results.
I am training (not yet finished) a Chinese coref-bert model from https://github.com/mandarjoshi90/coref, which is an extension of the e2e-coref model. If you want to discuss further, you can contact me by e-mail: mjjblcu@126.com.
I have seen files like "char_vocab_chinese.txt", so is there a pretrained Chinese model?
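I don't know whether a pretrained Chinese model was ever released, but "char_vocab_chinese.txt" is just a character vocabulary (one character per line). If you need to regenerate one for your own data, a minimal sketch could look like the following; it assumes the same `"sentences"` jsonlines layout as the English data, and the input file names are placeholders.

```python
# Sketch: build a character vocabulary file (one character per line) from
# Chinese jsonlines training data, analogous to the English char vocab file.
# Input/output file names are assumptions, not files from the repo.
import json

def build_char_vocab(jsonlines_paths, output_path):
    """Collect every character seen in the tokens and write them sorted."""
    chars = set()
    for path in jsonlines_paths:
        with open(path, encoding="utf-8") as f:
            for line in f:
                doc = json.loads(line)
                for sentence in doc["sentences"]:
                    for token in sentence:
                        chars.update(token)
    with open(output_path, "w", encoding="utf-8") as out:
        for ch in sorted(chars):
            out.write(ch + "\n")
    print(f"wrote {len(chars)} characters to {output_path}")

if __name__ == "__main__":
    build_char_vocab(["train.chinese.jsonlines", "dev.chinese.jsonlines"],
                     "char_vocab_chinese.txt")
```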