salbatarni opened 1 year ago
Hello, I have been trying to replicate the training for your model, but I have not quite succeeded, so I want to confirm a few things:
- Is the BERT tokenizer used bert-base-uncased?
- During training, is the pre-trained model passed to DocumentBertCombineWordDocumentLinear and DocumentBertSentenceChunkAttentionLSTM also bert-base-uncased, with the first 11 layers frozen?
1) Yes, the BERT tokenizer used is bert-base-uncased. 2) Yes, all the pre-trained models used are bert-base-uncased, fine-tuned with the first 11 layers frozen (see the sketch below).
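For anyone replicating this, here is a minimal sketch of what the freezing step could look like with the Hugging Face transformers API. The layer-selection logic is an assumption based on the description above ("first 11 layers frozen"), not the repository's actual training code.

```python
# Minimal sketch (assumption: Hugging Face transformers with a PyTorch backend).
# Loads bert-base-uncased and freezes the embeddings plus the first 11 of its
# 12 encoder layers, leaving only the last encoder layer trainable.
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

# Freeze the embedding layer.
for param in model.embeddings.parameters():
    param.requires_grad = False

# Freeze encoder layers 0-10; layer 11 (the last one) stays trainable.
for layer in model.encoder.layer[:11]:
    for param in layer.parameters():
        param.requires_grad = False

# Sanity check: only the last encoder layer (and the pooler) should remain trainable.
trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)
```

Whether the embeddings are also frozen is an open detail; the sketch freezes them since freezing "the first 11 layers" usually implies everything below the last encoder layer.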
Did you end up writing the code to train the model? Could you share it?