Closed CoderBinGe closed 2 years ago
Hi, the pre-trained model we use is bert-base-uncased. The prediction code for Multi-Scale-BERT can be found in the file 'model_architechure_bert_multi_scale_multi_loss.py'. For certain reasons, it is not convenient for us to push the complete training code right now. However, we have published most of the hyperparameters in the paper. If you want to know more details about the implementation, feel free to reply!
Ok, thank you for your reply. I will try to reproduce the training code and will ask you if I run into any problems.
Hello, have you successfully reproduced the code? Would it be convenient for you to share it with me? Thank you so much! @CoderBinGe
Hi, I am very interested in your research. Could you post the code for the pretrained model, if it is convenient? I would be very grateful, thank you very much!