lingochamp / Multi-Scale-BERT-AES

Demo for the paper "On the Use of BERT for Automated Essay Scoring: Joint Learning of Multi-Scale Essay Representation"

Code Request #1

Closed · CoderBinGe closed this 2 years ago

CoderBinGe commented 2 years ago

Hi, I am very interested in your research. Could you post the code of the pretrained model? I don't know whether that is convenient, but I would be very grateful. Thank you very much!

iamhere1 commented 2 years ago

Hi, the pretrained model we use is bert-base-uncased. The prediction code for Multi-Scale-BERT is in the file 'model_architechure_bert_multi_scale_multi_loss.py'. For various reasons, it is not convenient for us to push the complete training code right now. However, we have published most of the hyperparameters in the paper. If you want more details about the implementation, feel free to reply!

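For anyone trying to follow along, here is a minimal, hypothetical sketch of a multi-scale prediction path built on bert-base-uncased. It is not the authors' implementation: the `MultiScaleScorer` class, the 90-word segment size, and the mean pooling over segments are all assumptions made for illustration; the actual architecture is the one in 'model_architechure_bert_multi_scale_multi_loss.py' and the paper.

```python
# Hypothetical sketch of multi-scale essay scoring with bert-base-uncased.
# NOT the authors' implementation; class and method names are invented here.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class MultiScaleScorer(nn.Module):
    """Toy scorer: document-scale [CLS] vector + mean of segment-scale [CLS] vectors."""

    def __init__(self, bert_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        # One regression head over the concatenated document- and segment-scale features.
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, doc_inputs, seg_inputs):
        # Document scale: [CLS] of the (truncated) whole essay.
        doc_cls = self.bert(**doc_inputs).last_hidden_state[:, 0]   # (1, hidden)
        # Segment scale: encode each chunk separately, then average the [CLS] vectors.
        seg_cls = self.bert(**seg_inputs).last_hidden_state[:, 0]   # (n_seg, hidden)
        seg_feat = seg_cls.mean(dim=0, keepdim=True)                # (1, hidden)
        return self.head(torch.cat([doc_cls, seg_feat], dim=-1)).squeeze(-1)


tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
essay = "An example essay about multi-scale representations. " * 50

# Whole-essay view, truncated to BERT's 512-token limit.
doc_inputs = tokenizer(essay, return_tensors="pt", truncation=True, max_length=512)

# Segment view: fixed-size word chunks (a stand-in for the paper's segment scales).
words = essay.split()
chunks = [" ".join(words[i:i + 90]) for i in range(0, len(words), 90)]
seg_inputs = tokenizer(chunks, return_tensors="pt", truncation=True,
                       max_length=128, padding=True)

model = MultiScaleScorer().eval()
with torch.no_grad():
    print("predicted score:", model(doc_inputs, seg_inputs).item())
```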

CoderBinGe commented 2 years ago

OK, thank you for your reply. I will try to reproduce the training code and will ask you if I run into any problems.
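
As a starting point for that reproduction, below is a hedged sketch of what a combined ("multi-loss") objective could look like, in the spirit of the "multi_loss" in the file name. The `multi_loss` function, its pairwise ranking term, and the weights `alpha`/`beta` are invented for illustration; the actual losses and hyperparameters are the ones described in the paper.

```python
# Hypothetical multi-loss sketch: pointwise regression + pairwise ranking.
# The real loss combination and weights should be taken from the paper.
import torch
import torch.nn.functional as F


def multi_loss(pred: torch.Tensor, gold: torch.Tensor,
               alpha: float = 1.0, beta: float = 0.5) -> torch.Tensor:
    """Combine an MSE regression loss with a pairwise ranking loss."""
    mse = F.mse_loss(pred, gold)
    # Ranking term: every ordered pair in the batch should keep the same
    # relative order as the gold scores; violations are penalized linearly.
    diff_pred = pred.unsqueeze(0) - pred.unsqueeze(1)
    diff_gold = gold.unsqueeze(0) - gold.unsqueeze(1)
    rank = F.relu(-diff_pred * torch.sign(diff_gold)).mean()
    return alpha * mse + beta * rank


pred = torch.tensor([0.7, 0.2, 0.9], requires_grad=True)
gold = torch.tensor([0.8, 0.1, 0.6])
loss = multi_loss(pred, gold)
loss.backward()
print("combined loss:", loss.item())
```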

juneljl commented 9 months ago

Hello, have you successfully reproduced the code? Would it be convenient for you to share it with me? Thank you so much! @CoderBinGe