lingochamp / Multi-Scale-BERT-AES

Demo for the paper "On the Use of BERT for Automated Essay Scoring: Joint Learning of Multi-Scale Essay Representation"

Training a model #5

Open salbatarni opened 1 year ago

salbatarni commented 1 year ago

Hello, I have been trying to replicate the training for your model; however, I have not quite succeeded, so I want to make sure of some things:

  • Is the BERT tokenizer used bert-base-uncased?
  • In training, is the pre-trained model passed to DocumentBertCombineWordDocumentLinear and DocumentBertSentenceChunkAttentionLSTM also bert-base-uncased, with the first 11 layers frozen?

iamhere1 commented 1 year ago

Hello, I have been trying to replicate the training for your model; however, I have not quite succeeded, so I want to make sure of some things:

  • Is the BERT tokenizer used bert-base-uncased?
  • In training, is the pre-trained model passed to DocumentBertCombineWordDocumentLinear and DocumentBertSentenceChunkAttentionLSTM also bert-base-uncased, with the first 11 layers frozen?

1) The BERT tokenizer used is bert-base-uncased.
2) Yes, all the pre-trained models used are bert-base-uncased, fine-tuned with the first 11 layers frozen.
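For anyone else trying to reproduce this, here is a minimal sketch of the setup described above, assuming the standard Hugging Face transformers API. It loads bert-base-uncased and freezes the first 11 encoder layers; whether the embedding layer is also frozen is an assumption on my part, not something confirmed by the authors.

```python
# Minimal sketch (not the authors' code): load bert-base-uncased and
# freeze the first 11 of its 12 encoder layers.
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

# Freeze the embeddings (assumption: frozen along with the lower layers)
# and encoder layers 0-10, leaving only the final layer trainable.
for param in bert.embeddings.parameters():
    param.requires_grad = False
for layer in bert.encoder.layer[:11]:
    for param in layer.parameters():
        param.requires_grad = False

# Pass only the still-trainable parameters to the optimizer.
trainable_params = [p for p in bert.parameters() if p.requires_grad]
```

The frozen model would then be wrapped by DocumentBertCombineWordDocumentLinear or DocumentBertSentenceChunkAttentionLSTM as in the repo's model definitions.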

shield124 commented 1 year ago

Hello, I have been trying to replicate the training for your model; however, I have not quite succeeded, so I want to make sure of some things:

  • Is the BERT tokenizer used bert-base-uncased?
  • In training, is the pre-trained model passed to DocumentBertCombineWordDocumentLinear and DocumentBertSentenceChunkAttentionLSTM also bert-base-uncased, with the first 11 layers frozen?

Did you end up implementing the training code for the model? Could you share it?
