nlpyang / PreSumm

Code for the EMNLP 2019 paper "Text Summarization with Pretrained Encoders"

Training the BERT large extractive model #244

Open Shashi456 opened 2 years ago

Shashi456 commented 2 years ago

Hello,

Are the batch size and accumulation count for BERT-large exactly the same as for the base model? I have been trying to reproduce the results, but my BERT-large model consistently performs worse than the base model (by about 3-4 ROUGE points), and I have no idea why.
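
For context, the settings I am asking about are the ones from the extractive training command in the repo README (reproduced roughly from memory below, so the exact values may differ from the current README). Whether `-batch_size` and `-accum_count` should stay the same when switching the encoder to BERT-large is exactly my question; how the large encoder is selected (e.g. a `-large` style flag) is an assumption on my part and may not match the actual argument name in the code.

```bash
# Baseline extractive training command for BERT-base, adapted from the README;
# placeholder paths (BERT_DATA_PATH, MODEL_PATH) must be filled in.
python train.py -task ext -mode train \
  -bert_data_path BERT_DATA_PATH \
  -model_path MODEL_PATH \
  -ext_dropout 0.1 \
  -lr 2e-3 \
  -visible_gpus 0,1,2,3 \
  -report_every 50 \
  -save_checkpoint_steps 1000 \
  -batch_size 3000 \
  -accum_count 2 \
  -train_steps 50000 \
  -warmup_steps 10000 \
  -max_pos 512 \
  -use_interval true \
  -log_file ../logs/ext_bert_cnndm
```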