nlpyang / BertSum

Code for paper Fine-tune BERT for Extractive Summarization
Apache License 2.0

How to train TransformerExt baseline? #115

Open GongShuai8210 opened 3 years ago

GongShuai8210 commented 3 years ago

How do I train the TransformerExt baseline? I just changed the encoder to 'baseline', but the ROUGE scores I get are higher than the ones reported in the paper. I suspect I am still using BERT somehow, but I am not sure how to use the Transformer baseline correctly.
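For reference, here is a sketch of what a baseline run might look like, assuming the `-encoder baseline` option and the flag names used in the BertSum README's training commands; the paths, learning-rate schedule, step counts, and GPU settings below are illustrative placeholders, not the exact values used for the paper:

```shell
# Hypothetical command for training a non-pretrained Transformer
# extractive baseline; flag names follow the BertSum README,
# all values are illustrative.
python train.py -mode train -encoder baseline \
  -bert_data_path ../bert_data/cnndm \
  -model_path ../models/transformer_baseline \
  -lr 2e-3 -warmup_steps 10000 -decay_method noam \
  -train_steps 50000 -batch_size 3000 \
  -visible_gpus 0 -gpu_ranks 0 -world_size 1 \
  -log_file ../logs/transformer_baseline
```

One thing worth checking in `model_builder.py` is whether `-encoder baseline` actually initializes the encoder from scratch or still loads pretrained BERT weights; if pretrained weights are being loaded, that could explain ROUGE scores higher than the paper's TransformerExt numbers.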