nlpyang / PreSumm

code for EMNLP 2019 paper Text Summarization with Pretrained Encoders
MIT License

Did you fine-tune BERT in the abstractive summarizer and BERTSUM? #52

Open xdwang0726 opened 4 years ago

xdwang0726 commented 4 years ago

Hello, I am wondering whether you fine-tuned BERT as the encoder in your abstractive summarizer and in BERTSUM, or whether you just used the pre-trained model as-is?

Thank you!

astariul commented 4 years ago

BERT is indeed fine-tuned.

https://github.com/nlpyang/PreSumm/blob/29a6b1ace2290808f39c76ae2ef0e92d515fc049/src/train.py#L52

This option is set to True by default.
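
For reference, here is a minimal sketch of what such a flag typically controls, written against the current Hugging Face `transformers` API rather than the older `pytorch_transformers` package PreSumm depends on. The class and parameter names (`BertEncoder`, `finetune`) are illustrative, not copied from PreSumm's source:

```python
import torch
import torch.nn as nn
from transformers import BertModel  # modern equivalent of pytorch_transformers


class BertEncoder(nn.Module):
    """Wraps BERT so a single flag decides whether its weights are fine-tuned."""

    def __init__(self, finetune: bool = True):
        super().__init__()
        self.model = BertModel.from_pretrained("bert-base-uncased")
        self.finetune = finetune  # illustrative name, mirroring a train-time option

    def forward(self, input_ids, attention_mask):
        if self.finetune:
            # Gradients flow into BERT: its weights update with the summarizer.
            outputs = self.model(input_ids, attention_mask=attention_mask)
        else:
            # Frozen encoder: BERT only produces features, no weight updates.
            self.model.eval()
            with torch.no_grad():
                outputs = self.model(input_ids, attention_mask=attention_mask)
        return outputs.last_hidden_state  # token-level representations
```

With the flag off, BERT acts as a fixed feature extractor; with it on (the default, per the reply above), its weights are updated jointly with the rest of the summarizer during training.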