TysonYu / AdaptSum

The code repository for NAACL 2021 paper "AdaptSum: Towards Low-Resource Domain Adaptation for Abstractive Summarization".
License: Creative Commons Attribution 4.0 International

Could you release the trained models from the three different second-stage pre-training methods? #4

Open · shell-nlp opened this issue 2 years ago

shell-nlp commented 2 years ago

Could you release the checkpoints from the three different second-stage pre-training methods that you have already trained?

The second stage of pre-training is too time-consuming, so I would like to skip it and go directly to the fine-tuning stage.
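
For context, a minimal sketch of how a released second-stage checkpoint could be plugged into the fine-tuning stage. It assumes the checkpoint is a BART state_dict saved with torch.save; the model name and the file path are hypothetical and not taken from the AdaptSum repo:

```python
# Hedged sketch: load a (hypothetical) released second-stage pre-trained
# checkpoint into a BART model before fine-tuning on the target domain.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

# Assumed backbone; the actual AdaptSum config may use a different BART size.
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")

# Hypothetical path to a released second-stage checkpoint (SDPT/DAPT/TAPT).
state_dict = torch.load("checkpoints/second_stage_pretrained.pt", map_location="cpu")
model.load_state_dict(state_dict, strict=False)

# From here, run the normal fine-tuning stage on the low-resource
# target-domain summarization data, skipping second-stage pre-training.
```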