TysonYu / AdaptSum
The code repository for the NAACL 2021 paper "AdaptSum: Towards Low-Resource Domain Adaptation for Abstractive Summarization".
License: Creative Commons Attribution 4.0 International
35 stars, 2 forks
Issues
#4 — Can you provide the already-trained models for the three different second-stage pre-training methods?
Opened by shell-nlp, 2 years ago. 0 comments.

#3 — Question about the learning rate
Opened by HiXiaochen, 3 years ago. 0 comments.

#2 — Gradient explosion in TAPT/DAPT pretraining
Opened by Danshi-Li, 3 years ago. 2 comments.

#1 — Number of fine-tuning steps for TAPT/DAPT/SDPT
Opened by yashgupta-7, 3 years ago. 5 comments.