Closed · astariul closed 4 years ago
Hi @Colanim ,
Only the encoder is pre-trained in a bidirectional manner, while the decoder is left-to-right, which is controlled by the attention mask matrix. So the fine-tuning process is the same as inference in terms of decoding.
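To make the mask idea concrete, here is a toy sketch of such a seq2seq attention mask (my own illustrative code, not the repo's): source positions attend bidirectionally over the whole source, while target positions attend to the full source plus only the target prefix up to themselves.

```python
# Hypothetical sketch of a seq2seq attention mask (names are my own).
# mask[i][j] == 1 means position i may attend to position j.
def seq2seq_attention_mask(src_len, tgt_len):
    n = src_len + tgt_len
    mask = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if j < src_len:
                mask[i][j] = 1  # every position sees the full source (bidirectional)
            elif i >= src_len and j <= i:
                mask[i][j] = 1  # target positions see themselves and earlier target tokens
    return mask

# 3 source tokens, 2 target tokens:
m = seq2seq_attention_mask(3, 2)
# source rows see only the source; target rows additionally see their left-to-right prefix
```

Because the mask alone enforces left-to-right visibility on the target side, the same Transformer weights serve as a bidirectional encoder and a unidirectional decoder.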
-li
Thanks for the quick response, @donglixp!
So if I understood well, for abstractive summarization there are 3 tasks:
Is that right?
Because the source side has been given. During fine-tuning, we only compute generation loss for the decoder, which is similar to previous seq2seq models. In the paper, we added an extractive loss in the encoder side, but we didn't use it in the repo's example. The released checkpoint can achieve better results even without the extractive loss.
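A minimal sketch of that loss computation (my own toy example, not the repo's code): cross-entropy is accumulated only at masked target positions, so source positions contribute nothing to the fine-tuning objective.

```python
# Toy sketch (illustrative names, not the repo's API): the generation loss
# averages negative log-likelihood over masked target positions only.
def generation_loss(log_probs, labels, is_masked_target):
    """log_probs: per-position dict token -> log prob; labels: gold tokens."""
    losses = [-lp[y] for lp, y, m in zip(log_probs, labels, is_masked_target) if m]
    return sum(losses) / len(losses)

# Toy sequence: 2 source positions (never in the loss) + 2 target positions,
# of which only the last was masked and must be predicted.
log_probs = [{"a": -0.1}, {"b": -0.1}, {"c": -0.5}, {"d": -2.0}]
labels = ["a", "b", "c", "d"]
is_masked_target = [False, False, False, True]
loss = generation_loss(log_probs, labels, is_masked_target)
```

In a real PyTorch implementation the same effect is usually obtained by setting non-target labels to an ignored index in the cross-entropy loss.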
Ok, so in the actual code there is only one loss, which is the generation loss for the decoder (i.e., a left-to-right LM over the summary).
Thank you very much for your answers!
Thanks for open-sourcing the code!
After reading your paper, I have a question about the fine-tuning procedure for abstractive summarization (and more generally, any Seq2Seq task).
My understanding is this: similarly to BERT and to UniLM pre-training, fine-tuning on abstractive summarization masks some tokens and predicts them, in order to learn a bidirectional representation of tokens.
But at inference time, since we don't have access to the whole summary (it is yet to be generated), we can only apply a left-to-right LM.
This seems like a pretty big discrepancy between training and inference.
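To make my mental model concrete, here is a toy sketch of the left-to-right decoding I have in mind: at each step, a [MASK] token is appended to the generated prefix and the model predicts it, mirroring the masked-prediction setup from fine-tuning (predict_masked is a hypothetical stand-in for a real model call; everything here is illustrative).

```python
# Hedged sketch of left-to-right decoding via masked prediction
# (all names here are my own, not the repo's API).
def greedy_decode(source, predict_masked, max_len):
    target = []
    for _ in range(max_len):
        # Append a [MASK] and predict it, as in fine-tuning, except the
        # left context is the already-generated prefix.
        token = predict_masked(source + target + ["[MASK]"])
        if token == "[SEP]":
            break
        target.append(token)
    return target

# Toy "model" that just echoes the source tokens, then emits [SEP].
SRC = ["hello", "world"]

def toy_predict(seq):
    prefix_len = len(seq) - len(SRC) - 1  # length of the generated prefix
    return SRC[prefix_len] if prefix_len < len(SRC) else "[SEP]"

out = greedy_decode(SRC, toy_predict, max_len=10)
```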
What I don't understand is that people have already tried to use BERT (trained as a bidirectional encoder) as a left-to-right LM, but the results were quite poor.
Yet in your case, the results are very strong!
So my questions are:
Did I miss something? Did I misunderstand, and is there in fact no discrepancy?
If I understood right, why do you fine-tune the Seq2Seq model using a bidirectional LM, and not a left-to-right LM?