yixinL7 / SimCLS

Code for our paper "SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization", ACL 2021

Any try on other generation tasks? #14

Closed Hannibal046 closed 2 years ago

Hannibal046 commented 2 years ago

Hi, thanks for your great work. I am wondering whether you have ever tried applying this general idea to other NLG tasks like dialogue or NMT? Hoping to get some insights from you!

yixinL7 commented 2 years ago

Hi! I have thought about applying this idea to other NLG tasks before, but I didn't get the chance to try it myself (it could be interesting :)). I think the general rule of thumb is to keep in mind the differences among NLG tasks and to verify that the candidates are diverse enough, both lexically and semantically.
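For example, one cheap sanity check on lexical diversity is to measure pairwise n-gram overlap among the candidates. A minimal sketch (the whitespace tokenization and the choice of n here are deliberately naive, just for illustration):

```python
from itertools import combinations

def ngrams(tokens, n):
    """All n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def pairwise_overlap(candidates, n=2):
    """Mean Jaccard overlap of n-gram sets across all candidate pairs.
    Values near 1.0 mean the candidates are nearly identical, so a
    re-ranker has little room to help."""
    sets = [set(ngrams(c.split(), n)) for c in candidates]
    scores = [len(a & b) / max(len(a | b), 1) for a, b in combinations(sets, 2)]
    return sum(scores) / max(len(scores), 1)

candidates = [
    "the cat sat on the mat",
    "a cat was sitting on the mat",
    "the dog barked at the mailman",
]
print(f"mean bigram overlap: {pairwise_overlap(candidates):.3f}")
```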

Hannibal046 commented 2 years ago

Hi, thanks for the reply! It helps me a lot. I have another question about the re-ranking part: did you conduct any preliminary experiments with other re-ranking modules, e.g., casting it as a regression problem that maps (article, candidate) --> ROUGE, or something like this?
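To make the question concrete, I mean roughly the sketch below (all names are made up; `encoder` could be any pretrained text encoder, e.g. a Hugging Face RoBERTa model, and the pair would be encoded as a single concatenated sequence):

```python
import torch
import torch.nn as nn

class RegressionReranker(nn.Module):
    """Hypothetical regression variant: encode (article, candidate) jointly
    and predict the candidate's ROUGE score directly, instead of learning
    a ranking over candidates."""
    def __init__(self, encoder, hidden_size):
        super().__init__()
        self.encoder = encoder  # any text encoder returning last_hidden_state
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        # use the first-token ([CLS]) representation as the pair embedding
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        return self.head(hidden[:, 0]).squeeze(-1)

def regression_loss(predicted_scores, rouge_scores):
    # fit the predicted scores to the observed ROUGE values
    return nn.functional.mse_loss(predicted_scores, rouge_scores)
```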

yixinL7 commented 2 years ago

We did try the regression approach in our preliminary experiments and found that the re-ranking loss worked better. There is an ACL 2022 paper, "SummaReranker: A Multi-Task Mixture-of-Experts Re-ranking Framework for Abstractive Summarization", which formulates re-ranking as a binary classification task.
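For reference, the re-ranking objective is in the spirit of a pairwise margin loss over candidates sorted by ROUGE. A minimal sketch (not the exact training code in this repo):

```python
import torch

def ranking_loss(scores, margin=0.01):
    """Pairwise margin ranking loss over candidate scores.

    `scores` is a (batch, num_candidates) tensor of model scores, with
    candidates pre-sorted by ROUGE in descending order; a candidate ranked
    j - i positions lower should score at least (j - i) * margin below
    the higher-ranked one."""
    loss = scores.new_zeros(())
    n = scores.size(1)
    for i in range(n - 1):
        for j in range(i + 1, n):
            gap = (j - i) * margin
            loss = loss + torch.relu(scores[:, j] - scores[:, i] + gap).mean()
    return loss
```

One intuition for why a ranking objective can beat regression here is that the final decision only needs the relative order of the candidates, not calibrated absolute ROUGE values.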

Hannibal046 commented 2 years ago

OK, I will check it later. It really helps me a lot. Thanks!