tanay2001 / EFactSum

Improving Factuality of Abstractive Summarization without Sacrificing Summary Quality, ACL 2023

Different reported MINT score #1

Closed zhonying closed 1 year ago

zhonying commented 1 year ago

Hello, thanks for your exciting work!!

I have a question about the MINT score you report. The MINT score for your BART model differs from the one reported in the original paper (Evaluating the Tradeoff Between Abstractiveness and Factuality in Abstractive Summarization, Table 7). Even when using facebook/bart-large-cnn, there is a gap between your number and the reported one. The original paper uses the average MINT score. What did you use in your work? Once again, thanks for publishing this interesting paper!

tanay2001 commented 1 year ago

Hi, thanks for your interest in our work. The decoding parameters that Dreyer et al. use for BART may differ from ours. The values we use are listed in Table 7 in the appendix; they follow Cao and Wang (2021).

We also report the average MINT score.

Thanks