tingofurro / summac

Codebase, data and models for the SummaC paper in TACL
https://arxiv.org/abs/2111.09525
Apache License 2.0

Which version of QuestEval did you use? #3

Closed ryokamoi closed 2 years ago

ryokamoi commented 2 years ago

I guess the QuestEval implementation by the authors (repo) is used in this code. Can I ask which version (commit id) you used for your results?

https://github.com/tingofurro/summac/blob/53fae37bbdd3995c50b50a2713d196680966c765/model_baseline.py#L17

tingofurro commented 2 years ago

Hey Ryo,

You are correct, we used the implementation from the official GitHub repo for the QuestEval comparison: https://github.com/ThomasScialom/QuestEval. Unfortunately, I no longer have access to the machine (the one I used in Berkeley), but I ran the experiments in mid-July, so based on the GitHub history I assume it was either version 0.2.0 or 0.2.4.

I would hope, however, that the results are not affected by the QuestEval version. Are you noticing discrepancies?

ryokamoi commented 2 years ago

Hi Laban,

Thank you for your kind reply! Honestly, I have not tried your code yet.

However, the README of the QuestEval repository says the code has changed since the paper, and I observed some differences from the older version: https://github.com/ThomasScialom/QuestEval/blob/main/README.md#summarization

I will try versions 0.2.0 and 0.2.4 to reproduce your results.
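Since the exact QuestEval version matters here, a small sanity check before running experiments can catch a mismatched install early. A minimal sketch (the distribution name `questeval` and the version set are assumptions based on this thread; adjust to the release you actually install):

```python
from importlib import metadata

def check_pinned_version(package, expected):
    """Return True if `package` is installed at one of the expected
    versions, False if installed at a different version, and None if
    the package is not installed at all."""
    try:
        version = metadata.version(package)
    except metadata.PackageNotFoundError:
        return None
    return version in expected

# Versions of QuestEval discussed in this thread (assumption).
EXPECTED_QUESTEVAL = {"0.2.0", "0.2.4"}

if __name__ == "__main__":
    status = check_pinned_version("questeval", EXPECTED_QUESTEVAL)
    if status is None:
        print("questeval is not installed")
    elif status:
        print("questeval version matches the setup discussed above")
    else:
        print("warning: questeval version differs; results may not reproduce")
```

A specific tagged release can then be installed directly from the repo, e.g. `pip install git+https://github.com/ThomasScialom/QuestEval.git@v0.2.0` (the exact tag name is an assumption; check the repository's releases page).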