google-research-datasets / xsum_hallucination_annotations

Faithfulness and factuality annotations of XSum summaries from our paper "On Faithfulness and Factuality in Abstractive Summarization" (https://www.aclweb.org/anthology/2020.acl-main.173.pdf).

confusing ids in eval_scores data #6

Closed Lukecn1 closed 3 years ago

Lukecn1 commented 3 years ago

Hi there,

In the eval_scores_xsum_summaries data there are many instances named either "bert_withckpt" or "bert_nockpt" followed by a bbcid. These model names do not appear in the other datasets. This is currently preventing me from using your scores in a project, as I can't reliably map the scores back to the summary-article pairs.

Am I missing something, or is this a mistake? :)

shashiongithub commented 3 years ago

Thanks for noticing this. Please use the following mapping:

ptgen: PtGen
tconvs2s: TConvS2S
bert_nockpt: TranS2S
bert_withckpt: BERTS2S
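A minimal sketch of applying this mapping when reading the score entries. The id format (system name joined to a bbcid with a hyphen) and the helper names are assumptions for illustration, not part of the released data's documented schema:

```python
# Mapping from the system ids used in eval_scores_xsum_summaries to the
# model names used in the other datasets, per the maintainer's reply.
SYSTEM_NAME_MAP = {
    "ptgen": "PtGen",
    "tconvs2s": "TConvS2S",
    "bert_nockpt": "TranS2S",
    "bert_withckpt": "BERTS2S",
}

def normalize_system(system_id: str) -> str:
    """Return the canonical model name for an eval_scores id.

    Assumes ids look like 'bert_nockpt-<bbcid>' with a hyphen separator;
    ids without a known prefix are returned unchanged.
    """
    prefix = system_id.rsplit("-", 1)[0]  # strip a trailing bbcid, if any
    return SYSTEM_NAME_MAP.get(prefix, system_id)
```

With this, a score row's system id can be joined back to the summary-article pair via the canonical model name plus the bbcid.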