mcao516 / EntFA


assertion error for entity positions in evaluation.py #5

Open pkuzengqi opened 2 years ago

pkuzengqi commented 2 years ago

Hi, I found that the following data samples in the XSum test set trigger the assertion error assert position[1] in end_pos, "- {}\n- {}\n- {}\n- {}\n- {}\n".format(position, tokens, probs, entity, end_pos) (https://github.com/mcao516/EntFA/blob/5b2e3557f596a31bca491ac82243d1c625b1ddfa/src/EntFA/utils.py#L38) when running examples/evaluation.py:

[240, 349, 398, 646, 679, 957, 1003, 1293, 1501, 1516, 2006, 2069, 2123, 2811, 3160, 3354, 3679, 4015, 4102, 4833, 5392, 5852, 6214, 6564, 6915, 6930, 7379, 7481, 8043, 8159, 8718, 9473, 9648, 9763, 10456, 10717]
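
For reference, a minimal sketch of how such failing indices can be collected. extract_features here is a hypothetical stand-in for whatever call in examples/evaluation.py eventually reaches the assert in utils.py, and the fairseq-style file names are assumptions:

sources = [l.strip() for l in open('test.source')]  # XSum articles
targets = [l.strip() for l in open('test.target')]  # reference summaries

failing = []
for i, (src, tgt) in enumerate(zip(sources, targets)):
    try:
        extract_features(src, tgt)  # hypothetical wrapper around the EntFA entity-feature call
    except AssertionError:
        failing.append(i)
print(failing)  # e.g. [240, 349, 398, ...]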

mcao516 commented 2 years ago

Hi, can you provide the BBC IDs of those articles?

pkuzengqi commented 2 years ago

If the id field in the Hugging Face dataset format corresponds to the BBC ID, then:

from datasets import load_dataset

dataset = load_dataset("xsum", split='train')
error_cases = {240, 349, 398, 646, 679, 957, 1003, 1293, 1501, 1516, 2006, 2069, 2123, 2811, 3160, 3354, 3679, 4015, 4102, 4833, 5392, 5852, 6214, 6564, 6915, 6930, 7379, 7481, 8043, 8159, 8718, 9473, 9648, 9763, 10456, 10717}
for i, d in enumerate(dataset):
    if i in error_cases:
        print(d['id'], end='\t')

This prints the following list of BBC IDs: 36202526 32593929 39474558 36036068 31569808 36092657 34559662 35518696 19380083 35149409 24617644 33304636 35361241 11400950 37709478 34048486 31300982 37882855 36598379 33025540 38782655 38841689 32157204 33656520 27614689 35932945 37624948 35925828 36074292 33550481 21850495 40484051 17550407 37736693 31100890 35152888

mcao516 commented 2 years ago

Thank you. Which summaries are you evaluating: the reference summaries or model-generated summaries?

pkuzengqi commented 2 years ago

Reference summaries (test.target)

yfqiu98 commented 1 year ago

Same problem here, for both the reference summaries and the model's outputs. Any update?

mcao516 commented 1 year ago

> Same problem here, for both the reference summaries and the model's outputs. Any update?

The BART tokenizer seems to behave strangely when the text contains accented characters. Fixed for now by skipping those samples.
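
For anyone who hits this: BART's byte-level BPE (the GPT-2 vocabulary) represents each byte of a multi-byte UTF-8 character as its own printable character, so any entity end position computed from the lengths of the detokenized pieces drifts past the true character offsets, which is plausibly why position[1] ends up missing from end_pos. A minimal sketch of the drift, using the Hugging Face tokenizer (EntFA itself uses fairseq's BART, but the underlying BPE is the same):

from transformers import BartTokenizer

tok = BartTokenizer.from_pretrained('facebook/bart-large')
text = 'The café in Zürich'  # 18 characters
pieces = tok.tokenize(text)
print(pieces)  # 'é' surfaces as the two characters 'Ã©'

# reconstruct end positions from piece lengths, as an offset-by-piece-length scheme would
end, end_pos = 0, []
for p in pieces:
    end += len(p.replace('Ġ', ' '))  # 'Ġ' marks a leading space in byte-level BPE
    end_pos.append(end)
print(end_pos[-1], len(text))  # 20 vs. 18: each accented character adds one

Skipping the affected samples (e.g. with a try/except AssertionError around the extraction call, as sketched above) sidesteps the crash; a more robust fix might be to take character offsets from BartTokenizerFast with return_offsets_mapping=True, which accounts for the byte-to-character mapping.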