google-research / tapas

End-to-end neural table-text understanding models.
Apache License 2.0

When parsing features["label_ids"], why are some features["label_ids"] wrong? #73

Closed · lairikeqiA closed this 4 years ago

lairikeqiA commented 4 years ago

For example:
When I parse nu-1 (nu-1 how many people were murdered in 1940/41? csv/204-csv/149.csv 100,000) in test.tsv, the parsed features["label_ids"] is (4, 7). However, the true features["label_ids"] is (1, 3). Could you explain this phenomenon?

Q: how many people were murdered in 1940/41?

Table:

| Description | Losses 1939/40 | 1940/41 | 1941/42 | 1942/43 | 1943/44 | 1944/45 | Total |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Direct War Losses | 360,000 | | | | | 183,000 | 543,000 |
| Murdered | 75,000 | 100,000 | 116,000 | 133,000 | 82,000 | | 506,000 |
| Deaths In Prisons & Camps | 69,000 | 210,000 | 220,000 | 266,000 | 381,000 | | 1,146,000 |
| Deaths Outside of Prisons & Camps | | 42,000 | 71,000 | 142,000 | 218,000 | | 473,000 |
| Murdered in Eastern Regions | | | | | | 100,000 | 100,000 |
| Deaths other countries | | | | | | | 2,000 |
| Total | 504,000 | 352,000 | 407,000 | 541,000 | 681,000 | 270,000 | 2,770,000 |

ghost commented 4 years ago

I suspect this is an artifact of normalizing the answer. This could happen if a similar string occurs in multiple cells. Is this an example from WTQ, SQA or WikiSQL? What is the true answer text?
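
To make the failure mode concrete, here is a minimal sketch of this kind of matching; `normalize` and `find_answer_coordinates` are simplified stand-ins for illustration, not the repo's actual functions:

```python
def normalize(text):
    # Simplified stand-in for answer/cell normalization: lowercase and
    # collapse whitespace. The real logic in the repo is more involved.
    return " ".join(text.lower().split())


def find_answer_coordinates(table, answer_text):
    # Return every (row, column) whose normalized cell text equals the
    # normalized answer text.
    target = normalize(answer_text)
    return [
        (row_index, column_index)
        for row_index, row in enumerate(table)
        for column_index, cell in enumerate(row)
        if normalize(cell) == target
    ]


# Two rows of the table above are enough to show the ambiguity:
table = [
    ["Murdered", "75,000", "100,000", "116,000"],
    ["Murdered in Eastern Regions", "", "100,000", "100,000"],
]
print(find_answer_coordinates(table, "100,000"))
# [(0, 2), (1, 2), (1, 3)]
```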

ghost commented 4 years ago

I dug into this a bit. The true answer is 100,000 and occurs in three different cells.

Our code will only consider the first match in this case.

I think this logic could probably be improved, but that would require handling alternative label_ids in the model. Alternatively, one could discard such examples at training time.
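
As a sketch of those two options, reusing the hypothetical `find_answer_coordinates` from above:

```python
def coordinates_for_training(table, answer_text, drop_ambiguous=False):
    # Returns the coordinates to use as label_ids, or None if the
    # example should be discarded.
    matches = find_answer_coordinates(table, answer_text)
    if not matches:
        return None  # answer text not found in any cell
    if drop_ambiguous and len(matches) > 1:
        return None  # several candidate cells: skip the example
    return matches[0]  # current behaviour: first match in scan order
```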

lairikeqiA commented 4 years ago

The example is from WTQ. If the logic were improved, the model would probably perform better.

ghost commented 4 years ago

That's probably true. The noisy labels might hurt training.

The tricky bit is that one would need to implement some kind of EM strategy to find the correct coordinates when there are multiple options.
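
For illustration, the selection step of such a hard-EM scheme might look like the sketch below; `score_fn` is a hypothetical stand-in for the current model's likelihood of a candidate coordinate set:

```python
from typing import Callable, List, Sequence, Tuple

Coordinates = List[Tuple[int, int]]


def hard_em_select(
    candidates: Sequence[Coordinates],
    score_fn: Callable[[Coordinates], float],
) -> Coordinates:
    # E-step: score every candidate labelling with the current model and
    # keep the most likely one. The M-step would then train on the winner
    # as if it were the gold label, and the two steps alternate.
    return max(candidates, key=score_fn)
```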

ghost commented 4 years ago

I tried simply dropping the ambiguous examples from the WTQ train set, and this seems to improve the results (Base is the current behaviour, Expt drops the ambiguous examples):

| Pretraining | Base | Expt |
| --- | --- | --- |
| WikiSQL - SQA - Inter - Mask LM | 0.50230 | 0.51139 |
| Mask LM | 0.39597 | 0.41459 |

So I made this the new default. Trying the same for WikiSQL didn't improve the results, though.