s4zong / extract_COVID19_events_from_Twitter

Annotated corpus and code for "Extracting COVID-19 Events from Twitter".
GNU General Public License v3.0

Questions about data, annotation, codebook #12

Closed: ahalterman closed this issue 4 years ago

ahalterman commented 4 years ago

I'm working on the W-NUT challenge and I have some questions about the data annotation process that have come up while looking through a few randomly selected tweets.

For example, in tweet 1238241958490931208, "BREAKING NEWS! Arsenal Head Coach Tests Positive For Coronavirus (Read Details) https://t.co/8CnQr7Ua8H", "Arsenal" is annotated as part of "name" rather than "employer".

Another example where the "name" slot seems to have too many words is tweet 1237171610253053952, "@ABC @morningmika @realDonaldTrump @morningMika Where are OUR tests? So South Korea has drive in tests and Germany has drive in tests and the rest of the world has tests and we can't get a test for my very sick child?". Here, the "name" slot includes the entire phrase 'a test for my very sick child'.

Are these artifacts of the noun chunker that was used during annotation?

I know that this is messy text and I know from experience how difficult annotation projects are. Any guidance you can give us on how the annotators were trained, how each slot was defined, measures of coder agreement, etc, would be really helpful for us as we try to build a model!

s4zong commented 4 years ago

Hi,

Thank you for these great questions.

  1. We merge the annotations in the following way. For slot-filling questions with text spans, a chunk chosen by at least 3 workers becomes the consensus annotation. However, we do notice cases like this: 2 workers choose chunk A, 1 worker chooses chunk B, and chunks A and B overlap. In this case, we check whether the shortest common text span meets our cutoff of 3 workers. If it does, we mark both A and B as correct responses (we do not take the shortest common span alone as the merged annotation, since in our inspection the longer chunk sometimes seems better); see the sketch after this list.

  2. To make the annotation feasible on a crowdsourcing platform, we directly provide annotators with candidate choices to select from, rather than asking them to highlight text spans themselves. The candidates are extracted automatically by a Twitter tagging tool and mainly consist of noun phrase chunks. We do notice some errors made by the chunker, e.g., chunks containing extra tokens. During annotation, annotators were told it is OK to choose a chunk that contains 2-3 extra tokens if there is no better fit.
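
To make the rule in (1) concrete, here is a minimal sketch of the merge logic. The span representation, function names, and the threshold constant are illustrative only, not code from this repository:

```python
# Illustrative sketch of the consensus-merging rule described above.
# Spans are (start, end) token offsets; all names and the threshold of 3
# are assumptions for illustration, not the repository's actual code.
from collections import Counter

CONSENSUS = 3  # a chunk needs support from at least 3 annotators

def overlap(a, b):
    """Return the (start, end) intersection of two spans, or None."""
    start, end = max(a[0], b[0]), min(a[1], b[1])
    return (start, end) if start < end else None

def merge_annotations(worker_choices):
    """worker_choices: one chosen (start, end) chunk per worker."""
    counts = Counter(worker_choices)
    # A chunk chosen by >= CONSENSUS workers is a consensus annotation.
    gold = {span for span, n in counts.items() if n >= CONSENSUS}

    # Overlap case: e.g. 2 workers pick chunk A and 1 picks an
    # overlapping chunk B.  If their combined support reaches the
    # cutoff, keep both A and B as correct responses.
    spans = list(counts)
    for i, a in enumerate(spans):
        for b in spans[i + 1:]:
            if overlap(a, b) and counts[a] + counts[b] >= CONSENSUS:
                gold.update({a, b})
    return gold

# 2 workers chose (0, 3) and 1 chose the overlapping (1, 4): keep both.
print(merge_annotations([(0, 3), (0, 3), (1, 4)]))
```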

Not sure if this answers your concerns? We can discuss more here.

Thanks,

ahalterman commented 4 years ago

Thanks! That helps a lot.

To clarify (1), how does the evaluation script handle the duplicates when it iterates over the gold spans [here]? If it checks for an exact match, at least one of the overlapping spans will be counted as a false negative.

Re (2), that makes a lot of sense for annotation. We had started building a token-level classifier so we could use some token-level grammatical features, but I think we'll switch to classifying the provided noun chunks.
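
For concreteness, here is a rough sketch of the setup I mean: treat each (tweet, candidate chunk) pair as one binary instance per slot. The examples and the off-the-shelf sklearn model are placeholders, not actual code from this repo or the shared-task baseline:

```python
# Sketch: classify (tweet, candidate chunk) pairs instead of tokens.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def pair_text(tweet, chunk):
    # Mark the candidate chunk inside the tweet so the model can see it.
    return tweet.replace(chunk, f"<E> {chunk} </E>", 1)

# Toy training pairs: (tweet, candidate chunk, fills-the-slot label).
train_pairs = [
    ("Arsenal Head Coach Tests Positive For Coronavirus", "Arsenal Head Coach", 1),
    ("Arsenal Head Coach Tests Positive For Coronavirus", "Coronavirus", 0),
]
X = [pair_text(tweet, chunk) for tweet, chunk, _ in train_pairs]
y = [label for _, _, label in train_pairs]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(X, y)
```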

s4zong commented 4 years ago

Yes, I am working on updating the evaluation script to deal with this. My plan is also to manually go through all tweets in the test set to fix this issue, i.e., keep only one of the overlapping spans in the candidate choices.
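
In the meantime, one possible way to score against gold slots that still contain overlapping duplicates (a sketch only, not the actual evaluation script) is to treat each group of overlapping gold spans as a single item that any one exact match can satisfy:

```python
# Sketch: group overlapping duplicate gold spans so a prediction that
# exactly matches any member of a group counts once, and the group is
# not also counted as a false negative.  Illustration only.

def group_overlapping(gold_spans):
    """Greedily chain gold (start, end) spans that overlap."""
    groups = []
    for span in sorted(gold_spans):
        if groups and span[0] < groups[-1][-1][1]:
            groups[-1].append(span)
        else:
            groups.append([span])
    return groups

def score(predicted, gold_spans):
    groups = group_overlapping(gold_spans)
    tp = sum(any(p in g for p in predicted) for g in groups)
    fp = sum(all(p not in g for g in groups) for p in predicted)
    fn = len(groups) - tp
    return tp, fp, fn

# Gold keeps both (0, 3) and (1, 4); predicting either one counts once.
print(score(predicted=[(1, 4)], gold_spans=[(0, 3), (1, 4)]))  # (1, 0, 0)
```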