Closed DUT-lujunyu closed 4 months ago
Hi @DUT-lujunyu, thanks for your interest in our work, sorry for the delayed response.
Here are some answers:

- The annotated files include annotations from human experts, while the main toxigen file does not.
- The train file contains the annotations we collected first, which made it into the original paper submission.
- The test file contains the annotations collected afterwards (same annotators). Together, they make up ~10k human-annotated samples.

Do you mean the `label` column in annotated_train.csv? I do not see that in the original dataset on huggingface.

Thanks for your detailed answers! I downloaded annotated_train.csv from huggingface ("https://huggingface.co/datasets/skg/toxigen-data/blob/main/annotated_train.csv") and got the data as follows. The "label" column does not seem to agree with the calculation method in the paper. So what does the label refer to?
Sorry for the slow response, this is a strange problem. The annotated_train.csv file indeed has that `label` field, but when you download the dataset using huggingface, I don't see it. I believe this label might indicate whether the original intention was to generate hate or non-hate for this instance.
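To check this concretely, one could inspect the raw CSV's header directly. Here is a minimal sketch using a toy stand-in for annotated_train.csv; the column names and values are assumptions for illustration, not the real schema:

```python
import io
import pandas as pd

# Toy stand-in for annotated_train.csv; the real file lives at
# https://huggingface.co/datasets/skg/toxigen-data and may have a
# different schema -- download it and inspect df.columns to be sure.
csv_text = """text,label
"example statement 1",hate
"example statement 2",neutral
"""
df = pd.read_csv(io.StringIO(csv_text))

# Does the raw CSV carry a `label` column at all?
print("label" in df.columns)                  # True in this sketch
print(df["label"].value_counts().to_dict())   # {'hate': 1, 'neutral': 1}
```

Running the same two lines on the real downloaded CSV, and again on the version loaded through the `datasets` library, would show exactly where the column disappears.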
Hi @Thartvigsen,
I have downloaded the dataset from HuggingFace. However, this version of the dataset differs from the one described in the paper.
The paper reports a total of 274186 generated prompts.
However, the dataset available on HuggingFace contains 8960, 940, and 250951 prompts in annotated_train.csv, annotated_test.csv, and toxigen.csv, respectively.
Why is that? Am I missing something here?
Also, from your previous responses, I do not understand a few things:

- Are the samples in annotated_train.csv and annotated_test.csv also present in toxigen.csv?
- Which of annotated_train.csv and annotated_test.csv should we consider the ground truth?

Could you clarify?
Thank you.
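For what it's worth, the counts reported above can be tallied directly (these numbers are taken from the thread, not recomputed from the files):

```python
# File sizes as reported in this thread.
counts = {
    "annotated_train.csv": 8960,
    "annotated_test.csv": 940,
    "toxigen.csv": 250951,
}

hf_total = sum(counts.values())
annotated_total = counts["annotated_train.csv"] + counts["annotated_test.csv"]

print(hf_total)           # 260851 prompts on HuggingFace in total
print(annotated_total)    # 9900, i.e. the "~10k" human-annotated set
print(274186 - hf_total)  # 13335 prompts short of the paper's 274186
```

So the HuggingFace release is 13335 prompts short of the paper's figure, which is the gap discussed below.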
Hi @AmenRa thanks for your interest in our work!
I believe the 274k vs 260k discrepancy comes from duplicate removal, but the original resources were made unavailable, so unfortunately I can't go back and check to be certain.
- Ground truth: annotated_test.csv.
- I don't believe the samples from annotated_train.csv and annotated_test.csv are present in toxigen.csv, though this can be double-checked by looking for the overlap.

Thanks for the fast reply! However, I am still a bit confused.
The paper reports "We selected 792 statements from TOXIGEN to include in our test set". The shared test set, which you are telling me is the original one, comprises 940 samples.
Could you clarify?
Thanks.
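The overlap check suggested above could be done with pandas along these lines. This sketch uses toy frames, and the column names (`text`, `generation`) are assumptions that should be checked against the real file headers:

```python
import pandas as pd

# Toy stand-ins for the real CSVs; the column names `text` and
# `generation` are guesses -- check the actual headers after download.
annotated = pd.DataFrame({"text": ["a", "b", "c"]})
toxigen = pd.DataFrame({"generation": ["c", "d", "e"]})

# Rows of the annotated file whose text also appears in toxigen.csv:
overlap = annotated[annotated["text"].isin(toxigen["generation"])]
print(len(overlap))  # 1 in this toy example
```

Applied to the real annotated_train.csv / annotated_test.csv against toxigen.csv, a zero-length `overlap` would confirm that the splits are disjoint.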
This is a good question and I'm not sure. I don't have access to some of the original internal docs, so this confusion is likely irreducible for us both. I will try to hunt this down. I suspect the root issue is that at the time of the original submission, we'd gotten annotations for <1k samples. Then at the time of paper acceptance, we'd gotten annotations for ~10k samples, resulting in two versions of the dataset for which we conducted splits. That 792 may be an artifact of the original numbers, not the larger annotated set. The 8960-sample annotated_train.csv set should include the annotations collected in the second wave post-submission, but this may have also affected the 792 count somehow.
Ok, thanks!
Hello, your email has been received, thank you.
Dear project managers: When I downloaded the original dataset from huggingface ("https://huggingface.co/datasets/skg/toxigen-data"), I noticed that there are two other files named "annotated_train.csv" and "annotated_test.csv" besides the file "toxigen.csv". I have two questions:
Maybe I missed something. I am sincerely looking forward to your reply. Thank you.