openai / moderation-api-release


Can't reproduce the result reported in paper with 'similar' dataset #1

Closed miniweeds closed 1 year ago

miniweeds commented 1 year ago

I tried to test the moderation API's performance with the Jigsaw dataset from Kaggle. The performance is considerably worse than what was reported in the paper. Why? What am I missing? Here are the parameters for my test:

My test result:
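For reference, a minimal sketch of the kind of evaluation I mean, using the public `/v1/moderations` endpoint with the `requests` library and scikit-learn. The file path, the `"violence"`-vs-`threat` column pairing, and the 0.5 binarization threshold are placeholders, not the paper's setup:

```python
import os

import pandas as pd
import requests
from sklearn.metrics import average_precision_score

API_URL = "https://api.openai.com/v1/moderations"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

def moderation_score(text: str, category: str) -> float:
    """Return the endpoint's score for one category on one text."""
    resp = requests.post(API_URL, headers=HEADERS, json={"input": text})
    resp.raise_for_status()
    return resp.json()["results"][0]["category_scores"][category]

df = pd.read_csv("jigsaw_test.csv")  # placeholder path

# Score each comment with one API category and compare against one Jigsaw
# label column, binarized at 0.5 (Jigsaw labels are annotator fractions).
scores = [moderation_score(t, "violence") for t in df["comment_text"]]
labels = (df["threat"] >= 0.5).astype(int)
print("AUPRC:", average_precision_score(labels, scores))
```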

miniweeds commented 1 year ago

Or is it because the Jigsaw dataset used in the paper is different from the one I used? I tried the non-English Jigsaw dataset too; the performance was worse.

miniweeds commented 1 year ago

I see the problem now. The moderation API only classifies the following categories: "hate", "hate/threatening", "self-harm", "sexual", "sexual/minors", "violence", "violence/graphic". The Jigsaw dataset I used covers many more categories. That explains why the API got such a low AUPRC: the model and the test set don't align. In this case, the API is not suitable for Jigsaw-type problems.

Here are the categories in the Jigsaw train dataset: severe_toxicity, obscene, identity_attack, insult, threat, asian, atheist, bisexual, black, buddhist, christian, female, heterosexual, hindu, homosexual_gay_or_lesbian, intellectual_or_learning_disability, jewish, latino, male, muslim, other_disability, other_gender, other_race_or_ethnicity, other_religion, other_sexual_orientation, physical_disability, psychiatric_or_mental_illness, transgender, white.
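A quick sketch to make the mismatch concrete: intersect the category keys the endpoint actually returns with the Jigsaw label columns (identity columns omitted for brevity; assumes the `requests` library and an `OPENAI_API_KEY` in the environment). The intersection comes back empty:

```python
import os

import requests

resp = requests.post(
    "https://api.openai.com/v1/moderations",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"input": "example text"},
)
resp.raise_for_status()

# The seven keys the endpoint returns ("hate", "hate/threatening", ...).
api_categories = set(resp.json()["results"][0]["category_scores"])

# Toxicity label columns from the Jigsaw train dataset.
jigsaw_categories = {"severe_toxicity", "obscene", "identity_attack", "insult", "threat"}

print("API categories:", sorted(api_categories))
print("Overlap with Jigsaw:", api_categories & jigsaw_categories)  # -> set()
```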