Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built using ⚡ Pytorch Lightning and 🤗 Transformers. For access to our API, please email us at contact@unitary.ai.
Hi and thanks for this great repository!
I tried to fine-tune the RoBERTa model on my own for unintended-bias comment classification. Unfortunately, all of the subgroup AUCs are close to 0.5.
I used the standard configuration given in this repo.
Has anyone tried fine-tuning the models on their own?
I also evaluated the unbiased model provided in the repo, which led to a similar result.
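For context, this is roughly how I scored the released checkpoint before computing the AUCs (a minimal sketch, assuming the detoxify pip package and the Jigsaw column names `comment_text` / `toxicity` in the test CSV):

```python
import pandas as pd
from detoxify import Detoxify
from sklearn.metrics import roc_auc_score

# Labelled test split -- column names assumed from the Jigsaw
# unintended-bias data ("comment_text" text, "toxicity" in [0, 1]).
test = pd.read_csv("private_test.csv")

model = Detoxify("unbiased")  # released checkpoint from this repo

# Score in small batches to keep memory bounded.
scores = []
for start in range(0, len(test), 64):
    batch = test["comment_text"].iloc[start:start + 64].tolist()
    scores.extend(model.predict(batch)["toxicity"])

# Overall AUC against the binarised label (>= 0.5 counts as toxic).
labels = (test["toxicity"] >= 0.5).astype(int)
print("overall AUC:", roc_auc_score(labels, scores))
```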
Thanks in advance,
/M
EDIT:
Hi again!
My fault: I ran the evaluation on public_test.csv but computed the bias AUCs against the labels in private_test.csv :-( Now it is working :-)
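In case anyone else trips over this: the model scores and the identity/label columns have to come from the same CSV. A minimal sketch of the per-subgroup AUC I compute now (the predictions file name is hypothetical, and the identity columns are the ones from the Jigsaw unintended-bias data):

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# Labels/identities and scores must come from the SAME split --
# mixing public_test.csv and private_test.csv is exactly what broke my AUCs.
df = pd.read_csv("private_test.csv")
# "predictions.csv" is a hypothetical file holding this split's model scores.
df["score"] = pd.read_csv("predictions.csv")["toxicity"].values

labels = (df["toxicity"] >= 0.5).astype(int)

# A few of the identity subgroups used by the Jigsaw bias metric.
subgroups = ["male", "female", "black", "white", "muslim", "jewish"]

for sg in subgroups:
    mask = df[sg] >= 0.5  # comments flagged as mentioning this identity
    if labels[mask].nunique() < 2:
        continue  # AUC is undefined when only one class is present
    auc = roc_auc_score(labels[mask], df.loc[mask, "score"])
    print(f"{sg}: subgroup AUC = {auc:.3f}")
```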
/M