GinaJihyeonLee / FairFil-Pytorch


The BERT baseline #4

Open zhenzliu opened 3 years ago

zhenzliu commented 3 years ago

Hello, I have two issues when trying to reproduce your work. First, the number of training examples mentioned in your paper is 183,060, but when I run your code, I get 212,104.

Secondly, the BERT baseline results without debiasing reported in your paper differ from what I obtain when I directly use the SEAT code provided by 'On measuring social biases in sentence encoders' (NAACL 2019) to compute the effect size of BERT before debiasing. Thanks for your help.
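For reference, this is a minimal sketch of the effect-size computation I am comparing against, assuming the standard WEAT/SEAT formulation from May et al. (the helper names and the use of a sample standard deviation are my own choices here, not taken from this repository, and the embeddings would be BERT sentence encodings in practice):

```python
# Minimal sketch of the SEAT/WEAT effect size ("esize"), assuming the
# Caliskan et al. / May et al. formulation. Not the repo's exact code.
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # s(w, A, B): mean cosine similarity to attribute set A minus attribute set B
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    # d = (mean_x s(x,A,B) - mean_y s(y,A,B)) / std over X ∪ Y of s(w,A,B)
    sx = [association(x, A, B) for x in X]
    sy = [association(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)
```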

GinaJihyeonLee commented 3 years ago

Hi, we utilize additional attribute words such as 'queen' and 'king' that are not included in the original paper. If you want to reproduce the original paper's numbers, you might exclude them. For the second question, we also adopt the code of 'On measuring social biases in sentence encoders' (NAACL 2019). However, we observe that the score keeps changing between trials. Please let me know if you solve this problem :)
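If the trial-to-trial variation comes from unseeded randomness (e.g., the sampled permutation test in the SEAT code or nondeterministic GPU kernels), a sketch like the following might help stabilize the runs; the seed value and the specific flags here are assumptions, not part of this repository's code:

```python
# Hedged sketch: pin the common sources of randomness before evaluation.
import random
import numpy as np
import torch

def set_seed(seed: int = 42):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Force deterministic cuDNN kernels (may slow inference down)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```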