[Open] csung7 opened this issue 4 years ago
@csung7 Acc@1 seems correct, as this is the accuracy of the non-abstaining classifier on the test set. What is the Acc@2 value you're seeing on the test set? That would be the accuracy of the DAC and should be > 70%.
@thulas Thanks for your comments, but I don't see any accuracy difference between the baseline and your DAC.
@csung7
There should be an Acc@2 number for the test data -- can you report what this is? I don't see it in your output above. This will almost certainly be different from Acc@1, simply because Acc@2 only computes accuracy over the non-abstained samples.
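To make the distinction concrete, here is a minimal sketch of how the two numbers could be computed. This is only illustrative, not the repo's exact code: the names `logits`, `targets`, and `abstain_class` are placeholders, and it assumes the abstain option is modelled as an extra class at the last logit index.

```python
import torch

def dac_accuracies(logits, targets, abstain_class):
    """Illustrative Acc@1 / Acc@2 computation (not the repo's actual API).

    Acc@1: accuracy of the plain argmax classifier over ALL test samples,
           ignoring the abstain option.
    Acc@2: accuracy of the DAC, computed only over the samples it did NOT
           abstain on.
    """
    # Acc@1: argmax over the real classes only (drop the abstain logit).
    preds_no_abstain = logits[:, :abstain_class].argmax(dim=1)
    acc1 = (preds_no_abstain == targets).float().mean().item()

    # Acc@2: argmax over all classes (including abstain), then keep only
    # the samples where the DAC committed to a real class.
    preds_dac = logits.argmax(dim=1)
    kept = preds_dac != abstain_class
    if kept.any():
        acc2 = (preds_dac[kept] == targets[kept]).float().mean().item()
    else:
        acc2 = float("nan")  # DAC abstained on every sample

    coverage = kept.float().mean().item()
    return acc1, acc2, coverage
```

Because Acc@2 is measured only on the non-abstained (typically cleaner) subset, it can be well above Acc@1 even when the underlying network is the same.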
Dear Sunil,
Thank you for sharing your great work. I was trying to reproduce your experimental results on Identifying Systematic Label Noise, but with your DAC loss I don't get the accuracy you report. This is what I got from your code and hyper-parameters:
And from the above run I got:
Can you help me understand how you get better numbers with your DAC loss?
Thank you.