hongxin001 / logitnorm_ood

Official code for ICML 2022: Mitigating Neural Network Overconfidence with Logit Normalization

GradNorm performs anomalously #3

Open Z-ZHHH opened 1 year ago

Z-ZHHH commented 1 year ago

Thanks for sharing the great work! I cloned the repo and just used the provided pre-trained LogitNorm model (I didn't change anything), then tried to evaluate with GradNorm. First, there was a bug:

```
Traceback (most recent call last):
  File "test.py", line 103, in <module>
    in_score = get_ood_gradnorm(args, net, test_loader, ood_num_examples, device, in_dist=True, print_norm=args.print_norm)
AttributeError: 'Namespace' object has no attribute 'print_norm'
```

I fixed it by setting print_norm=False. Then I got a totally confusing result: the FPR95 score is zero. I don't think it's possible for a scoring function to separate the OOD and ID samples perfectly. Could you please help me with that? I am really confused.
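For reference, a minimal sketch of the workaround, assuming test.py builds `args` with argparse (the flag name `--print_norm` and the call site are taken from the traceback above, everything else is illustrative):

```python
import argparse

parser = argparse.ArgumentParser()
# ... existing arguments of test.py ...
# add the missing flag so args.print_norm exists (defaults to False)
parser.add_argument('--print_norm', action='store_true',
                    help='print average norms while computing the GradNorm score')
args = parser.parse_args()

# alternatively, guard the call site so a missing attribute just defaults to False:
# in_score = get_ood_gradnorm(args, net, test_loader, ood_num_examples, device,
#                             in_dist=True, print_norm=getattr(args, 'print_norm', False))
```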

The MSP result matches the one reported in the paper, and so do the ODIN and energy results (not listed here).

hongxin001 commented 1 year ago

Yes, it is impossible to get such results with the GradNorm score. There might be a bug in the GradNorm score implementation in this version (I used another branch for the results reported in the paper). I will look into this issue when I am free. If you want to fix it yourself, I suggest first printing the average score on in-distribution and OOD data.
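As a starting point for that debugging step, here is a minimal sketch of a per-sample GradNorm score (L1 norm of the gradient of the KL-to-uniform loss w.r.t. the classifier head) plus the suggested ID/OOD average comparison. It assumes the model exposes its final linear layer as `net.fc` and uses a hypothetical `ood_loader`; adjust both to this repo's code.

```python
import torch
import torch.nn.functional as F

@torch.enable_grad()
def gradnorm_scores(net, loader, device, num_classes=10, temperature=1.0):
    """Per-sample GradNorm scores: L1 norm of the gradient of
    KL(softmax(logits/T) || uniform) w.r.t. the last linear layer's weights."""
    net.eval()
    scores = []
    for x, _ in loader:
        for sample in x:  # GradNorm needs per-sample gradients, so batch size 1
            net.zero_grad()
            logits = net(sample.unsqueeze(0).to(device))
            targets = torch.ones_like(logits) / num_classes  # uniform target
            loss = torch.sum(-targets * F.log_softmax(logits / temperature, dim=-1))
            loss.backward()
            grad = net.fc.weight.grad  # assumption: classifier head is net.fc
            scores.append(torch.abs(grad).sum().item())
    return scores

# compare average scores on ID vs. OOD data, as suggested above
# id_scores = gradnorm_scores(net, test_loader, device)
# ood_scores = gradnorm_scores(net, ood_loader, device)
# print(sum(id_scores) / len(id_scores), sum(ood_scores) / len(ood_scores))
```

If the two averages are orders of magnitude apart (or one is constant), that usually points to where the scoring code diverges from the intended per-sample computation.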