praveen5733 opened this issue 4 years ago
So do I. Have you solved it?
We are unable to update the github repo at this moment. However, we have recently built another repo which provides ODIN as well as many other OOD detection methods. Can you try this: https://github.com/jfc43/informative-outlier-mining?
I'm facing the same problem @praveen5733 @tangbohu
Can you provide details on your experimental results, so that I can take a look at the difference? It's possible that the performance varies somewhat across model runs.
Hi all, I have a similar problem. I trained a DenseNet (and a WideResNet) on CIFAR-10, and both models reach normal test accuracy. But when I evaluate them with ODIN on this task, I see a pretty large gap between my results and the reported ones. Maybe I'm missing something here.
For WideResNet, you can refer to our latest paper: https://github.com/wetliu/energy_ood. It's also recommended to use the energy score, as it's parameter-free and gives performance comparable to or better than ODIN.
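For reference, the energy score boils down to a single line; here is a minimal sketch assuming a PyTorch classifier that returns raw logits (the function name is illustrative, not the energy_ood repo's API):

```python
import torch

# Energy score sketch: E(x) = -T * logsumexp(f(x) / T).
# In-distribution inputs tend to get lower energy, so -E(x) serves as
# the detection score (higher = more in-distribution). T = 1 is the
# parameter-free default.
def energy_score(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    return -T * torch.logsumexp(logits / T, dim=1)
```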
For ODIN, you can typically get a ballpark performance estimate by setting the temperature to T=1000.
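Concretely, ODIN's scoring step is temperature scaling plus a small input perturbation. A rough sketch under my own assumptions (the epsilon value is illustrative and is normally tuned per dataset, and I omit the per-channel gradient normalization the official code applies):

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, T=1000.0, epsilon=0.0014):
    x = x.detach().clone().requires_grad_(True)
    # Temperature-scaled log-softmax; the per-sample max is log S_yhat(x; T)
    log_probs = F.log_softmax(model(x) / T, dim=1)
    loss = -log_probs.max(dim=1).values.sum()
    loss.backward()
    # Nudge the input to increase the max softmax score (FGSM-style step)
    x_pert = x - epsilon * x.grad.sign()
    with torch.no_grad():
        probs = F.softmax(model(x_pert) / T, dim=1)
    # Higher max probability -> more likely in-distribution
    return probs.max(dim=1).values
```

You then threshold this score on held-out in-distribution data to compute metrics like FPR95/AUROC.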
I am able to reproduce the results reported in the paper when I use the pretrained models provided in the repo. But when I train a DenseNet from scratch, the results are worse than those reported. Did anyone face a similar problem?