amazon-science / long-tailed-ood-detection

Official implementation for "Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition" (ICML'22 Long Presentation)
https://proceedings.mlr.press/v162/wang22aq/wang22aq.pdf
Apache License 2.0

How can we train the baseline model? OE (outlier exposure) #1

hyunjunChhoi opened this issue 2 years ago

hyunjunChhoi commented 2 years ago

Hello, thanks for your work.

How can we train a baseline model such as OE (outlier exposure)?

I cannot reach a comparable baseline using the original OE code.

Could you help?

htwang14 commented 2 years ago

Hi, can you try the following command?

python stage1.py --gpu 0 --ds cifar10 --Lambda 0.5 --Lambda2 0 --drp <where_you_store_all_your_datasets> --srp <where_to_save_the_ckpt>

By setting Lambda2 as 0, we are removing the PASCL loss. By setting Lambda as 0.5, we are using the OE loss.
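Concretely, my understanding is that these flags weight the loss terms roughly as follows (an illustrative sketch, not the actual `stage1.py` code; all names here are hypothetical):

```python
import torch
import torch.nn.functional as F

def stage1_loss(logits_in, labels_in, logits_out, pascl_term, Lambda=0.5, Lambda2=0.0):
    # Cross-entropy on labeled in-distribution samples.
    ce = F.cross_entropy(logits_in, labels_in)
    # Outlier Exposure (Hendrycks et al., 2019): drive predictions on
    # auxiliary outliers toward the uniform distribution over classes.
    oe = -F.log_softmax(logits_out, dim=1).mean()
    # pascl_term stands in for the PASCL contrastive loss; Lambda2=0
    # switches it off, leaving plain CE + OE training.
    return ce + Lambda * oe + Lambda2 * pascl_term
```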

Let me know if you have further questions.

Thank you!

hyunjunChhoi commented 1 year ago

Thanks for your answer.

I can reproduce the OE model.

However, I cannot reproduce EnergyOE: its accuracy is far below the numbers reported in the paper.

How can I reproduce EnergyOE? (I simply ported the EnergyOE fine-tuning code and trained with it.)
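(For context: EnergyOE presumably refers to the energy-based fine-tuning of Liu et al., NeurIPS 2020. A minimal sketch of that fine-tuning loss, with illustrative margin values that typically need tuning per dataset; this is not this repo's code:)

```python
import torch
import torch.nn.functional as F

def energy_ft_loss(logits_in, labels_in, logits_out, m_in=-25.0, m_out=-7.0):
    # Energy score E(x) = -logsumexp(f(x)); in-distribution inputs should
    # receive low energy, outliers high energy.
    e_in = -torch.logsumexp(logits_in, dim=1)
    e_out = -torch.logsumexp(logits_out, dim=1)
    ce = F.cross_entropy(logits_in, labels_in)
    # Squared hinge penalties pulling in-distribution energies below m_in
    # and outlier energies above m_out (Liu et al., 2020).
    reg = (F.relu(e_in - m_in) ** 2).mean() + (F.relu(m_out - e_out) ** 2).mean()
    return ce + 0.1 * reg  # 0.1 is a commonly used regularization weight
```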

shuaiNJU commented 1 year ago

> Hi, can you try the following command?
>
> python stage1.py --gpu 0 --ds cifar10 --Lambda 0.5 --Lambda2 0 --drp <where_you_store_all_your_datasets> --srp <where_to_save_the_ckpt>
>
> By setting Lambda2 as 0, we are removing the PASCL loss. By setting Lambda as 0.5, we are using the OE loss.

Hi, thanks for your great work. However, I have recently failed to reproduce the OE results from your paper (e.g., on Texture: 92.59% AUROC and 83.32% AUPR for CIFAR10-LT) using the provided training command and pretrained model. I got:

Texture: 90.77 (AUROC), 73.99 (AUPR)

Any suggestions?
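(One thing worth checking when numbers disagree is the evaluation convention. A generic sketch of how AUROC/AUPR are typically computed from per-sample OOD scores; this is not the repo's test script, and `ood_metrics` is a hypothetical helper:)

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def ood_metrics(scores_in, scores_out):
    # Treat in-distribution as the positive class; higher score should
    # mean "more in-distribution". Some papers flip this convention,
    # which changes AUPR (though not AUROC) noticeably.
    y_true = np.concatenate([np.ones_like(scores_in), np.zeros_like(scores_out)])
    y_score = np.concatenate([scores_in, scores_out])
    auroc = 100.0 * roc_auc_score(y_true, y_score)
    aupr = 100.0 * average_precision_score(y_true, y_score)
    return auroc, aupr
```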