hendrycks / outlier-exposure

Deep Anomaly Detection with Outlier Exposure (ICLR 2019)
Apache License 2.0

Which subset did you use for Places365 as OOD? #2

Closed igormq closed 5 years ago

igormq commented 5 years ago

Which subset did you use for Places365 as OOD?

There are plenty of options on the Places365 website. :)

Best, Igor.

hendrycks commented 5 years ago

We used Places365-Standard (256x256) for the in-distribution. For the source of outliers for OE, we used ImageNet-22K. Places69 (256x256) was used during testing.
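The dataset roles in that reply can be jotted down as a small mapping (a sketch only; the key names are my own illustration, not identifiers from the repo):

```python
# Dataset roles for the Places365 experiments, per the reply above.
# Key names are illustrative, not config keys from the repo.
DATASET_ROLES = {
    "in_distribution": "Places365-Standard (256x256)",
    "outlier_exposure": "ImageNet-22K",
    "ood_test": "Places69 (256x256)",
}
```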

Best, Dan Hendrycks


igormq commented 5 years ago

Thank you, Hendrycks! Did you use the train images (90k+) or the test images (6k+)?

hendrycks commented 5 years ago

I used both Places69 sets together for testing since both sets are unseen.

wetliu commented 4 years ago

Hi Dan. I reran your WRN test code on CIFAR-10 (no retraining or fine-tuning), and everything is consistent except for Places365. You mentioned that you used different subsets of Places365. Does that only apply to the case where Places365 is D_in?

If I use CIFAR-10, here is my setting:

D_in: CIFAR-10
D_oe: 80 Million Tiny Images
D_test: Places69 (256x256) (my comment: it is neither Places365-Standard nor Places365-Challenge 2016; I downloaded the dataset from http://data.csail.mit.edu/places/places_extra69/data_256_extra.tar, which is near the bottom of the page.)

I put the results as a git diff here, comparing your git log and my new results:

-FPR95: 17.28 +/- 1.26
-AUROC: 96.23 +/- 0.20
-AUPR:  87.27 +/- 0.42
+FPR95: 11.22 +/- 0.44
+AUROC: 97.73 +/- 0.08
+AUPR:  90.32 +/- 0.21

Thank you so much for taking the time to help!
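For anyone comparing numbers like the ones above, the three metrics can be sketched in plain Python as follows. This is a minimal, self-contained sketch of the standard definitions (higher score = more anomalous, label 1 = OOD), not the repo's actual metric utilities:

```python
import math

def fpr_at_95_tpr(scores, labels):
    """FPR95: fraction of in-distribution samples whose anomaly score
    clears the threshold that catches 95% of the OOD samples."""
    ood = sorted((s for s, y in zip(scores, labels) if y == 1), reverse=True)
    ind = [s for s, y in zip(scores, labels) if y == 0]
    # Smallest threshold that still admits at least 95% of OOD scores.
    thresh = ood[math.ceil(0.95 * len(ood)) - 1]
    return sum(s >= thresh for s in ind) / len(ind)

def auroc(scores, labels):
    """AUROC via the rank statistic: probability that a random OOD score
    exceeds a random in-distribution score (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def aupr(scores, labels):
    """AUPR as average precision, treating OOD (label 1) as positive."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    ap = 0.0
    for i in order:
        if labels[i] == 1:
            tp += 1
            ap += tp / (tp + fp)
        else:
            fp += 1
    return ap / sum(labels)
```

Results can still legitimately differ across runs of the same checkpoint when the OOD test set is itself a random subset, which is the point Dan clarifies below.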

hendrycks commented 4 years ago

Sorry. Places69 was only used for testing the Places365 classifier. The other models use Places365 test images as OOD data. Earlier I used a fixed subset of places365_standard/test; now that I have a faster computer, I just randomly sample from places365_standard/test.
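That random-subset step can be sketched as below. This is my own illustration of the idea (the function name, seed, and subset size are hypothetical, not from the repo); fixing the seed makes reruns score against the same subset, which matters when comparing numbers like the diff above:

```python
import random

def sample_ood_subset(image_paths, n, seed=1):
    """Draw a reproducible random subset of n OOD test images, e.g. from
    a file listing of places365_standard/test.

    The fixed seed keeps the subset stable across reruns; without it,
    each evaluation would score against a different random subset.
    """
    rng = random.Random(seed)
    return rng.sample(image_paths, min(n, len(image_paths)))
```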
