KevLuo / OpenSet_ReciprocalPoints

Re-implementation of the ECCV '20 paper on reciprocal points for open-set recognition, the state of the art in open-set recognition for computer vision as of 10/2020.

How to run the code? #1

wjun0830 opened this issue 3 years ago

wjun0830 commented 3 years ago

I have cloned this repo and am trying to run the code on the Tiny ImageNet dataset.

The error message says there is no module named 'zsh', which I guess is used for data sampling.

Can you provide instructions for running the code, and the requirements?

Thanks

KevLuo commented 3 years ago

Hi @wjun0830 , I just updated the code so that the training stage should work on Tiny ImageNet successfully, provided the data paths are set up correctly. I'll have detailed instructions on how to run the code + requirements shortly (a bit busy right now, but let's say by Tuesday).

wjun0830 commented 3 years ago

Thanks, I will wait for that. I still have problems here. It would be great if you could provide default parameters for the experiment in the args!

yunruiguo commented 3 years ago

Could you show me the part that computes AUC? Do you use the sklearn package?

yunruiguo commented 3 years ago

I checked again. In collect_prediction.py, you call a function named calc_auroc() to compute AUC, but there is no implementation of it. I want to know how you compute it. Thank you.

KevLuo commented 3 years ago

@yunruiguo I just made the AUROC computation available in evaluate.py. Sorry about that; fragments of the code are still stuck in a private research repository, and I haven't had a chance to fully test everything.
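
For reference, here is a minimal sketch of how AUROC is commonly computed with sklearn for open-set recognition. The actual calc_auroc() in evaluate.py may differ in its details, and the score definitions in the comments below are assumptions:

```python
# Minimal sketch, not necessarily the repo's implementation: AUROC via
# sklearn, treating closed-set (known) test samples as positives.
import numpy as np
from sklearn.metrics import roc_auc_score

def calc_auroc(known_scores, unknown_scores):
    # known_scores / unknown_scores: 1-D arrays of confidence scores for
    # closed-set and open-set samples, e.g. max softmax probability
    # ("prob") or negative distance to the nearest reciprocal point ("dist").
    scores = np.concatenate([known_scores, unknown_scores])
    labels = np.concatenate([np.ones(len(known_scores)),     # known   -> 1
                             np.zeros(len(unknown_scores))])  # unknown -> 0
    return roc_auc_score(labels, scores)
```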

KevLuo commented 3 years ago

@wjun0830 I just made many of the args optional with default values, added more detailed descriptions for each arg, and added more running/installation instructions to the README. Let me know if you're still having issues, and be as specific as you can.
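
For anyone unfamiliar with the pattern, a tiny illustrative argparse sketch of what "optional args with default values" looks like. The argument names and defaults below (e.g. --latent_size, --num_rp) are assumptions for illustration; the real ones live in the repo's arg-parsing code:

```python
# Illustrative only: optional command-line args with defaults via argparse.
import argparse

parser = argparse.ArgumentParser(description="Train/evaluate RPL models.")
parser.add_argument("--dataset", default="TINY",
                    help="dataset identifier, e.g. TINY")
parser.add_argument("--latent_size", type=int, default=128,
                    help="dimensionality of the encoder's latent space")
parser.add_argument("--num_rp", type=int, default=1,
                    help="number of reciprocal points per class")
args = parser.parse_args()
```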

wjun0830 commented 3 years ago

@KevLuo I have tried to run the code with the default parameters and the following command:

python collect_prediction.py 'TINY' 'TEST' 'TRUE' 128 1 1 'OSCRI_encoder' 32 '20closed_split0_closedval10_size32' './checkpoint/pat_30_div_TRUE_gap_TRUE_sched_patience_latsize_128_numrp_1_lambda_0.1_gamma_0.5_dataset_20closed_split0_closedval10_size32_01_64_32_OSCRI_encoder/'

The results are as below:

model using global average pooling layer on top of encoder
number of prob thresholds: 5778
number of dist thresholds: 9988
Dist-Auroc score: 0.5434347222222222
Prob-Auroc score: 0.4818612777777778

The AUROC score is very low... Were you able to get the AUROC score reported in the original paper?

yunruiguo commented 3 years ago

I cannot get the reported results either. @wjun0830

KevLuo commented 3 years ago

@wjun0830 @yunruiguo I'm actually not too surprised by this; there are a few weird things about Tiny ImageNet as a dataset:

1) To get the results for a single run, run one trial on each dataset split (split 0, split 1, etc.) and average the AUROCs. Use the AUROC strategy that gives you the best value (either all prob-AUROCs or all dist-AUROCs).

2) Tiny ImageNet is so small that I personally observed high variance in results between runs. For example, I would run once on split 0 and get 54%, then run another time on split 0 and get 58%. Furthermore, different splits have extremely different "average" results: split 0 may tend to cluster in the 56% range while split 1 may tend to cluster in the 68% range. Tiny ImageNet is quite problematic for this reason, in my opinion. This means the following: for each dataset split, run 10 trials and average all of these results (see the sketch below).

3) If neither of the above works, I would suggest setting the number of epochs to 300 (I recall using this to get the best results). It's possible that I need to slightly tweak the defaults to be exactly what I used (they should have been close), but the number of epochs should be the main difference. I'll need to re-run on my own machine with this exact code version to make sure. This may take up to two weeks because I temporarily lost access to my university's GPU machines (system failure), and regaining access is a bureaucratic process as a university alumnus.
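
For concreteness, a minimal sketch of the averaging protocol in points 1) and 2), assuming you have already collected per-trial AUROCs for each split (the AUROC values below are illustrative only, in the ballpark of the numbers quoted above):

```python
# Minimal sketch (not from the repo) of the evaluation protocol: run
# several trials per split, average within each split, then average the
# per-split means to get the reported AUROC.
import numpy as np

aurocs_by_split = {
    0: [0.54, 0.58, 0.56],  # hypothetical trial AUROCs on split 0
    1: [0.68, 0.66, 0.70],  # hypothetical trial AUROCs on split 1
}

per_split_mean = {s: float(np.mean(t)) for s, t in aurocs_by_split.items()}
reported_auroc = float(np.mean(list(per_split_mean.values())))
print(per_split_mean, reported_auroc)
```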

Let me know what you guys get after trying the first two strategies! I'm a bit busy during the week right now because of my day job, but I'll try to help as much as possible.

aphroditee666 commented 3 years ago

@KevLuo Hi, I got similar results when using the OSRCI backbone, but the results don't improve when using the wide-resnet backbone. Did you get the reported results using wide-resnet? Or are there any differences in the implementation?

KevLuo commented 3 years ago

Hi @aphroditee666 , great to hear you were able to replicate the results I reported using the OSCRI backbone!

In my own experiments, I was unable to reproduce the authors' reported wide-resnet result of 80.9%. The best I could get was 70%. It's possible the authors used a different version of wide-resnet, but as far as I can tell it's basically the same. Please let me know if you end up being able to reproduce their wide-resnet results!

aphroditee666 commented 3 years ago

@KevLuo Did you try wide-resnet on CIFAR+10? Is the result still the same?

yunruiguo commented 3 years ago

@KevLuo @wjun0830 @aphroditee666 If you work on open-set recognition, there is another ECCV 2020 paper, "Hybrid Models for Open Set Recognition", which reported even better results on all the AUROC experiments but did not publish code. I have also emailed all the authors several times but received no reply. I tried to reproduce the model, but it does not seem to converge.

yunruiguo commented 3 years ago

Could you share your wide-resnet code with me? I want to compare it with mine to make sure we are doing the same thing.
