Closed: jeff024 closed this issue 3 months ago.
Hi @jeff024, I opened the CSV files provided in the link. While the OOD detection results look right, the accuracy seems weird (below is a screenshot for CIFAR-10).
I looked at the forward method of palm_net, and I suspect the low accuracy comes from it always outputting features rather than classification logits.
https://github.com/Jingkang50/OpenOOD/blob/183f86e2f5756429d643a0c65f43060d7ecea2be/openood/networks/palm_net.py#L27-L33
Like the previous methods CIDER and NPOS, our method PALM only trains an image encoder, without any classification head. The only way to obtain ID accuracy is linear probing, i.e., training a standalone classification head on top of the encoder trained with PALM. That is why the reported ID accuracy looks so weird.
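To make the linear-probing point concrete, here is a minimal sketch of training a standalone head on frozen features. Everything here is illustrative (random feature dimensions, synthetic data); it is not taken from the OpenOOD or PALM code, only a toy stand-in for "encoder frozen, head trained":

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen encoder outputs: the features are fixed and never
# updated, exactly as when probing a PALM/CIDER/NPOS encoder.
n, feat_dim, n_classes = 256, 64, 10
feats = rng.normal(size=(n, feat_dim))

# Synthetic labels that are linearly decodable from the features,
# so the probe has something to recover.
W_true = rng.normal(size=(feat_dim, n_classes))
labels = np.argmax(feats @ W_true, axis=1)

# Standalone linear head trained by full-batch softmax regression.
W = np.zeros((feat_dim, n_classes))
b = np.zeros(n_classes)
onehot = np.eye(n_classes)[labels]

for _ in range(300):
    logits = feats @ W + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    grad = (probs - onehot) / n                   # cross-entropy gradient
    W -= 1.0 * (feats.T @ grad)                   # only the head is updated
    b -= 1.0 * grad.sum(axis=0)

acc = (np.argmax(feats @ W + b, axis=1) == labels).mean()
```

The point of the sketch is that the reported accuracy measures the probe, not the encoder's own classification ability, which is why a headless method's ID accuracy is not directly comparable.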
Given this, could you please leave the ID accuracy as NA, just like CIDER and NPOS on the leaderboard?
Oh I see now. Thanks for the clarification. Will update asap.
Thanks!
PALM entries have been included now. Thank you for the work too.
In correspondence with our recent pull request adding the new method PALM to OpenOOD, we would also like to contribute to the OpenOOD public leaderboard. Please find the details as follows:

- Training: PALM (note that we only train the encoder, as in CIDER and NPOS)
- Postprocessor: Mahalanobis distance
- Outlier data: no outlier data needed during training
- ID Accuracy: since we only train the encoder, the ID accuracy remains NA
- ID Datasets: CIFAR-100, CIFAR-10

For other experiment results and pretrained weights, please refer to the following link: https://unsw-my.sharepoint.com/:f:/g/personal/z5183944_ad_unsw_edu_au1/EiBU2BP8l6VCv19XTaG14CYBOzFe-56g2vTdc2BYPxvQPQ?e=SyAP5d
For your convenience, the files in the shared link follow exactly the same structure as the OpenOOD benchmark.
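Since the listed postprocessor is the Mahalanobis distance, here is a minimal sketch of that scoring rule on toy features. The Gaussian clusters, dimensions, and function names are all hypothetical; the actual OpenOOD postprocessor fits class means and a shared covariance on real encoder features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ID features: two Gaussian class clusters standing in
# for encoder outputs on a 2-class ID dataset.
id_feats = np.concatenate([
    rng.normal(loc=-2.0, size=(100, 8)),
    rng.normal(loc=+2.0, size=(100, 8)),
])
id_labels = np.repeat([0, 1], 100)

# Fit per-class means and a shared (tied) covariance on ID features.
means = np.stack([id_feats[id_labels == c].mean(axis=0) for c in (0, 1)])
centered = id_feats - means[id_labels]
cov = centered.T @ centered / len(id_feats)
prec = np.linalg.inv(cov)

def maha_score(x):
    """Negative minimum Mahalanobis distance to any class mean.

    Higher score means "more in-distribution"."""
    d = x[None, :] - means
    return -np.min(np.einsum('ci,ij,cj->c', d, prec, d))

score_id = maha_score(rng.normal(loc=2.0, size=8))   # near a class mean
score_ood = maha_score(np.full(8, 10.0))             # far from both means
```

A test sample close to some class mean gets a high (near-zero) score, while a far-away sample gets a strongly negative one; thresholding this score gives the OOD decision.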