deeplearning-wisc / poem

PyTorch implementation of POEM (Out-of-distribution detection with posterior sampling), ICML 2022

Hello I have some questions in training method #1

Open hyunjunChhoi opened 1 year ago

hyunjunChhoi commented 1 year ago

Thanks for your great work.

I have some questions about the training method.

Normally, in energy-based OOD detection (Liu et al., 2020), training is done by fine-tuning a pretrained classifier.

Why is there no fine-tuning version of POEM? All training is done from scratch. Is there a specific reason or some detail behind this choice?
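For context, the fine-tuning objective from the energy-based OOD detection paper can be sketched roughly as below: cross-entropy on in-distribution data plus squared hinge penalties on the energy scores. The margins `m_in`, `m_out` and weight `lam` here are illustrative placeholders, not values from this repo.

```python
import torch
import torch.nn.functional as F

def energy_score(logits, T=1.0):
    # E(x) = -T * logsumexp(f(x) / T); lower energy ~ more in-distribution
    return -T * torch.logsumexp(logits / T, dim=1)

def energy_finetune_loss(logits_in, labels_in, logits_out,
                         m_in=-25.0, m_out=-7.0, lam=0.1):
    # Cross-entropy on in-distribution samples, plus hinge terms that push
    # ID energies below m_in and auxiliary-OOD energies above m_out.
    ce = F.cross_entropy(logits_in, labels_in)
    e_in = energy_score(logits_in)
    e_out = energy_score(logits_out)
    reg = (F.relu(e_in - m_in) ** 2).mean() + (F.relu(m_out - e_out) ** 2).mean()
    return ce + lam * reg
```

In the fine-tuning setting this loss is applied for a few epochs on top of a pretrained network, whereas training from scratch optimizes a comparable objective from random initialization.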

alvinmingwisc commented 1 year ago

Good question! We trained from scratch for all baseline methods to ensure a fair comparison. In particular, this setting is also used in a baseline with greedy sampling [1]. Fine-tuning is indeed interesting and computationally efficient. We plan to tune hyperparameters and provide a fine-tuning script in the near future. Please stay tuned.
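Until such a script lands, adapting a from-scratch run to fine-tuning mostly means starting from a pretrained checkpoint and using a smaller learning rate for fewer epochs. A minimal sketch (the helper name, checkpoint path, and hyperparameters are hypothetical, not part of this repo):

```python
import torch

def build_finetune_optimizer(model, ckpt_path, lr=1e-3):
    # Hypothetical helper: load pretrained weights into the model, then
    # return an optimizer with a small LR for fine-tuning rather than the
    # larger schedule typically used when training from scratch.
    state = torch.load(ckpt_path, map_location="cpu")
    model.load_state_dict(state)
    return torch.optim.SGD(model.parameters(), lr=lr,
                           momentum=0.9, weight_decay=5e-4)
```

One would then run the usual training loop with this optimizer for a handful of epochs instead of the full from-scratch schedule.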

[1] Chen et al., ATOM: Robustifying Out-of-distribution Detection Using Outlier Mining