Open hyunjunChhoi opened 1 year ago
Hello, thanks for your great work. I have some questions about the training method.

Normally, in Energy OOD (Liu et al., 2020), training is done by fine-tuning a pretrained classifier. Why is there no fine-tuning version in POEM? All training is done from scratch. Is there any specific reason or detail behind this choice?
Good suggestion! We trained all baseline methods from scratch for a fair comparison. In particular, this setting is used in one baseline with greedy sampling [1]. Fine-tuning is indeed interesting and computationally efficient. We plan to tune hyperparameters and provide a fine-tuning script in the near future. Please stay tuned.
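For context, here is a minimal sketch of the energy-regularized fine-tuning loss described in Liu et al. (2020): standard cross-entropy on in-distribution data plus squared hinge penalties that push in-distribution energies below one margin and auxiliary-outlier energies above another. The margins `m_in`/`m_out`, the weight `lam`, and the function name are illustrative assumptions, not code from the POEM repository.

```python
# Sketch of the energy-regularized fine-tuning objective from Liu et al. (2020).
# Margins m_in/m_out and the weight lam are illustrative placeholders;
# they would need tuning per dataset.
import torch
import torch.nn.functional as F

def energy_finetune_loss(logits_in, labels, logits_out,
                         m_in=-25.0, m_out=-7.0, lam=0.1):
    # Energy score: E(x) = -logsumexp_k f_k(x); lower means more in-distribution.
    energy_in = -torch.logsumexp(logits_in, dim=1)
    energy_out = -torch.logsumexp(logits_out, dim=1)
    ce = F.cross_entropy(logits_in, labels)  # standard classification loss on ID data
    # Push ID energies below m_in and auxiliary-outlier energies above m_out.
    reg = (F.relu(energy_in - m_in) ** 2).mean() + \
          (F.relu(m_out - energy_out) ** 2).mean()
    return ce + lam * reg
```

In the fine-tuning setting this loss is optimized for a few epochs starting from a pretrained checkpoint, whereas POEM and the baselines above train the network from random initialization.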
[1] Chen et al., ATOM: Robustifying Out-of-distribution Detection Using Outlier Mining