Jingkang50 / OpenOOD

Benchmarking Generalized Out-of-Distribution Detection
MIT License

Issue about the "positive" label #206

Closed: ZouXinn closed this issue 11 months ago

ZouXinn commented 11 months ago

The first paragraph of the paper corresponding to v1 of this code says that "we consider ID samples as positive". However, in the code, specifically the comment on the function "auc_and_fpr_recall(conf, label, tpr_th)" in "metrics.py", it says that "following convention in ML we treat OOD as positive".

So there may be some inconsistency between the reported results and the paper.
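For reference, here is a minimal sketch of how a metric function of this shape can compute AUROC and FPR at a target TPR while treating OOD as the positive class. It is not the actual OpenOOD implementation: the function name auroc_and_fpr_at_tpr and the label convention (OOD = 1, ID = 0, higher score = more OOD-like) are assumptions made for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve


def auroc_and_fpr_at_tpr(conf, label, tpr_th=0.95):
    """Return AUROC and FPR at the first threshold whose TPR reaches tpr_th.

    Assumes label == 1 marks OOD (the positive class), label == 0 marks ID,
    and conf is an OOD score where higher means more likely OOD.
    """
    ood_indicator = np.asarray(label, dtype=int)
    scores = np.asarray(conf, dtype=float)

    auroc = roc_auc_score(ood_indicator, scores)

    fpr, tpr, _ = roc_curve(ood_indicator, scores)
    # roc_curve returns TPR/FPR in increasing order, so the first index where
    # TPR meets the target gives the smallest FPR achieving that recall.
    fpr_at_tpr = float(fpr[np.argmax(tpr >= tpr_th)])
    return auroc, fpr_at_tpr
```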

zjysteven commented 11 months ago

As you noted, treating ID samples as positive was the convention in v1, while all of the current code is for v1.5. In v1.5 we decided to follow the convention in ML (and the seminal MSP work) of treating OOD samples (or anomalies) as "positive" and ID samples as "negative". The code and the v1.5 report are therefore consistent. Also, note that we use AUROC as the main metric, which is agnostic to which class is labeled positive.
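To illustrate that last point, here is a small self-contained check, using synthetic Gaussian scores rather than OpenOOD code or data, showing that AUROC is unchanged when the positive class is swapped, provided the score direction is flipped accordingly.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic OOD scores: ID samples centered at 0, OOD samples centered at 2.
ood_scores = np.concatenate([rng.normal(0.0, 1.0, 1000),
                             rng.normal(2.0, 1.0, 1000)])
is_ood = np.concatenate([np.zeros(1000), np.ones(1000)])

auroc_ood_positive = roc_auc_score(is_ood, ood_scores)      # OOD treated as positive
auroc_id_positive = roc_auc_score(1 - is_ood, -ood_scores)  # ID treated as positive
print(auroc_ood_positive, auroc_id_positive)                 # identical values
```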

ZouXinn commented 11 months ago

Thank you for your reply.

zjysteven commented 11 months ago

Closing now, but feel free to reopen if there are any other questions.