hendrycks / outlier-exposure

Deep Anomaly Detection with Outlier Exposure (ICLR 2019)
Apache License 2.0

About the metric you use #14

Closed · d12306 closed this issue 3 years ago

d12306 commented 3 years ago

Hi, @hendrycks , thanks for your code!

I am new to out-of-distribution (OOD) detection, and I am not sure about the OOD score used in the code:

```python
_score.append(to_np((output.mean(1) - torch.logsumexp(output, dim=1))))
```

Don't we use the maximum predicted softmax probability as the metric to discriminate between in-distribution and out-of-distribution data? Where does this score originate?

I would appreciate it if you could point me to some papers that propose this score.

Thanks,

hendrycks commented 3 years ago

This is a more numerically stable but mathematically equivalent form of the negative cross entropy from the uniform distribution to the posterior distribution.
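
To make this concrete, here is a minimal sketch (not code from the repository; the tensor shapes and values are illustrative). For logits z over K classes with softmax posterior p, the cross entropy from the uniform distribution U is H(U, p) = -(1/K) * sum_k log p_k = logsumexp(z) - mean(z), so output.mean(1) - torch.logsumexp(output, dim=1) computes -H(U, p) without ever materializing the probabilities:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(4, 10)  # hypothetical batch of 4 examples, 10 classes

# Score as written in the repo: mean logit minus log-sum-exp of the logits.
score = logits.mean(1) - torch.logsumexp(logits, dim=1)

# Naive but equivalent form: negative cross entropy from the uniform
# distribution to the softmax posterior, -H(U, p) = (1/K) * sum_k log p_k.
neg_cross_entropy = F.log_softmax(logits, dim=1).mean(1)

assert torch.allclose(score, neg_cross_entropy, atol=1e-6)
```

Since H(U, p) = log K + KL(U || p), a confident (peaked) posterior drives the score far below -log K, while a near-uniform posterior pushes it up toward its maximum of -log K; that ordering is what the detector exploits.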

zjysteven commented 3 years ago

@hendrycks Hi, I have a pretty naive question, so I'm not opening a new issue. Why do you take the negative of the predicted/maximum softmax probability as the detection score (what is the purpose of the negative sign)? I'm new to OOD detection, but I thought larger values should correspond to positive samples (in-distribution samples). And will this negative sign affect the AUROC results? Thanks in advance!

hendrycks commented 3 years ago

The AUROC is the probability that a randomly sampled OOD example x_o receives a higher score than a randomly sampled in-distribution example x_i: AUROC = P(score(x_o) > score(x_i)). If we let score = -MSP, then AUROC = P(-MSP(x_o) > -MSP(x_i)) = P(MSP(x_o) < MSP(x_i)).
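
As a quick illustration of that last identity, here is a sketch with made-up MSP values (not results from the paper), assuming scikit-learn's roc_auc_score and the convention that OOD is the positive class with label 1. Negating the MSP makes OOD examples the higher-scoring class; using the raw MSP instead would report 1 - AUROC:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical maximum softmax probabilities: the in-distribution set is
# confident (high MSP), the OOD set less so.
msp_in = rng.uniform(0.7, 1.0, size=1000)   # label 0: in-distribution
msp_out = rng.uniform(0.1, 0.8, size=1000)  # label 1: OOD (positive class)

labels = np.concatenate([np.zeros(1000), np.ones(1000)])
msp = np.concatenate([msp_in, msp_out])

print(roc_auc_score(labels, msp))   # below 0.5: raw MSP ranks OOD lower
print(roc_auc_score(labels, -msp))  # the intended AUROC, using score = -MSP
```

So the sign does not corrupt the metric; it only orients the score so that the positive (OOD) class ranks higher.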