andreas128 / SRFlow

Official SRFlow training code: Super-Resolution using Normalizing Flow in PyTorch

Why is the NLL negative during training? #54

Closed IMSEMZPZ closed 1 year ago

IMSEMZPZ commented 2 years ago

Thanks for your impressive work. During training, we found that the output NLL is negative, but theoretically the NLL should be positive. Is there an explanation for this?

Boltzmachine commented 1 year ago

Same question. It confuses me a lot.

martin-danelljan commented 1 year ago

Hi.

Note that the NLL -log(p) for a discrete distribution p is always non-negative, since 0 <= p <= 1.

However, for a continuous distribution we instead have a probability density p. We have p >= 0, and the other main constraint is that p should integrate to 1 over the domain. So there is nothing stopping p from being larger than one at a point x, and therefore -log(p(x)) may be negative. For example, the uniform density on [0, 0.5] equals 2 on its support, so its NLL is -log(2) < 0.
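As a minimal sketch (not taken from the SRFlow training code), the point can be checked with torch.distributions: a sufficiently narrow Gaussian or a uniform distribution on a short interval has density above 1, so its NLL at such points is negative.

```python
import torch
from torch.distributions import Normal, Uniform

# A narrow Gaussian has density > 1 near its mean: p(0) = 1/(0.1*sqrt(2*pi)) ≈ 3.989,
# so -log p(0) ≈ -1.384, i.e. the NLL is negative.
narrow_gaussian = Normal(loc=0.0, scale=0.1)
print(-narrow_gaussian.log_prob(torch.tensor(0.0)))  # ≈ -1.3836

# A uniform density on [0, 0.5] equals 2 everywhere on its support,
# so the NLL is -log(2) ≈ -0.693 at any point inside the interval.
uniform = Uniform(0.0, 0.5)
print(-uniform.log_prob(torch.tensor(0.25)))  # ≈ -0.6931
```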

For more details, please read about probability densities https://en.wikipedia.org/wiki/Probability_density_function

Boltzmachine commented 1 year ago

Oh got it! Thanks so much!