toshas / torch_truncnorm

Truncated Normal Distribution in PyTorch
BSD 3-Clause "New" or "Revised" License

NaN variance with a or b as infinity #4

Closed. JJMorton closed this issue 3 years ago.

JJMorton commented 3 years ago

Hi,

It seems that when a or b is defined as math.inf, the variance is computed as NaN. Unless I am missing something, the variance should still be well defined in these cases.

For example, comparing to the equivalent scipy function scipy.stats.truncnorm:
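(a minimal reproduction sketch; the import path and the TruncatedNormal(loc, scale, a, b) signature are my assumptions here, adjust to match the repo)

import math
from scipy.stats import truncnorm
from TruncatedNormal import TruncatedNormal  # assumed import path

# Half-normal case: truncate a standard normal at 0 with no upper bound
loc, scale, a, b = 0.0, 1.0, 0.0, math.inf

# scipy takes the truncation bounds in standard-normal units
sp = truncnorm((a - loc) / scale, (b - loc) / scale, loc=loc, scale=scale)
print(sp.var())  # ~0.3634, well defined

pt = TruncatedNormal(loc, scale, a, b)
print(pt.variance)  # tensor(nan) instead of ~0.3634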

I believe it comes down to the line

self._lpbb_m_lpaa_d_Z = (self._little_phi_b * self.b - self._little_phi_a * self.a) / self._Z

Little phi (the standard normal PDF) is correctly evaluated as zero at the infinite limit, but 0 * math.inf results in NaN.
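This is standard floating-point behaviour and easy to confirm in isolation:

import math
print(0.0 * math.inf)  # nan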

As a quick fix, I replaced infinities with zeros in the line above, for example:

self._lpbb_m_lpaa_d_Z = (
    self._little_phi_b * (self.b if self.b != math.inf else 0)
    - self._little_phi_a * (self.a if self.a != -math.inf else 0)
) / self._Z
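The conditionals above assume scalar bounds; if self.a and self.b can be tensors, a broadcast-friendly variant of the same idea could use torch.where (a sketch, not necessarily what the final patch should look like):

self._lpbb_m_lpaa_d_Z = (
    self._little_phi_b * torch.where(torch.isinf(self.b), torch.zeros_like(self.b), self.b)
    - self._little_phi_a * torch.where(torch.isinf(self.a), torch.zeros_like(self.a), self.a)
) / self._Z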
toshas commented 3 years ago

Hi, thanks for spotting this. Is it handled similarly in scipy? If so, could you perhaps file a pull request? Originally I wanted to maintain differentiability of the moments with respect to the parameters, but I'm guessing a = -inf and/or b = inf is a reasonable case, which should simply allow backprop only with respect to the finite parameters of the distribution. Thanks, Anton
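Something along these lines is what I have in mind (a sketch; the import path and constructor signature are assumed):

import math
import torch
from TruncatedNormal import TruncatedNormal  # assumed import path

loc = torch.tensor(0.0, requires_grad=True)
scale = torch.tensor(1.0, requires_grad=True)
d = TruncatedNormal(loc, scale, 0.0, math.inf)  # a and b are plain floats, not tracked
d.variance.backward()
print(loc.grad, scale.grad)  # gradients w.r.t. the finite parameters only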

JJMorton commented 3 years ago

I'm not familiar enough with scipy's internals to be sure how it handles cases like this, but this fix yields the same behaviour in the cases I've tested. I'll submit a pull request.

toshas commented 3 years ago

Fixed in #5