pytorch / pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration
https://pytorch.org

remainder operator output is wrong when input is a multiple of other #121224

Closed jthakurH closed 8 months ago

jthakurH commented 8 months ago

🐛 Describe the bug

import torch

input = torch.tensor([-48.4])  # float32 by default
other = 1.1
print(torch.remainder(input, other))

Here the result comes out as

tensor([1.1000])

-44 * 1.1 = -48.4, so the result should be 0, but we get 1.1.

NumPy, on the other hand, is quite accurate here:

>>> import numpy as np
>>> np.remainder(-48.4, 1.1)
5.329070518200751e-15
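For what it's worth, NumPy's tiny-but-nonzero result can be reproduced in pure Python float64 arithmetic. This is only a sketch of the floored-remainder convention (`fmod` then shift the sign to match the divisor), not NumPy's actual implementation:

```python
import math

# NumPy treats the Python scalars as float64. fmod computes the exact
# remainder of the two doubles, with the sign of the dividend:
r = math.fmod(-48.4, 1.1)      # ≈ -1.0999999999999948
if r != 0.0 and (r < 0.0) != (1.1 < 0.0):
    r += 1.1                   # shift into [0, other), as np.remainder does
print(r)                       # the same tiny value NumPy reports
```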

Versions

[pip3] numpy==1.26.4
[pip3] torch==2.2.1
[pip3] torchaudio==2.2.0
[pip3] torchdata==0.7.1
[pip3] torchmetrics==1.2.1
[pip3] torchtext==0.17.0
[pip3] torchvision==0.17.0

cc @ezyang @gchanan @zou3519 @kadeng @albanD

albanD commented 8 months ago

Note that casting the input to double precision (which is NumPy's default) does lead to the same result as NumPy. And passing NumPy the input the same way PyTorch receives it (as float32) also gives ~1.1:

>>> print(np.remainder(np.array(-48.4, dtype=np.float32), 1.1))
1.0999984741210977

I'm not sure off the top of my head what the numerical reason for this is, but we're consistent with NumPy. So lowering priority, as this is most likely expected behavior.
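The numerical reason can be sketched in pure Python; this uses `struct` to emulate the float32 rounding that `torch.tensor([-48.4])` performs by default, and floored division as in the `remainder` definition:

```python
import math
import struct

# Round -48.4 to float32 the way a default float tensor stores it:
x32 = struct.unpack('f', struct.pack('f', -48.4))[0]
print(x32)                        # -48.400001525878906, not exactly -48.4

q = x32 / 1.1                     # just under -44 ...
print(math.floor(q))              # ... so the floored quotient is -45, not -44

print(x32 - math.floor(q) * 1.1)  # ~1.0999985 — the "1.1000" torch prints
```

So the float32 rounding of -48.4 pushes the quotient just past the integer boundary, and the floored remainder lands near `other` instead of near 0.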

malfet commented 8 months ago

I have a strong sense of déjà vu about this one. 48.4 is one of those numbers that cannot be represented exactly as a finite sum of powers of two, which leads to quite different values in different precisions (i.e. 48.4 as fp64 is 0x1.8333333333333p+5, but as fp32 the same value rounds to 0x1.833334p+5, which, when divided, leads to a very different result), see: https://godbolt.org/z/vxK4Y3cYK
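The representation gap described above can be checked directly with Python's `float.hex`; `struct` is used here to round to float32 without NumPy (a minimal sketch):

```python
import struct

# 48.4 has an infinitely repeating binary fraction, so fp64 and fp32
# round it to different nearby values:
print((48.4).hex())                                 # 0x1.8333333333333p+5
f32 = struct.unpack('f', struct.pack('f', 48.4))[0]
print(f32.hex())                                    # 0x1.8333340000000p+5
print(f32 - 48.4)                                   # ~1.5e-06 rounding gap
```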