Note that setting the input to double precision (which is NumPy's default) does lead to the same result as NumPy. And passing NumPy the input the same way PyTorch receives it (as float32) also gives ~1.1:
>>> print(np.remainder(np.array(-48.4, dtype=np.float32), 1.1))
1.0999984741210977
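To make the comparison concrete, here is a minimal sketch (assuming recent torch/numpy; the exact trailing digits may vary by platform and version) showing both libraries agreeing at both precisions:

```python
import numpy as np
import torch

# float32 inputs: both libraries return ~1.1 instead of 0
print(torch.remainder(torch.tensor(-48.4, dtype=torch.float32), 1.1))
print(np.remainder(np.array(-48.4, dtype=np.float32), 1.1))

# float64 inputs (NumPy's default): both return ~0 (on the order of 1e-15)
print(torch.remainder(torch.tensor(-48.4, dtype=torch.float64), 1.1))
print(np.remainder(-48.4, 1.1))
```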
I'm not sure off the top of my head what the numerical reason for this is, but we're consistent with NumPy. So lowering priority, as this is most likely expected behavior.
I have a strong sense of déjà vu about this one. 48.4 is one of those numbers that cannot be represented exactly in binary (it is not a finite sum of powers of two), which leads to quite different values at different precisions (i.e. -48.4 as fp64 is -0x1.8333333333333p+5, but as fp32 it is -0x1.833334p+5, which when divided leads to a very different result), see: https://godbolt.org/z/vxK4Y3cYK
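For illustration, here is a small sketch of that representation gap using Python's float.hex() and the % operator (which follows the same sign-of-the-divisor convention as np.remainder); the commented outputs are what I'd expect on an IEEE-754 platform:

```python
import numpy as np

x64 = -48.4                  # Python float: IEEE-754 double
x32 = np.float32(-48.4)      # nearest representable float32

print(x64.hex())             # -0x1.8333333333333p+5
print(float(x32).hex())      # -0x1.8333340000000p+5 (= -48.400001525878906)

# float32 rounds -48.4 slightly away from zero, so the rounded value is no
# longer an exact multiple of 1.1 and the remainder lands just below 1.1:
print(x64 % 1.1)             # ~5.3e-15, effectively 0
print(float(x32) % 1.1)      # ~1.0999985
```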
🐛 Describe the bug
Here the result comes out as 1.1. Since -44 * 1.1 = -48.4, torch.remainder(-48.4, 1.1) should return 0, but we get 1.1 instead.
Also, NumPy is quite accurate on this.
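A hypothetical minimal reproduction (the original snippet is not preserved above; torch.remainder is my assumption for the call under discussion):

```python
import numpy as np
import torch

print(torch.remainder(torch.tensor(-48.4), 1.1))  # tensor(1.1000); default dtype is float32
print(np.remainder(-48.4, 1.1))                   # ~5.3e-15, effectively 0
```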
Versions
[pip3] numpy==1.26.4
[pip3] torch==2.2.1
[pip3] torchaudio==2.2.0
[pip3] torchdata==0.7.1
[pip3] torchmetrics==1.2.1
[pip3] torchtext==0.17.0
[pip3] torchvision==0.17.0
cc @ezyang @gchanan @zou3519 @kadeng @albanD