Open majnemer opened 5 days ago
OK, so here is what happens.
atan2f ends up doing its computation in double precision. This computation ends up with a result of:
(lldb) reg read -f"float64" d0
d0 = {-1.1754943508222874E-38}
Its magnitude is just under the single-precision minimum normal, 1.17549435082228750797e-38 (FLT_MIN), so the result ends up getting flushed to 0 when it is converted from double precision to single precision if subnormals are disabled.
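To make the failure mode concrete, here is a minimal standalone sketch (mine, not the test itself) that assumes an AArch64 machine: it toggles the flush-to-zero bit in FPCR via inline assembly and narrows the double value read from `d0` above to single precision. With FZ clear, the conversion rounds to `-FLT_MIN`; with FZ set, the result is flushed to zero, which is the discrepancy described above.

```c
// Minimal standalone sketch (AArch64 only; not the actual test): narrow the
// double value observed in d0 to single precision with and without
// flush-to-zero (FPCR.FZ, bit 24) enabled.
#include <stdint.h>
#include <stdio.h>

static void set_flush_to_zero(int enable) {
  uint64_t fpcr;
  __asm__ volatile("mrs %0, fpcr" : "=r"(fpcr));
  if (enable)
    fpcr |= (UINT64_C(1) << 24);   // FPCR.FZ: flush subnormal results to zero
  else
    fpcr &= ~(UINT64_C(1) << 24);
  __asm__ volatile("msr fpcr, %0" : : "r"(fpcr));
}

int main(void) {
  // Double-precision intermediate reported by lldb; its magnitude is just
  // below FLT_MIN (0x1p-126).
  volatile double d = -1.1754943508222874e-38;

  set_flush_to_zero(0);
  printf("FZ off: %a\n", (double)(float)d);  // -0x1p-126 (rounds to -FLT_MIN)

  set_flush_to_zero(1);
  printf("FZ on:  %a\n", (double)(float)d);  // -0x0p+0 (flushed to zero)

  return 0;
}
```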
@pearu, can you please take a look?
Sure, this issue sounds very similar to the data point in https://github.com/jax-ml/jax/issues/24787#issuecomment-2501337976: on a Mac platform, operations on the smallest normal value lead to flushing to zero, while on other platforms this does not happen. A simple fix is to adjust the test samples as in https://github.com/jax-ml/jax/pull/25117 (use `nextafter(min, 1)` instead of `min`); otherwise, eliminating these corner-case differences between the Mac and Linux platforms may be a complicated task.
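For illustration only, a small C sketch of the kind of nudge being suggested (the actual change in the linked PR adjusts the JAX test samples; this just shows that `nextafter` moves the sample one ULP off the smallest normal while keeping it normal):

```c
// Sketch: replace the exact smallest normal float with the next value toward 1.
#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void) {
  float min_normal = FLT_MIN;                   // 0x1p-126, smallest normal float
  float nudged = nextafterf(min_normal, 1.0f);  // 0x1.000002p-126, still normal
  printf("min    = %a\nnudged = %a\n", (double)min_normal, (double)nudged);
  return 0;
}
```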
This test fails //xla/tests:complex_unary_op_test_cpu on my Mac mini.
I hacked the test up a bit to simplify things.
This is using the following input:
On `Darwin Kernel Version 24.1.0: Thu Nov 14 18:15:21 PST 2024; root:xnu-11215.41.3~13/RELEASE_ARM64_T6041`, this outputs:

After a bit of further investigation, I was able to determine that the root cause is that the implementation of `atan2f` depends on subnormal support even if the inputs and outputs are not subnormal.