michaelosthege opened this issue 2 years ago
The default `aesara.config.cast_policy` is `"custom"`, which does not mimic NumPy's behavior, so parity isn't to be expected. See the documentation on `cast_policy` for more information.

Do you get the same result with `cast_policy` set to `"numpy+floatX"`? If so, we could consider it a bug for that setting, aside from the apparent over/underflow-like result.
Is there a reason why the casting rules are OS-dependent?
> Is there a reason why the casting rules are OS-dependent?
I don't think it is, at least not in the `NumpyAutocaster` code that handles the `cast_policy` option. There are underlying differences in some dtype choices at the OS/NumPy levels, though.
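Those OS-level dtype differences usually trace back to NumPy's historical default integer dtype, which (before NumPy 2.0) followed the platform's C `long`: 32 bits on 64-bit Windows, 64 bits on most Unix builds. A quick, illustrative way to inspect the platform's C `long` width from plain Python:

```python
import ctypes

# Width of the C `long` type on this platform, in bytes.
# NumPy's default integer dtype historically matched this:
# 4 bytes (int32) on 64-bit Windows, 8 bytes (int64) on most Unix builds.
print(ctypes.sizeof(ctypes.c_long))
```

On a typical Linux CI runner this prints `8`, while a Windows runner prints `4`, which is consistent with the OS-dependent results discussed here.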
What are the results for `np.array(-32**2, dtype="int32")` and `at.constant(-32**2, dtype="int32")` on Windows?
Changing the cast policy is effective:

```python
>>> aesara.config.cast_policy = "custom"
>>> aesara.config.cast_policy
'custom'
>>> at.constant(-2**32)
TensorConstant{18446744069414584320}
>>> aesara.config.cast_policy = "numpy+floatX"
>>> aesara.config.cast_policy
'numpy+floatX'
>>> at.constant(-2**32)
TensorConstant{-4294967296}
```
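For what it's worth, the `18446744069414584320` produced under `"custom"` is exactly what you get when `-2**32` is reinterpreted as an unsigned 64-bit integer (two's-complement wraparound), which can be checked in plain Python:

```python
# Two's-complement reinterpretation: taking -2**32 modulo 2**64 yields
# the value the bit pattern represents when read as an unsigned 64-bit int.
print((-2**32) % 2**64)  # 18446744069414584320
```

That matches the `TensorConstant` above, so the "custom" policy is apparently landing on a `uint64` dtype for this negative value.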
So technically it's not a bug then. But is it a good choice for the default?
> What are the results for `np.array(-32**2, dtype="int32")` and `at.constant(-32**2, dtype="int32")` on Windows?
```python
>>> np.array(-32**2, dtype="int32")
array(-1024)
>>> at.constant(-32**2, dtype="int32")
TensorConstant{-1024}
```
It doesn't appear to cause problems in practice. I just noticed this when moving some PyMC test cases from the Ubuntu to the Windows job in our CI pipeline.
> So technically it's not a bug then. But is it a good choice for the default?
There appears to be some overflow issue, but, since it's present in NumPy as well, it's probably not an Aesara bug.
Regarding the default setting, I'm all for changing it to `"numpy+floatX"`.
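As a sanity check on why `int64` is the right promotion target here: `-2**32` lies outside the signed 32-bit range but well inside the signed 64-bit range, so any NumPy-compatible casting rule has to widen to (at least) `int64`:

```python
# -2**32 does not fit in a signed 32-bit integer, but does fit in a
# signed 64-bit integer, so a NumPy-style promotion must pick int64.
val = -2**32
print(-2**31 <= val < 2**31)  # False: outside int32 range
print(-2**63 <= val < 2**63)  # True: inside int64 range
```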
Whereas NumPy correctly uses the `int64` dtype, Aesara doesn't until it's told to.

**Versions and main components**
Aesara config
```
floatX ({'float32', 'float64', 'float16'})
    Doc: Default floating-point precision for python casts.

    Note: float16 support is experimental, use at your own risk.
    Value: float64

warn_float64 ({'raise', 'warn', 'ignore', 'pdb'})
    Doc: Do an action when a tensor variable with float64 dtype is created. They can't be run on the GPU with the current(old) gpu back-end and are slow with gamer GPUs.
    Value: ignore

pickle_test_value (