Description
When we use `pytensor.config.floatX == "float32"`, integer data is downcast to `"int16"`, which has a limited range of about ±32k. For count-based likelihoods this is far too narrow. I am not sure we should be doing anything with integers to begin with. Why are PyTensor casting rules (and customization flags) not sufficient?
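A minimal NumPy sketch of the failure mode (illustrative only, not the library's actual casting code): count data above the `int16` ceiling silently wraps around when downcast.

```python
import numpy as np

# int16 can only represent values up to 32767.
print(np.iinfo(np.int16).max)  # 32767

# Plausible count data, e.g. observed event counts:
counts = np.array([100, 40000, 70000], dtype=np.int64)

# Downcasting to int16 wraps out-of-range values (C-style overflow),
# silently corrupting the observations.
downcast = counts.astype(np.int16)
print(downcast)  # [   100 -25536   4464]
```

With `floatX="float32"` triggering an `int16` downcast, any count above 32767 would be corrupted like this before it ever reaches the likelihood.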