Open · pmeier opened 2 years ago
I agree with the proposal for guarding against types that are likely to overflow. I suspect that `float16` is also very likely to be problematic with many of our `F` kernels. Perhaps it's worth adding an error for that as well.
Related: https://github.com/pytorch/pytorch/issues/35666 https://github.com/pytorch/pytorch/issues/41527 https://github.com/pytorch/pytorch/issues/66707
Regarding the `eps`, there is `torch.finfo(x).tiny`, but I think `torch.finfo` still is not scriptable: https://github.com/pytorch/pytorch/issues/41492
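For reference, the dtype-dependent constants that `torch.finfo` exposes in eager mode (illustrative REPL output; the values follow directly from the IEEE formats):

```python
>>> import torch
>>> torch.finfo(torch.float16).tiny, torch.finfo(torch.float16).eps
(6.103515625e-05, 0.0009765625)
>>> torch.finfo(torch.bfloat16).tiny, torch.finfo(torch.bfloat16).eps
(1.1754943508222875e-38, 0.0078125)
```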
While working on improving performance of `convert_image_dtype` in #6795, I found several cases where `convert_image_dtype` is silently failing for the low precision floating point dtypes `torch.float16` and `torch.bfloat16`:
Converting a valid (b)float16 image in the value range `[0.0, 1.0]` to any integer dtype overflows the computation. This stems from the fact that `eps` is fixed:

https://github.com/pytorch/vision/blob/7a62a545ce76f43ccc5cfe0009131f7db14ae7b5/torchvision/transforms/functional_tensor.py#L90-L93

This value is simply too small for (b)float16: at the magnitude of `max_value + 1`, the precision of these dtypes is coarser than `1e-3`, so `max_value + 1.0 - eps` rounds back to `max_value + 1` and the maximum valid value `1.0` is scaled past the integer range:
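A minimal illustration (hypothetical REPL session; the exact tensor reprs may vary across PyTorch versions):

```python
>>> import torch
>>> # 255 + 1.0 - 1e-3 = 255.999 is not representable in float16/bfloat16
>>> # and rounds back up to 256.0, so a pixel of value 1.0 would be scaled
>>> # past the uint8 range.
>>> torch.tensor(255 + 1.0 - 1e-3, dtype=torch.float16)
tensor(256., dtype=torch.float16)
>>> torch.tensor(255 + 1.0 - 1e-3, dtype=torch.bfloat16)
tensor(256., dtype=torch.bfloat16)
```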
The whole point of `eps` is to be as small as possible to have an even value distribution. See https://github.com/pytorch/vision/pull/2078#issuecomment-613524965 for details. We could simply make `eps` dependent on the input dtype in a function similar to

https://github.com/pytorch/vision/blob/7a62a545ce76f43ccc5cfe0009131f7db14ae7b5/torchvision/transforms/functional_tensor.py#L47
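A rough sketch of what such a helper could look like. The name `_scale_factor`, the `_MACHINE_EPS` table, and the concrete values are purely illustrative, not an existing torchvision API, and the sketch assumes the target integer range still fits into the input float dtype (float16 → int{32, 64} would still need to raise, just like the existing float32 → int{32, 64} check):

```python
import torch

# Relative machine epsilon (one ulp at 1.0) per floating point dtype.
_MACHINE_EPS = {
    torch.bfloat16: 2 ** -7,
    torch.float16: 2 ** -10,
    torch.float32: 2 ** -23,
    torch.float64: 2 ** -52,
}


def _scale_factor(input_dtype: torch.dtype, max_value: int) -> float:
    # Scale eps with the magnitude of `max_value + 1` so the subtraction is not
    # rounded away in the input dtype, but never go below the current fixed
    # 1e-3 that works fine for float32 / float64.
    eps = max(1e-3, _MACHINE_EPS[input_dtype] * (max_value + 1.0))
    return max_value + 1.0 - eps


# Sketched usage inside convert_image_dtype for the float -> int branch:
#   result = image.mul(_scale_factor(image.dtype, _max_value(dtype))).to(dtype)
```

Whether to derive `eps` like this or to hard-code per-dtype values (which would sidestep the `torch.finfo` scriptability issue mentioned above) is up for discussion.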
Converting an `int{32, 64}` image to `float16` should not be possible since `float16` cannot hold their maximum values:
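For illustration (again a hypothetical REPL session):

```python
>>> import torch
>>> torch.finfo(torch.float16).max
65504.0
>>> torch.iinfo(torch.int32).max
2147483647
>>> # The int32/int64 maxima lie far outside the float16 range, so the cast
>>> # to the target dtype overflows to inf before the rescaling even happens.
>>> torch.tensor(torch.iinfo(torch.int32).max).to(torch.float16)
tensor(inf, dtype=torch.float16)
```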
We are already raising an error for unsafe float to int conversions
https://github.com/pytorch/vision/blob/7a62a545ce76f43ccc5cfe0009131f7db14ae7b5/torchvision/transforms/functional_tensor.py#L78-L83
so we could simply do the same here.
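A sketch of what that guard could look like, assuming the same error style as the existing float → int check (the exact set of rejected dtype pairs is up for discussion):

```python
import torch

def _check_int_to_float(image_dtype: torch.dtype, dtype: torch.dtype) -> None:
    # Hypothetical guard mirroring the existing float -> int check: reject
    # integer inputs whose maximum value cannot be represented in the target
    # floating point dtype. Whether bfloat16 (float32 range, much less
    # precision) should also be rejected is a separate question.
    if dtype == torch.float16 and image_dtype in (torch.int32, torch.int64):
        raise RuntimeError(f"The cast from {image_dtype} to {dtype} cannot be performed safely.")
```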
cc @vfdev-5 @datumbox