The apply_image function of BlendTransform only clips pixel values to [0, 255] after blending when the input image array has dtype uint8.
Clipping should also be applied to image arrays that already arrive as float32; otherwise we may feed a model images like this (after increasing the brightness by a factor of ~1.3 on an image already in float32):
https://github.com/facebookresearch/fvcore/blob/8cf4acc89e765b263e2afd8a5bcefa8fd677c5f3/fvcore/transforms/transform.py#L849
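A minimal sketch of the behavior and the proposed fix, assuming the blend formula `src_weight * src_image + dst_weight * img` from the linked source (the standalone function name and parameters here are illustrative, not the actual fvcore API):

```python
import numpy as np

def blend_apply_image(src_image, src_weight, dst_weight, img):
    # Sketch of BlendTransform.apply_image logic.
    if img.dtype == np.uint8:
        # Current behavior: uint8 inputs are blended in float32,
        # clipped to [0, 255], and converted back to uint8.
        img = img.astype(np.float32)
        blended = src_weight * src_image + dst_weight * img
        return np.clip(blended, 0, 255).astype(np.uint8)
    else:
        # Proposed fix: clip float inputs too, so a brightness factor
        # > 1 cannot push pixel values above 255.
        blended = src_weight * src_image + dst_weight * img
        return np.clip(blended, 0, 255)

# Brightness increase by ~1.3 on a float32 image: without the clip,
# pixels near 255 would overflow the expected [0, 255] range.
img = np.full((2, 2), 250.0, dtype=np.float32)
out = blend_apply_image(np.zeros_like(img), 0.0, 1.3, img)
```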