Are RGB images normalized before being fed into the model for training? I can't find any such step; it seems the RGB images are only divided by 255 and converted to tensors. If so, why doesn't the depth estimation task need the usual normalization, i.e. subtracting the per-channel mean and dividing by the standard deviation, which is routine preprocessing for other CV tasks such as semantic segmentation and object detection?
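For reference, here is a minimal sketch of the two preprocessing variants I'm contrasting, assuming a torchvision-based pipeline; the ImageNet mean/std values below are the common torchvision defaults, not values taken from this repo:

```python
import torch
from torchvision import transforms

# What the repo appears to do: only scale uint8 RGB to [0, 1].
# ToTensor converts HWC uint8 to CHW float32 and divides by 255.
to_tensor_only = transforms.ToTensor()

# What I expected (routine in segmentation/detection): per-channel
# mean/std normalization, here with the usual ImageNet statistics
# (assumed defaults, not taken from this repo).
imagenet_norm = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```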