Open wirsu opened 2 years ago
This is not necessary because the YOLO data format is normalized by definition: https://roboflow.com/formats/yolo-darknet-txt
"The annotations are normalized to lie within the range [0, 1] which makes them easier to work with even after scaling or stretching images"
Just think about how the YOLO format is defined and you will come to the same conclusion.
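To illustrate the point about the annotation format: each line of a YOLO darknet txt label holds a box center and size divided by the image dimensions, so the values are in [0, 1] by construction. A minimal sketch of that conversion (the function name `to_yolo` and the example box are my own, for illustration):

```python
def to_yolo(box, img_w, img_h):
    """Convert a pixel-space box (x_min, y_min, x_max, y_max) to
    normalized YOLO (x_center, y_center, width, height) in [0, 1]."""
    x_min, y_min, x_max, y_max = box
    return (
        (x_min + x_max) / 2 / img_w,  # x_center, normalized by image width
        (y_min + y_max) / 2 / img_h,  # y_center, normalized by image height
        (x_max - x_min) / img_w,      # box width, normalized
        (y_max - y_min) / img_h,      # box height, normalized
    )

# e.g. a 100x100 box centered in a 640x640 image
print(to_yolo((270, 270, 370, 370), 640, 640))  # (0.5, 0.5, 0.15625, 0.15625)
```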
This is not about normalization of image sizes or annotations; it is about the pixel values of the images: https://sparrow.dev/pytorch-normalize/
Usually a normalization step during training rescales pixel values from [0, 255] to [0, 1], and often also standardizes them using the mean and std of the pixel values of the whole dataset.
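A minimal sketch of that two-step normalization, in NumPy; the statistics here happen to be the common ImageNet values, shown purely for illustration, where in practice they would be computed over your own dataset:

```python
import numpy as np

# Per-channel statistics over the whole dataset (ImageNet values, for illustration)
MEAN = np.array([0.485, 0.456, 0.406])
STD = np.array([0.229, 0.224, 0.225])

def normalize(img_uint8):
    """Scale a uint8 HWC image to [0, 1], then standardize per channel."""
    x = img_uint8.astype(np.float32) / 255.0
    return (x - MEAN) / STD
```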
Ah, you are right - in general you need to normalize your raw input data too, not only the annotations. Update: I would expect this logic to be part of the dataloader in https://github.com/WongKinYiu/yolov7/blob/main/utils/datasets.py.
I would expect it to be there too... but I can't find any trace of it.
@wirsu RGB normalization doesn't appear to be applied in the dataloader. I stepped through it and the maximum pixel values were all near 255. Scaling to [0, 1] actually happens in the training loop, and no further modifications are done afterwards. The only remaining possibility is that there is more preprocessing inside the model, but I would be surprised.
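The check described above can be reproduced with a small probe over the first few batches; if the maximum is near 255, no [0, 1] scaling happened in the loader. This is a self-contained sketch using a dummy dataloader in place of the real yolov7 one (`pixel_ranges` and the dummy data are my own):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def pixel_ranges(loader, n_batches=3):
    """Return (min, max) of the image tensor for the first few batches,
    to see whether pixels are raw (max near 255) or scaled to [0, 1]."""
    ranges = []
    for i, batch in enumerate(loader):
        imgs = batch[0] if isinstance(batch, (list, tuple)) else batch
        ranges.append((imgs.float().min().item(), imgs.float().max().item()))
        if i + 1 >= n_batches:
            break
    return ranges

# Dummy loader standing in for the yolov7 dataloader (hypothetical data)
loader = DataLoader(TensorDataset(torch.randint(0, 256, (8, 3, 32, 32))), batch_size=4)
print(pixel_ranges(loader))  # max near 255 would mean no [0, 1] scaling in the loader
```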
Hi, I have a question about using custom image normalization parameters for YOLOv7 (mean and std for normalizing images), and thus about deploying this model for inference with custom image normalization.
I see that these lines mention the default ImageNet normalization parameters, but there is no way to change them: https://github.com/WongKinYiu/yolov7/blob/9a00c173a44bfcf3a58cc2a60e37deb6a1fb9e03/utils/torch_utils.py#L232-L237
How can I set custom image normalization parameters for YOLOv7, or is there an explanation for why it is not possible/necessary?
I'm also aware that, thanks to batch normalization, initial image normalization isn't a huge issue, and training and running inference on YOLOv7 hasn't been a problem. But I'm conscious that my dataset is very out of distribution compared to the ImageNet dataset.
By contrast, I can easily find how this is done in yolov5: https://github.com/ultralytics/yolov5/blob/6f0284763b0f66467dc04e5a5d87e5a68d1d49cd/utils/augmentations.py#L18-L19
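Those yolov5 lines just define per-channel ImageNet constants that get applied as a standard standardization after the /255 scaling. Swapping in your own dataset's statistics would be the analogous customization; a minimal sketch, where `CUSTOM_MEAN` and `CUSTOM_STD` are made-up placeholder values, not anything from either repo:

```python
import torch

# Hypothetical per-channel statistics of your own dataset
CUSTOM_MEAN = (0.5, 0.4, 0.3)
CUSTOM_STD = (0.2, 0.2, 0.2)

def normalize(x, mean=CUSTOM_MEAN, std=CUSTOM_STD):
    """Standardize a (N, 3, H, W) float tensor already scaled to [0, 1]."""
    mean = torch.tensor(mean).view(1, 3, 1, 1)
    std = torch.tensor(std).view(1, 3, 1, 1)
    return (x - mean) / std
```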
Thanks