Deci-AI / super-gradients

Easily train or fine-tune SOTA computer vision models with one open source training library. The home of Yolo-NAS.
https://www.supergradients.com
Apache License 2.0

yolo_nas doesn't need to normalize the image (img=img/255.0)? #1804

Closed: FisherDom closed this issue 9 months ago

FisherDom commented 9 months ago

💡 Your Question

I'm training on my own dataset with the yolo_nas_s architecture, but I don't need to scale the image to [0, 1] regardless of whether I set preprocessing to True or False in model.export(). So I wonder whether it has always been that way.

Best wishes :)

Here is my recipe:

defaults:

train_dataloader: C4703_detection_yolo_format_train
val_dataloader: C4703_detection_yolo_format_val

load_checkpoint: False
resume: False

dataset_params:
  train_dataloader_params:
    batch_size: 32

arch_params:
  num_classes: 2

training_hyperparams:
  resume: ${resume}
  mixed_precision: True

architecture: yolo_nas_s

multi_gpu: OFF # DDP
num_gpus: 1

experiment_suffix: ""
experiment_name: C4703_${architecture}${experiment_suffix}

Versions

No response

FisherDom commented 9 months ago

And I'd like to know: when I use SG 3.6.0 with

net = models.get(Models.YOLO_NAS_S, pretrained_weights="coco")
dummy_input = torch.randn([1, 3, 320, 320], device="cpu")
torch.onnx.export(net, dummy_input, "yolo_nas_s.onnx", opset_version=11)

does the model loaded with pretrained_weights already include preprocessing and postprocessing? Thanks a lot.

BloodAxe commented 9 months ago

I'm training on my own dataset with the yolo_nas_s architecture, but I don't need to scale the image to [0, 1] regardless of whether I set preprocessing to True or False in model.export(). So I wonder whether it has always been that way.

Sorry, I can't follow. Can you please elaborate on this statement?

net = models.get(Models.YOLO_NAS_S, pretrained_weights="coco")
dummy_input = torch.randn([1, 3, 320, 320], device="cpu")
torch.onnx.export(net, dummy_input, "yolo_nas_s.onnx", opset_version=11)

does the model loaded with pretrained_weights already include preprocessing and postprocessing?

If you are using torch.onnx.export then you can't have preprocessing and postprocessing. You should be using net.export(...) instead to have the preprocessing and postprocessing steps included in the final model.
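
For reference, a minimal sketch of that export path (the output file name is illustrative; the input_image_shape, preprocessing, postprocessing, and onnx_simplify parameters are the ones used later in this thread):

from super_gradients.training import models
from super_gradients.common.object_names import Models

# Load the pretrained model, then export with preprocessing and postprocessing
# bundled into the ONNX graph.
net = models.get(Models.YOLO_NAS_S, pretrained_weights="coco")
export_result = net.export(
    "yolo_nas_s_with_pre_post.onnx",   # illustrative output file name
    input_image_shape=(320, 320),
    preprocessing=True,
    postprocessing=True,
    onnx_simplify=True,
)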

FisherDom commented 9 months ago

I'm training on my own dataset with the yolo_nas_s architecture, but I don't need to scale the image to [0, 1] regardless of whether I set preprocessing to True or False in model.export(). So I wonder whether it has always been that way.

Sorry, I can't follow. Can you please elaborate on this statement?

net = models.get(Models.YOLO_NAS_S, pretrained_weights="coco")
dummy_input = torch.randn([1, 3, 320, 320], device="cpu")
torch.onnx.export(net, dummy_input, "yolo_nas_s.onnx", opset_version=11)

does the model loaded with pretrained_weights already include preprocessing and postprocessing?

If you are using torch.onnx.export then you can't have preprocessing and postprocessing. You should be using net.export(...) instead to have the preprocessing and postprocessing steps included in the final model.

I want to deploy YoloNAS on Qualcomm chips, so I want to do my own pre- and post-processing outside of the model. But when I export with

export_result = net.export("yolo-nas-s_0131convert_none-pre-post.onnx", input_image_shape=(320, 320), preprocessing=False, postprocessing=False, onnx_simplify=True)

the resulting ONNX model does not need image/255.0, i.e. the result is correct when I don't normalize, and incorrect after normalization.

BloodAxe commented 9 months ago

Part of the YoloNAS preprocessing includes an RGB->BGR conversion, as the model was trained on BGR images. Perhaps this explains why you are getting "incorrect" results.

If you are exporting ONNX with preprocessing, this channel reorder will be added automatically. So your input should be:

  • With preprocessing=True: RGB input, uint8 type
  • With preprocessing=False: BGR input, image/255

Having said that, it would be very helpful to see how the correct and incorrect predictions look.
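
As an illustration of that input contract, a minimal sketch of feeding the preprocessing=False export from this thread, assuming onnxruntime for inference, a float32 NCHW input, and a placeholder image path "test.jpg":

import cv2
import numpy as np
import onnxruntime as ort

# File name taken from the export call earlier in this thread.
session = ort.InferenceSession("yolo-nas-s_0131convert_none-pre-post.onnx")
input_name = session.get_inputs()[0].name

img = cv2.imread("test.jpg")                     # OpenCV loads images as BGR
img = cv2.resize(img, (320, 320))                # plain resize for brevity; match your training-time resizing
blob = img.astype(np.float32) / 255.0            # scale to [0, 1]
blob = np.transpose(blob, (2, 0, 1))[None, ...]  # HWC -> NCHW, add batch dim
outputs = session.run(None, {input_name: blob})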

FisherDom commented 9 months ago

Part of the YoloNAS preprocessing includes an RGB->BGR conversion, as the model was trained on BGR images. Perhaps this explains why you are getting "incorrect" results.

If you are exporting ONNX with preprocessing, this channel reorder will be added automatically. So your input should be:

  • With preprocessing=True: RGB input, uint8 type
  • With preprocessing=False: BGR input, image/255

Having said that, it would be very helpful to see how the correct and incorrect predictions look.

Got it, I can change some code in dataset_params.yaml to define my preprocessing. Thanks a lot!
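
For anyone landing here, a hypothetical sketch of what that dataset_params.yaml change could look like, assuming the DetectionStandardize transform from recent super-gradients recipes is what performs the image/255 scaling (the exact keys of your config may differ):

train_dataset_params:
  transforms:
    - DetectionStandardize:      # scales pixel values by 1/max_value
        max_value: 255.

val_dataset_params:
  transforms:
    - DetectionStandardize:
        max_value: 255.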