ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Inconsistency Between ONNX and PyTorch Model Inference Results in YOLOv5 #12671

Closed LorenzoSun-V closed 8 months ago

LorenzoSun-V commented 8 months ago

Search before asking

YOLOv5 Component

Detection

Bug

I am currently facing an issue with my YOLOv5 model: the inference results of the PyTorch model (.pt file) and the ONNX model (.onnx file) do not match. I followed the official YOLOv5 guidelines for both training (custom dataset; image sizes range from 1920*1024 to 1920*8192, h*w) and conversion. I exported my ONNX model with python export.py --weights ${pth_path} --imgsz 640 --include onnx --simplify, and ran detection with python detect.py --weights ${onnx_path} --source ${img_dir} --imgsz 640 --dnn. Do you have any ideas about this phenomenon? In other cases it works well.

Environment

No response

Minimal Reproducible Example

No response

Additional

No response

Are you willing to submit a PR?

github-actions[bot] commented 8 months ago

👋 Hello @LorenzoSun-V, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Requirements

Python>=3.8.0 with all requirements.txt installed including PyTorch>=1.8. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

Introducing YOLOv8 🚀

We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀!

Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.

Check out our YOLOv8 Docs for details and get started with:

pip install ultralytics
glenn-jocher commented 8 months ago

@LorenzoSun-V hello! Thanks for reaching out and for your willingness to contribute with a PR. Discrepancies between PyTorch and ONNX model inferences can be due to several reasons, such as differences in preprocessing, model simplification during export, or even slight numerical differences between the frameworks.

Here are a few steps you can take to troubleshoot the issue:

  1. Preprocessing: Ensure that the preprocessing steps are identical for both PyTorch and ONNX inferences. This includes image resizing, normalization, etc.

  2. Model Simplification: Sometimes, the --simplify flag during export can lead to minor changes in the model that might affect the results. Try exporting the ONNX model without the --simplify flag and compare the results.

  3. Numerical Precision: Check if there's a numerical precision difference between the two frameworks. ONNX might be using a different precision (e.g., FP16 vs. FP32).

  4. Model Version: Make sure you're using the same version of the YOLOv5 model for both PyTorch and ONNX.

  5. ONNX Runtime: If you're using the ONNX model with a different runtime (e.g., ONNX Runtime), ensure that it's compatible with the exported model and that there are no known issues with the specific version you're using.

  6. Debugging: You can also try to debug layer by layer by comparing the outputs of each layer between the PyTorch model and the ONNX model to pinpoint where the discrepancy starts; a minimal output-comparison sketch follows after this list.
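
For step 6, a minimal sketch along these lines can help. This is not official YOLOv5 code: the weight file names, the 640x640 input size, and the attempt_load call are assumptions based on the commands in this issue, and attempt_load's signature varies between YOLOv5 versions.

import numpy as np
import onnxruntime as ort
import torch
from models.experimental import attempt_load  # YOLOv5 repo helper

x = torch.rand(1, 3, 640, 640)  # one dummy input, fed to both models

# PyTorch side: raw predictions before NMS
pt_model = attempt_load("best.pt", device="cpu")  # hypothetical weights path
pt_model.eval()
with torch.no_grad():
    pt_out = pt_model(x)[0].numpy()

# ONNX side: same tensor through ONNX Runtime
sess = ort.InferenceSession("best.onnx", providers=["CPUExecutionProvider"])
onnx_out = sess.run(None, {sess.get_inputs()[0].name: x.numpy()})[0]

print("max abs diff:", np.abs(pt_out - onnx_out).max())
print("allclose(atol=1e-3):", np.allclose(pt_out, onnx_out, atol=1e-3))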

If you continue to experience issues, please provide a detailed comparison of the results, including any error messages or differences in output, and we can investigate further. Also, check out our documentation for any updates or additional troubleshooting tips.

Thanks for being part of the YOLOv5 community! 🚀

LorenzoSun-V commented 8 months ago

Thank you for your response!

Prior to your reply, I incorporated several negative samples into the training set and fine-tuned the model accordingly. Since then, the ONNX model has stopped producing incorrect detections, but the confidences from the PyTorch and ONNX inference results still differ.

The preprocessing steps for both the PyTorch and ONNX models are consistent, as I used the same detect.py script from YOLOv5. The key difference lies in the execution flags: ONNX requires the --dnn flag, whereas PyTorch runs without it. I also exported the ONNX model without the --simplify flag and obtained identical results.

Based on the outcomes of the aforementioned experiments, it appears that the limited diversity in my training set samples may be leading to suboptimal generalization capabilities of the model. Additionally, there remain noticeable discrepancies between the PyTorch and ONNX models. I plan to debug layer by layer when I have some free time.

glenn-jocher commented 8 months ago

@LorenzoSun-V, it's great to hear that you've made some progress by fine-tuning with negative samples and that you've ruled out preprocessing as a source of discrepancy. The difference in confidences you're observing now could still be attributed to the inherent differences in how PyTorch and ONNX handle computations, even if the preprocessing is consistent.

The --dnn flag in detect.py indicates that you're using OpenCV's DNN module for ONNX inference, which might handle certain operations differently than PyTorch. This could be a source of the slight variations in confidence scores you're seeing.
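
If you want to isolate the backend effect, one rough sanity check (a sketch, not part of detect.py; the file name and the 640x640 input are assumptions) is to run the same ONNX file through both OpenCV DNN and ONNX Runtime on one dummy input and compare the raw outputs:

import cv2
import numpy as np
import onnxruntime as ort

x = np.random.rand(1, 3, 640, 640).astype(np.float32)  # dummy NCHW input

# OpenCV DNN backend (what --dnn uses)
net = cv2.dnn.readNetFromONNX("best.onnx")  # hypothetical exported model
net.setInput(x)
dnn_out = net.forward()

# ONNX Runtime on the same file and input
sess = ort.InferenceSession("best.onnx", providers=["CPUExecutionProvider"])
ort_out = sess.run(None, {sess.get_inputs()[0].name: x})[0]

print("max abs diff between backends:", np.abs(dnn_out - ort_out).max())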

One additional note: small differences in confidence scores might be acceptable depending on your application's tolerance for such variations. If the differences are significant, however, it's worth continuing to investigate.

Thank you for your diligence in debugging this issue, and we appreciate your contributions to the YOLOv5 community. If you find a solution or need further assistance, please feel free to reach out again. Good luck with your debugging efforts! 🛠️

Sanath1998 commented 1 month ago


Hi @glenn-jocher,

I have a 30 mIoU difference between the PyTorch and ONNX results, and I seriously don't understand it. I'm using the same pre- and post-processing for both, yet I still get this issue.

My ONNX export call is shown below.

torch.onnx.export(model, dummy_input, "Custom.onnx", export_params=True,
                  opset_version=17, do_constant_folding=True,
                  input_names=['modelInput'], output_names=['modelOutput'])

I have even tried every opset from 11 to 17 and also tried do_constant_folding=False, but the issue remains. Please help.

glenn-jocher commented 1 month ago

Hi @Sanath1998,

Please ensure you're using the latest YOLOv5 version and verify if the issue persists. If the discrepancy remains, consider debugging layer outputs between PyTorch and ONNX to identify where the divergence occurs. If you need further assistance, feel free to provide more details.
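
For the layer-output debugging, one option (a sketch under the assumption that the export above produced Custom.onnx with a 1x3x640x640 input named modelInput; not a definitive recipe) is to expose the intermediate ONNX tensors as extra graph outputs, inspect them with ONNX Runtime, and compare them against forward-hook activations from the PyTorch model:

import numpy as np
import onnx
import onnxruntime as ort

# Expose intermediate tensors as graph outputs so they can be inspected.
model = onnx.shape_inference.infer_shapes(onnx.load("Custom.onnx"))
model.graph.output.extend(model.graph.value_info)  # add inferred intermediates
onnx.save(model, "Custom_debug.onnx")

sess = ort.InferenceSession("Custom_debug.onnx", providers=["CPUExecutionProvider"])
x = np.random.rand(1, 3, 640, 640).astype(np.float32)  # assumed input shape
outs = sess.run(None, {"modelInput": x})

# Print a quick summary per tensor; compare these against PyTorch forward hooks.
for info, out in zip(sess.get_outputs(), outs):
    print(info.name, out.shape, float(np.abs(out).mean()))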

Thank you for your patience.