ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Which version of TensorRT is usable for converting yolov5 model to tensorrt model and running it on docker container? #8480

Open mcagricaliskan opened 2 years ago

mcagricaliskan commented 2 years ago

Search before asking

Question

Hello Everyone,

These days I am trying to run YOLOv5 with TensorRT in Docker.

I didn't install TensorRT on my Ubuntu host because I want to run YOLOv5 in Docker. I tried the nvcr.io/nvidia/pytorch:22.06-py3 and ultralytics/yolov5 base images and successfully ran yolov5m in a Docker container, but I need more performance: I want to run a large number of video feeds through YOLOv5, so I decided to try to reach TensorRT YOLOv5 speed.

My graphics card is an RTX 3060.

My question: which TensorRT version is correct for converting a YOLOv5 model to a TensorRT engine and running it in a Docker container?

Why am I asking this: to use TensorRT I tried to convert the YOLO model to a TensorRT model. I used the standard scripts from THIS COLAB in my Docker container. Every time I tried, I got the same error:

[07/05/2022-12:43:40] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:368: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[07/05/2022-12:43:40] [TRT] [E] parsers/onnx/ModelImporter.cpp:791: While parsing node number 203 [Resize -> "onnx::Concat_370"]:
[07/05/2022-12:43:40] [TRT] [E] parsers/onnx/ModelImporter.cpp:792: --- Begin node ---
[07/05/2022-12:43:40] [TRT] [E] parsers/onnx/ModelImporter.cpp:793: input: "onnx::Resize_365"
input: ""
input: "onnx::Resize_607"
output: "onnx::Concat_370"
name: "Resize_203"
op_type: "Resize"
attribute {
  name: "coordinate_transformation_mode"
  s: "asymmetric"
  type: STRING
}
attribute {
  name: "cubic_coeff_a"
  f: -0.75
  type: FLOAT
}
attribute {
  name: "mode"
  s: "nearest"
  type: STRING
}
attribute {
  name: "nearest_mode"
  s: "floor"
  type: STRING
}

[07/05/2022-12:43:40] [TRT] [E] parsers/onnx/ModelImporter.cpp:794: --- End node ---
[07/05/2022-12:43:40] [TRT] [E] parsers/onnx/ModelImporter.cpp:796: ERROR: parsers/onnx/builtin_op_importers.cpp:3526 In function importResize:
[8] Assertion failed: scales.is_weights() && "Resize scales must be an initializer!"

TensorRT: export failure: failed to load ONNX file: yolov5m.onnx

TensorRT version of the container: 8.2.5.1

I used these commands:


python export.py --weights yolov5m.pt --include onnx
python export.py --weights yolov5m.pt --include engine --imgsz 640 640 --device 0

After my research I tried running the same commands on Google Colab. I used the YOLOv5 Tutorial notebook, it worked, and I got a yolov5m.engine file.

TensorRT version on Colab: 8.4.1.5

I thought I had succeeded. From this I concluded that the reason I can't convert in the container is the TensorRT version. Please correct me if I am wrong.
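If the version hypothesis is right, the fix is to gate the export on a minimum TensorRT version. A minimal sketch of that guard (the helper names and the 8.4 cutoff are my assumptions, inferred only from the two versions observed in this thread, not from an official compatibility table):

```python
# Hypothetical guard: the ONNX parser in the container's TensorRT 8.2.5.1
# rejects this Resize node, while 8.4.1.5 on Colab accepts it.

def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '8.2.5.1' into a comparable tuple."""
    return tuple(int(x) for x in v.split("."))

def trt_supports_resize_scales(installed: str, minimum: str = "8.4.0.0") -> bool:
    """True if the installed TensorRT meets the assumed minimum for this export."""
    return parse_version(installed) >= parse_version(minimum)

print(trt_supports_resize_scales("8.2.5.1"))  # False: the container's parser fails
print(trt_supports_resize_scales("8.4.1.5"))  # True: the Colab export succeeds
```

Such a check could run before calling export.py, failing fast instead of producing the parser assertion above.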

But when I tried to run the yolov5m.engine model in the yolov5 container I got an error about the engine version:

NVIDIA GeForce RTX 3060

YOLOv5 🚀 v6.1-277-gfdc9d91 Python-3.8.13 torch-1.13.0a0+340c412 CUDA:0 (NVIDIA GeForce RTX 3060, 12046MiB)

Loading model/yolov5m.engine for TensorRT inference...

[07/05/2022-12:55:51] [TRT] [I] [MemUsageChange] Init CUDA: CPU +472, GPU +0, now: CPU 568, GPU 784 (MiB)
[07/05/2022-12:55:51] [TRT] [I] Loaded engine size: 84 MiB
[07/05/2022-12:55:51] [TRT] [E] 1: [stdArchiveReader.cpp::StdArchiveReader::40] Error Code 1: Serialization (Serialization assertion stdVersionRead == serializationVersion failed.Version tag does not match. Note: Current Version: 205, Serialized Engine Version: 213)
[07/05/2022-12:55:51] [TRT] [E] 4: [runtime.cpp::deserializeCudaEngine::50] Error Code 4: Internal Error (Engine deserialization failed.)
Rise Model Excetion: 'NoneType' object has no attribute 'num_bindings'. Cache may be out of date, try `force_reload=True` or see https://github.com/ultralytics/yolov5/issues/36 for help.

The conclusion I draw from this is that I cannot run the yolov5m.engine model with a lower TensorRT version, since I performed the conversion with a higher TensorRT version. Please correct me if I am wrong.
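That reading matches the log: a serialized engine carries the serialization tag of the TensorRT that built it, and the runtime refuses to load anything but an exact match. A toy sketch of that rule (the tag constants are copied from the error message above; the helper itself is hypothetical, since the real check happens inside deserializeCudaEngine):

```python
# Engines are not portable across TensorRT versions: the runtime asserts
# stdVersionRead == serializationVersion before deserializing.
BUILD_TAG_COLAB = 213     # "Serialized Engine Version" from TensorRT 8.4.1.5
RUNTIME_TAG_DOCKER = 205  # "Current Version" of the container's TensorRT 8.2.5.1

def can_deserialize(engine_tag: int, runtime_tag: int) -> bool:
    """Mirror of the version-tag assertion shown in the error log."""
    return engine_tag == runtime_tag

print(can_deserialize(BUILD_TAG_COLAB, RUNTIME_TAG_DOCKER))  # False: load fails
print(can_deserialize(RUNTIME_TAG_DOCKER, RUNTIME_TAG_DOCKER))  # True
```

In other words, the engine must be built and run with the same TensorRT version.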

For now the nvcr.io/nvidia/pytorch:XX.XX-py3 and ultralytics/yolov5 images don't have a TensorRT version higher than 8.2.5.1. So for these reasons I thought I need to find the correct version for converting and deploying, or some other way to run TensorRT. Please correct me if I am wrong.

Additional

No response

github-actions[bot] commented 2 years ago

👋 Hello @mcagricaliskan, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email support@ultralytics.com.

Requirements

Python>=3.7.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher commented 2 years ago

@mcagricaliskan The Docker image comes with TensorRT preinstalled. All you need to do is export and then run any of the usage examples.

If you run in an environment without TRT then YOLOv5 will attempt to auto-install it. The latest version is 8, but TRT 7 may also work for some use cases.

!python export.py --weights yolov5s.pt --include engine --imgsz 640 --device 0  # export
!python detect.py --weights yolov5s.engine --imgsz 640 --device 0  # inference
glenn-jocher commented 2 years ago
Screen Shot 2022-07-05 at 3 38 48 PM
mcagricaliskan commented 2 years ago

@glenn-jocher Can you share your Dockerfile, or which Docker image you use as a base? I tried this with ultralytics/yolov5:latest and it did not work.

docker run --gpus all -it --rm --ipc=host ultralytics/yolov5:latest
python export.py --weights yolov5s.pt --include engine --imgsz 640 --device 0

I got the same error this way.

glenn-jocher commented 2 years ago

@mcagricaliskan you might need to update your Docker image:

Dockerfile is here: https://github.com/ultralytics/yolov5/blob/fdc9d9198e0dea90d0536f63b6408b97b1399cc1/utils/docker/Dockerfile#L1-L33

mcagricaliskan commented 2 years ago

(screenshots attached)

It did not work.

nvcr.io/nvidia/pytorch:21.06-py3 works fine for export; I will try to run the model with it. It has TRT 7.2.3.4.

But why is the latest ultralytics/yolov5:latest not working on my workstation?

glenn-jocher commented 2 years ago

@mcagricaliskan thanks for the screenshots. I'll add a TODO to reproduce and debug this.

glenn-jocher commented 2 years ago

TODO: Investigate possible Docker TRT export bug

mcagricaliskan commented 2 years ago

@glenn-jocher I am trying with nvcr.io/nvidia/pytorch:21.06-py3; export is OK but detect does not work.

(screenshot attached)

Do I need to add a special argument to solve this error?

It also does not work with results.print():

(screenshot attached)

mcagricaliskan commented 2 years ago

nvcr.io/nvidia/pytorch:21.11-py3 works fine

glenn-jocher commented 2 years ago

@mcagricaliskan I tested TensorRT in our current Docker image and everything works correctly. I'm unable to reproduce any issues.

Screenshot 2022-07-07 at 14 22 03
glenn-jocher commented 2 years ago

@mcagricaliskan detect.py also works correctly. Removing TODO.

Screenshot 2022-07-07 at 14 26 10

We've created a few short guidelines below to help users provide what we need in order to start investigating a possible problem.

How to create a Minimal, Reproducible Example

When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:

For Ultralytics to provide assistance your code should also be:

If you believe your problem meets all the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template with a minimum reproducible example to help us better understand and diagnose your problem.

Thank you! 😃

mcagricaliskan commented 2 years ago

> @mcagricaliskan I tested TensorRT in our current Docker image and everything works correctly. I'm unable to reproduce any issues.
>
> Screenshot 2022-07-07 at 14 22 03

@glenn-jocher Your PyTorch version is 1.11.0, but it is 1.13 in the latest ultralytics/yolov5:latest.

nvcr.io/nvidia/pytorch:21.11-py3 works because it contains PyTorch 1.11.

I tried on 3 different PCs with nvidia-docker2 installed and all 3 fail; they also have different GPUs (3060, 3060 Ti, 2080 Super).

(screenshots attached)

mcagricaliskan commented 2 years ago

Also, PyTorch 1.10 works well, but PyTorch 1.12 and 1.13 do not. @glenn-jocher I think it is about the PyTorch version; can you check it?
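The pattern reported across these comments (export works under torch 1.10/1.11, fails under 1.12/1.13) can be written as a simple guard. This is only a summary of the observations in this thread, not a documented compatibility matrix, and the helper names are made up:

```python
# Hypothetical check mirroring the thread's findings: TRT export in the
# Docker image works with torch <= 1.11 but crashes with 1.12/1.13.

def torch_minor(version: str) -> tuple:
    """Extract (major, minor) from strings like '1.13.0a0+340c412'."""
    major, minor = version.split(".")[:2]
    return int(major), int(minor)

def export_known_good(version: str) -> bool:
    """True for the torch versions reported working in this thread."""
    return torch_minor(version) <= (1, 11)

print(export_known_good("1.11.0"))            # True: nvcr 21.11 image works
print(export_known_good("1.13.0a0+340c412"))  # False: the failing latest image
```

In practice this would be fed torch.__version__ before attempting the engine export.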

zhuya1996 commented 2 years ago

I also encountered this problem. I think it's the PyTorch version.

glenn-jocher commented 2 years ago

@mcagricaliskan yes, it looks like you are correct: TRT export in Docker is broken. It appears to be a PyTorch-related issue, and downgrading appears to resolve it. I'm not sure what other solution there is, unfortunately.

EDIT: TODO: TRT export in Docker crashed due to torch 1.13

anjineyulutv commented 1 year ago

I am also facing the same error when I run trtexec to convert from ONNX to TRT. Let me know how to resolve this for the version in which the error occurs.

glenn-jocher commented 11 months ago

@anjineyulutv I will investigate the issue further and get back to you with a solution for the version in which the error occurs. Thank you for your patience.