mcagricaliskan opened 2 years ago
👋 Hello @mcagricaliskan, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.
If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.
For business inquiries or professional support requests please visit https://ultralytics.com or email support@ultralytics.com.
Python>=3.7.0 with all requirements.txt installed including PyTorch>=1.7. To get started:
git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
pip install -r requirements.txt # install
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@mcagricaliskan Docker image comes with TensorRT preinstalled. All you need to do is export and then run any of the usage examples.
If you run in an environment without TRT then YOLOv5 will attempt to autoinstall. Latest version is 8 but TRT 7 may also work for some use cases.
!python export.py --weights yolov5s.pt --include engine --imgsz 640 --device 0 # export
!python detect.py --weights yolov5s.engine --imgsz 640 --device 0 # inference
@glenn-jocher Can you share your Dockerfile, or which Docker image you use as a base? I tried this with ultralytics/yolov5:latest
and it did not work.
docker run --gpus all -it --rm --ipc=host ultralytics/yolov5:latest
python export.py --weights yolov5s.pt --include engine --imgsz 640 --device 0
This way I got the same error.
@mcagricaliskan you might need to update your Docker image:
sudo docker pull ultralytics/yolov5:latest
to update your image Dockerfile is here: https://github.com/ultralytics/yolov5/blob/fdc9d9198e0dea90d0536f63b6408b97b1399cc1/utils/docker/Dockerfile#L1-L33
It did not work.
nvcr.io/nvidia/pytorch:21.06-py3 works fine for export (it has TRT 7.2.3.4); I will try to run the model with it.
But why is the latest ultralytics/yolov5:latest image not working on my workstation?
@mcagricaliskan thanks for the screenshots. I'll add a TODO to reproduce and debug this.
TODO: Investigate possible Docker TRT export bug
@glenn-jocher I am trying with nvcr.io/nvidia/pytorch:21.06-py3: export is OK but detect does not work.
Do I need to add a special argument to solve this error?
It also does not work with results.print().
nvcr.io/nvidia/pytorch:21.11-py3
works fine
@mcagricaliskan I tested TensorRT in our current Docker image and everything works correctly. I'm unable to reproduce any issues.
@mcagricaliskan detect.py also works correctly. Removing TODO.
We've created a few short guidelines below to help users provide what we need in order to start investigating a possible problem.
When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:
For Ultralytics to provide assistance your code should also be:
Verify your issue is reproducible in an up-to-date repository: git pull or git clone a new copy to ensure your problem has not already been solved in master. If you believe your problem meets all the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template with a minimum reproducible example to help us better understand and diagnose your problem.
Thank you! 😃
@glenn-jocher Your PyTorch version is 1.11.0, but it is 1.13 in the latest version of ultralytics/yolov5:latest.
nvcr.io/nvidia/pytorch:21.11-py3
works because it contains PyTorch 1.11.
I tried on 3 different PCs with nvidia-docker2 installed and all 3 fail; they also have different GPUs (3060, 3060 Ti, 2080 Super).
Also, PyTorch 1.10 works well, but PyTorch 1.12-1.13 does not. @glenn-jocher I think it is about the PyTorch version, can you check it?
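The version split reported above (1.10/1.11 work, 1.12/1.13 fail) can be sketched as a simple check. The helper names and the 1.12 cutoff are assumptions taken from the reports in this thread, not an official compatibility matrix:

```python
# Sketch: gate TRT export on the torch minor version, based on the
# observations above. The (1, 12) cutoff is an assumption from this
# thread only.
def torch_minor(version: str) -> tuple:
    """Parse '1.13.0a0+936e930' style strings into (major, minor)."""
    core = version.split("+")[0]
    major, minor = core.split(".")[:2]
    return int(major), int("".join(ch for ch in minor if ch.isdigit()))

def export_known_good(version: str) -> bool:
    """True if this torch version was reported working for TRT export."""
    return torch_minor(version) < (1, 12)

print(export_known_good("1.11.0"))            # True
print(export_known_good("1.13.0a0+936e930"))  # False
```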
I also encountered this problem. I think it's the PyTorch version.
@mcagricaliskan yes, it looks like you are correct: TRT export in Docker is broken. It appears to be related to the PyTorch version, and downgrading appears to resolve the issue. I'm not sure what other solution there is, unfortunately.
EDIT: TODO: TRT export in Docker crashes due to torch 1.13
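If the torch 1.13 hypothesis above holds, one possible workaround (an assumption based on this thread, not a confirmed fix) is to pin a known-good PyTorch inside the container before exporting:

```shell
# Assumption: torch >= 1.12 triggers the export failure, per the
# reports above. Pin torch 1.11 with its matching torchvision 0.12,
# then re-run the export.
pip install "torch==1.11.0" "torchvision==0.12.0"
python export.py --weights yolov5s.pt --include engine --device 0  # export
```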
I am also facing the same error when I run trtexec to convert from ONNX to TRT. Please let me know how to resolve it for the version in which the error occurs.
@anjineyulutv I will investigate the issue further and get back to you with a solution for the version in which the error occurs. Thank you for your patience.
Search before asking
Question
Hello Everyone,
These days I am trying to run YOLOv5 with TensorRT on Docker.
I didn't install TensorRT on my Ubuntu machine because I want to run YOLOv5 in Docker. I tried the nvcr.io/nvidia/pytorch:22.06-py3 and ultralytics/yolov5 base images. I successfully ran yolov5m in a Docker container, but I need more performance because I want to run a large number of video feeds with YOLOv5, so I decided to try to reach TensorRT YOLOv5 speed.
My graphic card is: RTX 3060
My Question: Which TensorRT version is correct for converting yolov5 model to tensorrt model and running it on docker container?
Why am I asking this: to use TensorRT, I tried to convert the YOLO model to a TensorRT model. I used the standard scripts from THIS COLAB in my Docker container. Every time I tried, I got the same error, which is:
TensorRT version of the container: TensorRT 8.2.5.1...
I used these commands:
After my research I tried to run these same commands on Google Colab. I used the YOLOv5 Tutorial and it worked; I got a yolov5m.engine file.
TensorRT version on Colab: TensorRT 8.4.1.5...
I thought I had succeeded. The conclusion I drew from this is that the reason I can't convert is related to the TensorRT version. Please correct me if I am wrong.
But when I tried to run the yolov5m.engine model in the yolov5 container, I got an error about the engine version.
The result I have deduced from this is that I cannot run the yolov5.engine model with a lower TensorRT version, since I performed the conversion with a higher TensorRT version. Please correct me if I am wrong.
For now, the nvcr.io/nvidia/pytorch:XX.XX-py3 and ultralytics/yolov5 images don't have a TensorRT version higher than 8.2.5.1. So for these reasons I thought I needed to find the correct version for converting and deploying, or another way to run TensorRT. Please correct me if I am wrong.
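The mismatch described above can be expressed as a small check. TensorRT engines are generally only portable to the same TensorRT release they were built with; comparing major.minor here is a conservative approximation, and the helper name is made up for illustration:

```python
# Sketch: an engine serialized with one TRT version generally cannot
# be deserialized by a different (especially older) runtime. Here we
# conservatively require the same major.minor release.
def engine_runs_on(build_ver: str, runtime_ver: str) -> bool:
    build = tuple(int(x) for x in build_ver.split(".")[:2])
    runtime = tuple(int(x) for x in runtime_ver.split(".")[:2])
    return build == runtime

# Colab built the engine with TRT 8.4.x; the container runtime is 8.2.x:
print(engine_runs_on("8.4.1.5", "8.2.5.1"))  # False
```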
Additional
No response