ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Failed to export a trained YOLOv5 model from PyTorch to TensorRT format #6707

Closed xscapex closed 2 years ago

xscapex commented 2 years ago

Question

Hi ultralytics,

I'm trying to export a trained YOLOv5 model from PyTorch to TensorRT format by following the YOLOv5 export tutorial.

I can successfully export to a .onnx file, but the export to a .engine file fails with 'GPU error during getBestTactic: Conv_3 : an illegal memory access was encountered'.

I noticed that nvidia-tensorrt was updated recently and am not sure whether that is the problem. Do you have a recommended version of nvidia-tensorrt?

Code:

!git clone https://github.com/ultralytics/yolov5  # clone
%cd yolov5
%pip install -qr requirements.txt  # install

import torch
from yolov5 import utils
display = utils.notebook_init()  # checks

!pip install -U nvidia-tensorrt --index-url https://pypi.ngc.nvidia.com  # install

!python export.py --weights yolov5s.pt --include engine --imgsz 640 640 --device 0  # export

Logs:

Downloading https://ultralytics.com/assets/Arial.ttf to /root/.config/Ultralytics/Arial.ttf...
export: data=data/coco128.yaml, weights=['yolov5s.pt'], imgsz=[640, 640], batch_size=1, device=0, half=False, inplace=False, train=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=12, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['engine']
YOLOv5 🚀 v6.0-273-g4de8b24 torch 1.10.0+cu111 CUDA:0 (Tesla K80, 11441MiB)

Downloading https://github.com/ultralytics/yolov5/releases/download/v6.0/yolov5s.pt to yolov5s.pt...
100% 14.0M/14.0M [00:00<00:00, 113MB/s] 

Fusing layers... 
Model Summary: 213 layers, 7225885 parameters, 0 gradients

PyTorch: starting from yolov5s.pt with output shape (1, 25200, 85) (14.7 MB)
requirements: tensorrt not found and is required by YOLOv5, attempting auto-update...
Collecting tensorrt
  Downloading tensorrt-0.0.1.tar.gz (714 bytes)
Building wheels for collected packages: tensorrt
  Building wheel for tensorrt (setup.py): started
  Building wheel for tensorrt (setup.py): finished with status 'done'
  Created wheel for tensorrt: filename=tensorrt-0.0.1-py3-none-any.whl size=1154 sha256=350c715cdab706910642d458c43892a10c908af8445d601d3e191ebf1d9412f2
  Stored in directory: /root/.cache/pip/wheels/8a/78/48/f3d96950a3a858998e81705789cdfd98298a900c264baf2b5f
Successfully built tensorrt
Installing collected packages: tensorrt
Successfully installed tensorrt-0.0.1

requirements: 1 package updated per ['tensorrt']
requirements: ⚠️ Restart runtime or rerun command for updates to take effect

requirements: onnx not found and is required by YOLOv5, attempting auto-update...
Collecting onnx
  Downloading onnx-1.11.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (12.8 MB)
Requirement already satisfied: numpy>=1.16.6 in /usr/local/lib/python3.7/dist-packages (from onnx) (1.21.5)
Requirement already satisfied: protobuf>=3.12.2 in /usr/local/lib/python3.7/dist-packages (from onnx) (3.17.3)
Requirement already satisfied: typing-extensions>=3.6.2.1 in /usr/local/lib/python3.7/dist-packages (from onnx) (3.10.0.2)
Requirement already satisfied: six>=1.9 in /usr/local/lib/python3.7/dist-packages (from protobuf>=3.12.2->onnx) (1.15.0)
Installing collected packages: onnx
Successfully installed onnx-1.11.0

requirements: 1 package updated per ['onnx']
requirements: ⚠️ Restart runtime or rerun command for updates to take effect

ONNX: starting export with onnx 1.11.0...
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
ONNX: export success, saved as yolov5s.onnx (29.3 MB)

TensorRT: starting export with TensorRT 8.2.3.0...
[02/20/2022-05:01:26] [TRT] [I] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 0, GPU 740 (MiB)
[02/20/2022-05:01:27] [TRT] [I] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 0 MiB, GPU 740 MiB
[02/20/2022-05:01:27] [TRT] [I] [MemUsageSnapshot] End constructing builder kernel library: CPU 0 MiB, GPU 740 MiB
[02/20/2022-05:01:27] [TRT] [I] ----------------------------------------------------------------
[02/20/2022-05:01:27] [TRT] [I] Input filename:   yolov5s.onnx
[02/20/2022-05:01:27] [TRT] [I] ONNX IR version:  0.0.7
[02/20/2022-05:01:27] [TRT] [I] Opset version:    13
[02/20/2022-05:01:27] [TRT] [I] Producer name:    pytorch
[02/20/2022-05:01:27] [TRT] [I] Producer version: 1.10
[02/20/2022-05:01:27] [TRT] [I] Domain:           
[02/20/2022-05:01:27] [TRT] [I] Model version:    0
[02/20/2022-05:01:27] [TRT] [I] Doc string:       
[02/20/2022-05:01:27] [TRT] [I] ----------------------------------------------------------------
[02/20/2022-05:01:27] [TRT] [W] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/20/2022-05:01:27] [TRT] [W] onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[02/20/2022-05:01:27] [TRT] [W] onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[02/20/2022-05:01:27] [TRT] [W] onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
TensorRT: Network Description:
TensorRT:   input "images" with shape (1, 3, 640, 640) and dtype DataType.FLOAT
TensorRT:   output "output" with shape (1, 25200, 85) and dtype DataType.FLOAT
TensorRT:   output "350" with shape (1, 3, 80, 80, 85) and dtype DataType.FLOAT
TensorRT:   output "416" with shape (1, 3, 40, 40, 85) and dtype DataType.FLOAT
TensorRT:   output "482" with shape (1, 3, 20, 20, 85) and dtype DataType.FLOAT
TensorRT: building FP32 engine in yolov5s.engine
export.py:240: DeprecationWarning: Use build_serialized_network instead.
  with builder.build_engine(network, config) as engine, open(f, 'wb') as t:
[02/20/2022-05:01:28] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +73, now: CPU 0, GPU 813 (MiB)
[02/20/2022-05:01:28] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +34, now: CPU 0, GPU 847 (MiB)
[02/20/2022-05:01:28] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[02/20/2022-05:01:35] [TRT] [W] GPU error during getBestTactic: Conv_3 : an illegal memory access was encountered
[02/20/2022-05:01:35] [TRT] [E] 1: [resizingAllocator.cpp::deallocate::100] Error Code 1: Cuda Runtime (an illegal memory access was encountered)
[02/20/2022-05:01:35] [TRT] [E] 1: [virtualMemoryBuffer.cpp::~StdVirtualMemoryBufferImpl::121] Error Code 1: Cuda Runtime (an illegal memory access was encountered)
[02/20/2022-05:01:35] [TRT] [E] 10: [optimizer.cpp::computeCosts::2011] Error Code 10: Internal Error (Could not find any implementation for node Conv_3.)

TensorRT: export failure: __enter__

Many thanks.

Additional

YOLOv5: v6.0
OS: Ubuntu 18.04
Python: 3.7.12
TensorRT: 8.2.3.0

colab link with logs: https://colab.research.google.com/drive/1kCxK0w95_rELdugtJuNtDg_0OxfYLMjS?usp=sharing
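A note for readers hitting the same wall: the closing `export failure: __enter__` message appears because `builder.build_engine()` returns `None` when the build fails, so the `with ... as engine:` line crashes on `None.__enter__` instead of surfacing the real error. A minimal sketch of the newer, non-deprecated `build_serialized_network` path (TensorRT 8.x API; the helper name is mine, and the import is guarded so the sketch is a no-op where tensorrt is not installed):

```python
def build_engine_sketch(onnx_path: str, engine_path: str) -> bool:
    """Sketch only: build a serialized TensorRT engine from an ONNX file,
    failing loudly instead of crashing on `None.__enter__`."""
    try:
        import tensorrt as trt
    except ImportError:
        return False  # tensorrt not installed; nothing to do

    logger = trt.Logger(trt.Logger.INFO)
    builder = trt.Builder(logger)
    # ONNX parsing requires an explicit-batch network definition.
    flag = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flag)
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("ONNX parse failed")

    config = builder.create_builder_config()
    config.max_workspace_size = 4 << 30  # 4 GB, matching workspace=4 above

    # Returns None when the build fails (e.g. no implementation for a node),
    # so check explicitly rather than using it as a context manager.
    serialized = builder.build_serialized_network(network, config)
    if serialized is None:
        raise RuntimeError("TensorRT engine build failed")
    with open(engine_path, "wb") as f:
        f.write(serialized)
    return True
```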

github-actions[bot] commented 2 years ago

👋 Hello @xscapex, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email support@ultralytics.com.

Requirements

Python>=3.7.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled).

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher commented 2 years ago

@xscapex TRT export code is shown in the notebook Appendix section. I just tested this and everything works correctly:

!pip install -U nvidia-tensorrt --index-url https://pypi.ngc.nvidia.com  # install
!python export.py --weights yolov5s.pt --include engine --imgsz 640 640 --device 0  # export
!python detect.py --weights yolov5s.engine --imgsz 640 640 --device 0  # inference
(Screenshot: successful export and inference, 2022-02-20)
dhkdnduq commented 2 years ago

I reinstalled nvidia-tensorrt and the problem disappeared. I think it was probably caused by the package not being installed correctly.

xscapex commented 2 years ago

@glenn-jocher

Thank you for the prompt reply. Since Colab assigns different GPUs, the export may work on some machines and fail on others.

For someone who has the same problem on Colab, try this:

!nvidia-smi

If your GPU is a Tesla K80, install nvidia-tensorrt version 8.0.3.4 and the export will succeed:

!pip install nvidia-tensorrt==8.0.3.4 --index-url https://pypi.ngc.nvidia.com

Or simply switch the GPU to a Tesla T4; that also works for me.
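The workaround can be condensed into a tiny sketch (the helper name and the mapping are mine, based only on the results reported in this thread):

```python
def recommended_tensorrt(gpu_name: str) -> str:
    """Hypothetical helper: pick a nvidia-tensorrt pip requirement for a
    Colab GPU. Per this thread, the Kepler-era Tesla K80 failed with
    TensorRT 8.2.x but worked with 8.0.3.4, while the Tesla T4 worked
    with the latest wheel."""
    # In Colab, gpu_name can be read with `!nvidia-smi` or
    # torch.cuda.get_device_name(0).
    if "K80" in gpu_name:
        return "nvidia-tensorrt==8.0.3.4"
    return "nvidia-tensorrt"  # latest release

print(recommended_tensorrt("Tesla K80"))  # -> nvidia-tensorrt==8.0.3.4
print(recommended_tensorrt("Tesla T4"))   # -> nvidia-tensorrt
```

You would then install the returned requirement with `pip install <requirement> --index-url https://pypi.ngc.nvidia.com`, as in the commands above.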