SHIDi233 opened 3 months ago
- [x] Export the yolov10 model to ONNX.
```bash
git clone https://github.com/THU-MIG/yolov10
cd yolov10
conda create -n yolov10 python=3.9
conda activate yolov10
pip install -r requirements.txt
pip install -e .
```
Modify `ultralytics/engine/exporter.py` as follows:

```python
# line 369
output_names = ["output0", "output1"] if isinstance(self.model, SegmentationModel) else ["output"]
dynamic = True
if dynamic:
    dynamic = {"images": {0: "batch"}}
    if isinstance(self.model, SegmentationModel):
        dynamic["output0"] = {0: "batch", 2: "anchors"}
        dynamic["output1"] = {0: "batch", 2: "mask_height", 3: "mask_width"}
    elif isinstance(self.model, DetectionModel):
        dynamic["output"] = {0: "batch"}
```
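The patch above builds the dynamic-axes mapping that the exporter passes to `torch.onnx.export`. The selection logic can be checked in isolation; here is a minimal sketch where `model_kind` is a stand-in string, since `SegmentationModel`/`DetectionModel` are ultralytics-internal classes:

```python
def build_dynamic_axes(model_kind):
    """Return the dynamic-axes mapping handed to torch.onnx.export.

    model_kind ("segmentation" / "detection") is a stand-in for the
    isinstance() checks against the ultralytics model classes.
    """
    dynamic = {"images": {0: "batch"}}  # input batch dimension is dynamic
    if model_kind == "segmentation":
        dynamic["output0"] = {0: "batch", 2: "anchors"}
        dynamic["output1"] = {0: "batch", 2: "mask_height", 3: "mask_width"}
    elif model_kind == "detection":
        dynamic["output"] = {0: "batch"}
    return dynamic

print(build_dynamic_axes("detection"))
# {'images': {0: 'batch'}, 'output': {0: 'batch'}}
```

For a plain detection model only the batch dimension is dynamic; segmentation adds the anchor count and the mask height/width of the prototype output.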
Export the ONNX model. There is an unknown problem with downloading the weights directly, so use this Python script to download your model:
```python
# download.py -- download YOLOv10 weights from the official release
import argparse
from urllib.request import urlretrieve

parser = argparse.ArgumentParser(description="Download a YOLOv10 checkpoint.")
parser.add_argument(
    "--model",
    choices=["yolov10n", "yolov10s", "yolov10m", "yolov10b", "yolov10l", "yolov10x"],
    default="yolov10n",
    help="Model to download",
)
args = parser.parse_args()

def download_model(model):
    url = "https://github.com/THU-MIG/yolov10/releases/download/v1.1/" + model + ".pt"
    # Download the checkpoint using urllib.
    print("Downloading the model...")
    urlretrieve(url, model + ".pt")
    print("Model downloaded successfully!")

download_model(args.model)
```
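If the plain `urlretrieve` call fails on a flaky connection, a retry wrapper along these lines may help (a sketch; the attempt count and backoff are arbitrary choices, not part of the script above):

```python
import time
from urllib.error import URLError
from urllib.request import urlretrieve

def download_with_retry(url, dest, attempts=3, backoff=2.0):
    """Download url to dest, retrying on transient network errors."""
    for i in range(attempts):
        try:
            urlretrieve(url, dest)
            return dest
        except URLError as exc:
            if i == attempts - 1:
                raise  # out of attempts; surface the error
            print(f"Download failed ({exc}); retrying in {backoff:.0f}s...")
            time.sleep(backoff)
```

Drop it into `download_model` in place of the bare `urlretrieve(url, model + ".pt")` call if downloads keep timing out.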
Use this command to download:

```bash
python download.py --model yolov10n
```
Once downloaded, convert it to ONNX:

```bash
yolo export model=yolov10n.pt format=onnx opset=13 simplify
```
For the CUDA / cuDNN / TensorRT installation, refer to: https://github.com/l-sf/Notes/blob/main/notes/Ubuntu20.04_install_tutorials.md#%E4%BA%94cuda--cudnn--tensorrt-install
```bash
cd Linfer/workspace
# edit the onnx path in this script first
bash compile_engine.sh
```
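`compile_engine.sh` has to be edited by hand to point at your exported ONNX file. If you want to patch it from Python instead, a sketch like this works; note the regex pattern, the example command line, and the `trtexec` flags shown here are assumptions for illustration, not taken from Linfer:

```python
import re

def set_onnx_path(script_text, new_path):
    """Replace any path-like token ending in .onnx with new_path.

    Assumes the shell script references the model as a bare *.onnx token;
    the character class deliberately excludes '=' so flag prefixes survive.
    """
    return re.sub(r"[\w./-]+\.onnx\b", new_path, script_text)

# Hypothetical example line; the real script's contents may differ.
patched = set_onnx_path("trtexec --onnx=yolov8n.onnx --saveEngine=model.engine",
                        "yolov10n.onnx")
print(patched)  # trtexec --onnx=yolov10n.onnx --saveEngine=model.engine
```

Read the script, pass its text through `set_onnx_path`, and write it back before running `bash compile_engine.sh`.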
Build:

```bash
cd Linfer
mkdir build && cd build
cmake .. && make -j4
```
Run:

```bash
cd Linfer/workspace
./pro
```
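To check whether the TensorRT engine actually speeds up detection versus the PyTorch baseline, a simple wall-clock helper can wrap whichever inference call you test (a generic sketch, not part of Linfer or ultralytics):

```python
import time

def benchmark(fn, warmup=3, iters=20):
    """Average wall-clock time of fn() in milliseconds, after warmup calls."""
    for _ in range(warmup):
        fn()  # warm caches / lazy initialization before timing
    t0 = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - t0) / iters * 1000.0
```

Call it once with a lambda around the PyTorch forward pass and once around the TensorRT invocation, feeding both the same input, and compare the returned milliseconds.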
Attempting to use YOLOv10 and integrate it with TensorRT. I have now found a repository (https://github.com/l-sf/Linfer) that uses C++ and TensorRT to speed up detection. This afternoon I'll complete and test it.