Closed: GitEasonXu closed this issue 2 years ago.
Please upload the error message. The IPython notebook doesn't include the model weights or environment settings. Could you please tell us the error message?
Here are the detailed steps I followed when trying to run YOLOv6-Dynamic-Batch-onnxruntime.ipynb.
commit ID: e302cc815d44fc99aa7c7eff36bc2a42a3901875
weights/yolov6s.pt
But I get the following error:
usage: export_onnx.py [-h] [--weights WEIGHTS]
[--img-size IMG_SIZE [IMG_SIZE ...]]
[--batch-size BATCH_SIZE] [--half] [--inplace]
[--simplify] [--dynamic-batch] [--end2end]
[--trt-version TRT_VERSION] [--ort] [--with-preprocess]
[--topk-all TOPK_ALL] [--iou-thres IOU_THRES]
[--conf-thres CONF_THRES] [--device DEVICE]
export_onnx.py: error: unrecognized arguments: --max-wh 7680
This is obviously due to a parameter mismatch. You can easily reproduce this problem.
Sorry for the inconvenience; I have updated the document. In the new version of the code, you no longer need to pass the --max-wh parameter.
Thank you so much, but there are still some problems.
!python deploy/ONNX/export_onnx.py \
--weights weights/yolov6s.pt \
--end2end --simplify \
--topk-all 100 \
--iou-thres 0.65 \
--conf-thres 0.35 \
--img-size 640 640 \
--dynamic-batch
This command successfully exports the ONNX model. Export log details:
Namespace(batch_size=1, conf_thres=0.35, device='0', dynamic_batch=True, end2end=True, half=False, img_size=[640, 640], inplace=False, iou_thres=0.65, ort=False, simplify=True, topk_all=100, trt_version=8, weights='weights/yolov6s.pt', with_preprocess=False)
===================
End2End(
(model): Model(
(backbone): EfficientRep(
(stem): RepVGGBlock(
(nonlinearity): ReLU(inplace=True)
(se): Identity()
(rbr_reparam): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
)
(ERBlock_2): Sequential(
(0): RepVGGBlock(
(nonlinearity): ReLU(inplace=True)
(se): Identity()
(rbr_reparam): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
)
(1): RepBlock(
(conv1): RepVGGBlock(
(nonlinearity): ReLU(inplace=True)
(se): Identity()
(rbr_reparam): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(block): Sequential(
(0): RepVGGBlock(
(nonlinearity): ReLU(inplace=True)
...
)
(end2end): ONNX_TRT8()
)
===================
Loading checkpoint from weights/yolov6s.pt
Fusing model...
c:\Users\Len.Xu\.conda\envs\pytorch\lib\site-packages\torch\functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ..\aten\src\ATen\native\TensorShape.cpp:2157.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Starting to export ONNX...
d:\Project\ChipOne\YOLOv6\yolov6\assigners\anchor_generator.py:12: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
for i, stride in enumerate(fpn_strides):
WARNING: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
Starting to simplify ONNX...
Simplifier failure: D:\a\onnx-simplifier\onnx-simplifier\third_party\onnx-optimizer\third_party\onnx\onnx/common/ir.h:527: input: Assertion `inputs_.size() == 1` failed.
ONNX export success, saved as weights/yolov6s.onnx
...
You can export tensorrt engine use trtexec tools.
Command is:
trtexec --onnx=weights/yolov6s.onnx --saveEngine=weights/yolov6s.engine --minShapes=images:1x3x640x640 --optShapes=images:16x3x640x640 --maxShapes=images:32x3x640x640 --shapes=images:16x3x640x640
But when I run this cell, session = ort.InferenceSession(w, providers=providers), I get the following error:
C:\Users\Len.Xu\AppData\Roaming\Python\Python38\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:54: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names.Available providers: 'CPUExecutionProvider'
warnings.warn(
---------------------------------------------------------------------------
Fail Traceback (most recent call last)
Cell In [11], line 2
1 providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] if cuda else ['CPUExecutionProvider']
----> 2 session = ort.InferenceSession(w, providers=providers)
File ~\AppData\Roaming\Python\Python38\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:347, in InferenceSession.__init__(self, path_or_bytes, sess_options, providers, provider_options, **kwargs)
344 disabled_optimizers = kwargs["disabled_optimizers"] if "disabled_optimizers" in kwargs else None
346 try:
--> 347 self._create_inference_session(providers, provider_options, disabled_optimizers)
348 except ValueError:
349 if self._enable_fallback:
File ~\AppData\Roaming\Python\Python38\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:384, in InferenceSession._create_inference_session(self, providers, provider_options, disabled_optimizers)
382 session_options = self._sess_options if self._sess_options else C.get_default_session_options()
383 if self._model_path:
--> 384 sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
385 else:
386 sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from weights/yolov6s.onnx failed:Fatal error: TRT:EfficientNMS_TRT(-1) is not a registered function/op
So I referred to the README.md to try to fix this problem, using this command:
!python deploy/ONNX/export_onnx.py \
--weights weights/yolov6s.pt \
--end2end --simplify \
--topk-all 100 \
--iou-thres 0.65 \
--conf-thres 0.35 \
--img-size 640 640 \
--dynamic-batch \
--ort
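As a quick sanity check, the exported model's output names can be listed with onnxruntime. A minimal sketch, assuming the model was saved to weights/yolov6s.onnx as above:

import onnxruntime as ort

# Inspect the graph outputs on CPU; the path is the one produced by the export above
session = ort.InferenceSession('weights/yolov6s.onnx', providers=['CPUExecutionProvider'])
print([o.name for o in session.get_outputs()])
# with --ort this should print ['num_dets', 'det_boxes', 'det_scores', 'det_classes']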
So far, so good, but an error occurred when parsing the results.
for i,(batch_id,x0,y0,x1,y1,cls_id,score) in enumerate(out):
    if batch_id >= 5:
        break
    image = origin_RGB[int(batch_id)]
    ratio,dwdh = resize_data[int(batch_id)][1:]
    box = np.array([x0,y0,x1,y1])
    box -= np.array(dwdh*2)
    box /= ratio
    box = box.round().astype(np.int32).tolist()
    cls_id = int(cls_id)
    score = round(float(score),3)
    name = names[cls_id]
    color = colors[name]
    name += ' '+str(score)
    cv2.rectangle(image,box[:2],box[2:],color,2)
    cv2.putText(image,name,(box[0], box[1] - 2),cv2.FONT_HERSHEY_SIMPLEX,0.75,[225, 255, 255],thickness=2)
Error details:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [31], line 1
----> 1 for i,(batch_id,x0,y0,x1,y1,cls_id,score) in enumerate(out):
2 if batch_id >= 5:
3 break
ValueError: not enough values to unpack (expected 7, got 1)
If you pass the --ort flag when converting the ONNX model, the outputs of the ONNX model are ['num_dets', 'det_boxes', 'det_scores', 'det_classes']. (Without --ort, the --end2end export inserts the TensorRT-only EfficientNMS_TRT plugin node, which onnxruntime cannot load; with --ort, the NMS step is exported with standard ONNX operators instead.) When you run out = session.run(outname, {'images': im}), you can get the detection bboxes from out.
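For example, a minimal sketch of unpacking those four outputs, assuming im is the preprocessed batch and session/outname are set up as in the notebook:

num_dets, det_boxes, det_scores, det_classes = session.run(outname, {'images': im})
n = int(num_dets[0][0])        # number of valid detections for the first image
boxes = det_boxes[0][:n]       # (n, 4) xyxy boxes in letterboxed-image coordinates
scores = det_scores[0][:n]     # (n,) confidence scores
classes = det_classes[0][:n]   # (n,) class ids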
We will fix our documentation later.
If you have any other problems, you are welcome to join our WeChat group to discuss them.
I have fixed this problem. There are three things that need to be modified in YOLOv6-Dynamic-Batch-onnxruntime.ipynb.
First, use the following command to export the end2end ONNX model for onnxruntime:
!python export_onnx.py --weights weights/yolov6s.pt \
--saved weights/yolov6s_dynamic.onnx \
--end2end --simplify \
--topk-all 100 \
--iou-thres 0.65 \
--conf-thres 0.35 \
--img-size 640 640 \
--dynamic-batch \
--ort
Second, change the letterbox function:
def letterbox(im, new_shape=(640, 640), color=(114, 114, 114)):
    # Resize and pad image while meeting stride-multiple constraints
    shape = im.shape[:2]  # current shape [height, width]
    if isinstance(new_shape, int):
        new_shape = (new_shape, new_shape)
    # Scale ratio (new / old)
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
    # Compute padding
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]  # wh padding
    dw /= 2  # divide padding into 2 sides
    dh /= 2
    im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add border
    return im, r, (dw, dh)
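For reference, the preprocessing loop that fills resize_data could then look like this (a sketch, assuming origin_RGB is the list of RGB images loaded earlier in the notebook):

import numpy as np

resize_data = []
for img in origin_RGB:
    padded, ratio, dwdh = letterbox(img)  # resize + pad to 640x640
    tensor = padded.transpose((2, 0, 1))  # HWC -> CHW
    tensor = np.expand_dims(tensor, 0)    # add batch dimension
    tensor = np.ascontiguousarray(tensor).astype(np.float32) / 255.0  # normalize to [0, 1]
    resize_data.append((tensor, ratio, dwdh))
im = np.concatenate([d[0] for d in resize_data])  # stack into one dynamic batch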
Third, use the following code to parse the inference results:
for i in range(out[0].shape[0]):
    obj_num = out[0][i]  # num_dets: number of valid detections for image i
    boxes = out[1][i]    # det_boxes
    scores = out[2][i]   # det_scores
    cls_id = out[3][i]   # det_classes
    image = origin_RGB[i]
    img_h, img_w = image.shape[:2]
    ratio, dwdh = resize_data[i][1:]
    for num in range(obj_num[0]):
        box = boxes[num]
        score = round(float(scores[num]), 3)
        obj_name = names[int(cls_id[num])]
        box -= np.array(dwdh * 2)  # undo letterbox padding: (dw, dh, dw, dh)
        box /= ratio               # undo letterbox scaling
        box = box.round().astype(np.int32).tolist()
        x1 = max(0, box[0])        # clamp to the original image bounds
        y1 = max(0, box[1])
        x2 = min(img_w, box[2])
        y2 = min(img_h, box[3])
        color = colors[obj_name]
        obj_name += ' ' + str(score)
        cv2.rectangle(image, (x1, y1), (x2, y2), color, 2)
        cv2.putText(image, obj_name, (box[0], box[1] - 2), cv2.FONT_HERSHEY_SIMPLEX, 0.75, [225, 255, 255], thickness=2)
Before Asking
[X] I have read the README carefully.
[X] I want to train my custom dataset, and I have read the tutorials for training custom data carefully and organized my dataset correctly. (FYI: We recommend you apply the config files of xx_finetune.py.)
[X] I have pulled the latest code of the main branch to run again and the problem still exists.
Search before asking
Question
YOLOv6-Dynamic-Batch-onnxruntime.ipynb can't run successfully.
The parameters are wrong!! Can you test it before you release? It's full of bugs!!
Additional
No response