[Closed] Fricodelco closed this issue 9 months ago
I got the same error, and it seems to be related to the Softmax operator in the DFL layer, support for which was just added for RK3588 in toolkit v1.6.0. (The error disappears when the softmax operator is removed.) Is there an alternative function for softmax, or could a maintainer check whether it's an underlying logic bug?
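For context, here is a minimal NumPy sketch of what the DFL head computes; the names, shapes, and reg_max value are illustrative, not the toolkit's actual implementation. The softmax over the reg_max bins is the operator being discussed.

```python
import numpy as np

def dfl_decode(x, reg_max=16):
    """Toy DFL decode: softmax over distance bins, then expected value.

    x: raw logits of shape (batch, 4 * reg_max, num_anchors).
    Returns per-side distances of shape (batch, 4, num_anchors).
    """
    b, c, a = x.shape
    x = x.reshape(b, 4, reg_max, a)
    # Softmax over the reg_max bins -- the op reported to break RK3588 conversion
    e = np.exp(x - x.max(axis=2, keepdims=True))
    p = e / e.sum(axis=2, keepdims=True)
    # Expected distance = probability-weighted sum of bin indices
    bins = np.arange(reg_max, dtype=np.float32)
    return np.einsum('bkra,r->bka', p, bins)

logits = np.random.randn(1, 64, 10).astype(np.float32)
dists = dfl_decode(logits)
```

Removing the softmax, as the comment describes, means folding this decode into post-processing on the CPU instead of the NPU graph.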
This is weird, because when I switched to yolov5 everything works fine for me. But this is really a critical bug, because yolov8 is the state of the art now.
I'm also running into this issue when converting a yolov8n coco 2017 model.
+1
yolov8s onnx to rknn
(base) zx@zx-virtual-machine:~/RKNN/rknn_model_zoo/examples/yolov8_seg/python$ python convert.py ../model/yolov8s-seg.onnx rk3588
W init: rknn-toolkit2 version: 1.6.0+81f21f4d
--> Config model
done
--> Loading model
W load_onnx: It is recommended onnx opset 19, but your onnx model opset is 17!
W load_onnx: Model converted from pytorch, 'opset_version' should be set 19 in torch.onnx.export for successful convert!
Loading : 100%|███████████████████████████████████████████████| 178/178 [00:00<00:00, 173021.12it/s]
done
--> Building model
GraphPreparing : 100%|██████████████████████████████████████████| 206/206 [00:00<00:00, 2317.03it/s]
Quantizating : 100%|██████████████████████████████████████████████| 206/206 [00:05<00:00, 35.04it/s]
W build: The default input dtype of 'images' is changed from 'float32' to 'int8' in rknn model for performance! Please take care of this change when deploy rknn model with Runtime API!
W build: The default output dtype of 'output0' is changed from 'float32' to 'int8' in rknn model for performance! Please take care of this change when deploy rknn model with Runtime API!
W build: The default output dtype of 'output1' is changed from 'float32' to 'int8' in rknn model for performance! Please take care of this change when deploy rknn model with Runtime API!
E RKNN: [09:01:08.041] failed to config argb mode layer!
Aborted (core dumped)
I tried opset=12; converting to rknn succeeds, but it errors at runtime.
load lable ./model/coco_80_labels_list.txt
E RKNN: [10:41:39.851] 6, 1
E RKNN: [10:41:39.851] Invalid RKNN model version 6
E RKNN: [10:41:39.851] rknn_init, load model failed!
rknn_init fail! ret=-1
init_yolov8_seg_model fail! ret=-1 model_path=model/yolov8-seg.rknn
I tried updating pytorch and exporting with opset=19. It still returns the original error (failed to config argb mode layer!).
The answer is here: https://github.com/airockchip/ultralytics_yolov8. ONNX exported using that repository works.
Thank you! I'll try this and report back.
This solution works for me, closing for now!
How did it work? Can you offer more detailed instructions?
You need to download this fork of yolov8: https://github.com/airockchip/ultralytics_yolov8. Then convert your .pt model to ONNX using this fork (read the whole README), and then convert the ONNX to RKNN using the model zoo.
The answer is here. ONNX exported using this repository works.
Can this also fix the problem for yolov8-pose models?
Hi, I haven't tried a pose model. But while we're at it, may I ask: are you using pose for motion capture or for action recognition? If action recognition, what works best for detecting actions: a temporal model, or something else?
I only use pose for keypoint detection, and I compute the target's 3D pose with a PnP algorithm; there is no motion capture or action recognition.
Hi, why does this repository export a yolov8n.torchscript model instead of a yolov8n.onnx model?
Hi, when trying to export a pose model (yolov8n-pose or larger), the export fails with the following error:
RuntimeError: The size of tensor a (8400) must match the size of tensor b (6174) at non-singleton dimension 3
Does anyone have a solution?
No, I'd also like to know how to solve it.
I followed the README and it didn't work. I created a new conda env, and the following commands worked on my device:
pip install protobuf
pip install git+https://github.com/airockchip/ultralytics_yolov8.git@main
yolo export model=path_to_pt_file format=rknn opset=19
I have the same issue here.
Hello, Rockchip has modified the output layer of Yolov8 in the rknn model zoo. I think the reason is to allow better quantization to int8 and int4: the class ids and the coordinates were in the same vector but at different scales, so there was a performance drop. In any case, this modification does not require retraining the network; you can just convert your weights to ONNX using their fork of ultralytics first, then to RKNN.
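To illustrate the quantization point, here is a toy sketch (not the actual rknn quantizer) of why mixing pixel-range coordinates and 0-1 scores in one int8 tensor hurts: a single shared scale wide enough for the coordinates collapses the scores, while a per-branch scale preserves them.

```python
import numpy as np

def fake_quant_int8(x, scale):
    """Simulate symmetric int8 quantization: round to a grid, clamp, dequantize."""
    q = np.clip(np.round(x / scale), -128, 127)
    return q * scale

coords = np.array([3.7, 215.2, 640.0])   # pixel-scale box values (example numbers)
scores = np.array([0.03, 0.60, 0.97])    # probability-scale class scores

shared_scale = 640.0 / 127               # one scale must cover the coordinate range
per_branch_scale = 1.0 / 127             # scale chosen for the score branch alone

shared_err = np.abs(fake_quant_int8(scores, shared_scale) - scores).max()
split_err = np.abs(fake_quant_int8(scores, per_branch_scale) - scores).max()
```

With the shared scale, every score rounds to the same quantization step (here, zero), so the maximum error is on the order of the scores themselves; with a per-branch scale, the error stays below one part in a hundred. Splitting the output heads gives each branch its own scale, which is consistent with the modification described above.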
Have you solved it yet?
No, I have not exported Yolov8-seg on the NPU yet; it would need to be implemented in rknn-zoo. However, I did succeed in deploying it on the Mali GPU, and it was fast enough. I used TVM.
Recently I tried to export my Yolov8-seg from ONNX to RKNN for rk3588, and it broke after quantization with this error:
E RKNN: [09:47:19.149] failed to config argb mode layer!
Aborted (core dumped)
I tried different dtypes and it didn't help. I tried to quantize in two steps with hybrid_quantization_step, and it broke on step 2. I also tried converting for rk3562/66/68, and that works fine. How can I fix this?
W init: rknn-toolkit2 version: 1.6.0+81f21f4d