rockchip-linux / rknn-toolkit2


Conversion error from YOLOv8 to RKNN #184

Closed laitathei closed 1 year ago

laitathei commented 1 year ago

Platform: torch 1.13.1+cu116, onnx 1.10.0, onnxruntime 1.10.0, rknn-toolkit2 1.5.0+1fa95b5c

I am trying to convert yolov8n-seg.pt to RKNN format. As the first step, I follow the YOLOv8 official tutorial to convert it to ONNX format. Here is the code:

from ultralytics import YOLO
model = YOLO('yolov8n-seg.pt')
success = model.export(format='onnx',opset=12)

As the second step, I try to convert the ONNX file to RKNN format. Here is the code:

from rknn.api import RKNN

if __name__ == "__main__":
    ONNX_MODEL = "./yolov8n-seg.onnx"
    RKNN_MODEL = "./yolov8n-seg.rknn"
    platform = "rk3588"
    width = 640
    height = 640

    # Create RKNN object
    rknn = RKNN()

    # pre-process config: normalize input to [0, 1] via (pixel - mean) / std
    print("--> config model")
    #rknn.config(mean_values=[[0,0,0]], std_values=[[127.5, 127.5, 127.5]], target_platform=platform)
    rknn.config(mean_values=[[0,0,0]], std_values=[[255, 255, 255]], target_platform=platform)
    # Load model
    print("--> load model")
    ret = rknn.load_onnx(ONNX_MODEL)
    if ret != 0:
        print("load model failed!")
        exit(ret)
    print("Done")

    # Build model
    print("--> build model")
    ret = rknn.build(do_quantization=True, dataset="dataset.txt")
    if ret != 0:
        print("build model failed!")
        exit(ret)
    print("Done")

    # Export rknn model
    print("--> export rknn model")
    ret = rknn.export_rknn(RKNN_MODEL)
    if ret != 0:
        print("export model failed!")
        exit(ret)
    print("Done")

    rknn.release()
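For context, the dataset argument to rknn.build() is the quantization calibration list: a plain-text file with one image path per line. The file names below are hypothetical:

./calib/bus.jpg
./calib/person.jpg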

However, it pops up some warnings and errors:

W __init__: rknn-toolkit2 version: 1.5.0+1fa95b5c
--> config model
done
--> Loading model
Loading : 100%|████████████████████████████████████████████████| 175/175 [00:00<00:00, 50184.82it/s]
done
--> Building model
W build: found outlier value, this may affect quantization accuracy
const name                        abs_mean    abs_std     outlier value
model.22.cv3.1.1.conv.weight      0.12        0.18        -12.310     
Analysing : 100%|███████████████████████████████████████████████| 209/209 [00:00<00:00, 5965.44it/s]
Quantizating : 100%|█████████████████████████████████████████████| 209/209 [00:01<00:00, 174.88it/s]
W build: The default input dtype of 'images' is changed from 'float32' to 'int8' in rknn model for performance!
                       Please take care of this change when deploy rknn model with Runtime API!
W build: The default output dtype of 'output0' is changed from 'float32' to 'int8' in rknn model for performance!
                      Please take care of this change when deploy rknn model with Runtime API!
W build: The default output dtype of 'output1' is changed from 'float32' to 'int8' in rknn model for performance!
                      Please take care of this change when deploy rknn model with Runtime API!
E RKNN: [16:41:14.271] REGTASK: The bit width of field value exceeds the limit, target: v2, offset: 0x4030, shift = 0, limit: 0x1fff, value: 0x20cf
E RKNN: [16:41:14.271] REGTASK: The bit width of field value exceeds the limit, target: v2, offset: 0x4030, shift = 0, limit: 0x1fff, value: 0x20cf
E RKNN: [16:41:14.271] REGTASK: The bit width of field value exceeds the limit, target: v2, offset: 0x4030, shift = 0, limit: 0x1fff, value: 0x20cf
E RKNN: [16:41:14.271] REGTASK: The bit width of field value exceeds the limit, target: v2, offset: 0x4030, shift = 0, limit: 0x1fff, value: 0x20cf
E RKNN: [16:41:14.271] REGTASK: The bit width of field value exceeds the limit, target: v2, offset: 0x4030, shift = 0, limit: 0x1fff, value: 0x20cf
done
--> Export RKNN model
done

After that it still produces the RKNN file, but it causes errors during inference. I am using the code below to run inference:

import numpy as np
import cv2
import time
from rknnlite.api import RKNNLite

IMG_SIZE = 640

if __name__ == '__main__':
    rknn_model = 'yolov8n-seg.rknn'
    rknn_lite = RKNNLite()

    # Load the RKNN model and initialize the runtime on the NPU
    ret = rknn_lite.load_rknn(rknn_model)
    if ret != 0:
        exit(ret)
    ret = rknn_lite.init_runtime(core_mask=RKNNLite.NPU_CORE_AUTO)
    if ret != 0:
        exit(ret)

    # Prepare the input: BGR -> RGB, resized to the model input size
    img = cv2.imread("./bus.jpg")
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, dsize=(IMG_SIZE, IMG_SIZE))

    outputs = rknn_lite.inference(inputs=[img])
    print(outputs)
    rknn_lite.release()

However, the result is wrong

I RKNN: [17:07:17.914] RKNN Runtime Information: librknnrt version: 1.5.0 (e6fe0c678@2023-05-25T08:09:20)
I RKNN: [17:07:17.915] RKNN Driver Information: version: 0.8.2
I RKNN: [17:07:17.915] RKNN Model Information: version: 4, toolkit version: 1.5.0+1fa95b5c(compiler version: 1.5.0 (e6fe0c678@2023-05-25T16:15:03)), target: RKNPU v2, target platform: rk3588, framework name: ONNX, framework layout: NCHW, model inference type: static_shape
E RKNN: [17:07:24.530] failed to submit!, op id: 139, op name: Mul:/model.22/Mul_2, flags: 0x5, task start: 382, task number: 11, run task counter: 5, int status: 0, please try updating to the latest version of the toolkit2 and runtime from: https://eyun.baidu.com/s/3eTDMk6Y (PWD: rknn)
W RKNN: [17:07:24.531] Output(output0): size_with_stride larger than model origin size, if need run OutputOperator in NPU, please call rknn_create_memory using size_with_stride.
[array([[[[318.77222],
         [318.77222],
         [318.77222],
         ...,
         [318.77222],
         [318.77222],
         [318.77222]]]], dtype=float32), array([[[[ 1.3423584e-01,  2.4609905e-01,  1.5660849e-01, ...,
            3.8033491e-01,  2.9084435e-01,  4.6982548e-01],
          ...,
          [ 2.2372635e-02,  6.2643397e-01,  5.3694344e-01, ...,
            3.3558962e-01,  4.2508021e-01,  7.3829722e-01]]]],
      dtype=float32)]

Can anyone give some guidelines on how to convert yolov8n-seg.pt to RKNN format and run inference on the RK3588 platform?

Kracozebr commented 1 year ago

Any updates? I face the same error with another model:

 REGTASK: The bit width of field value exceeds the limit, target: v2, offset: 
Galaxy-Ding commented 1 year ago


Same problem for me.

qmcreeper commented 1 year ago

Maybe you can change Detect() as follows (see the sketch after this list):

  1. Replace .chunk with .split in dist2bbox().
  2. Multiplying by self.strides causes a warning "op(Mul:Mul_***) has undefined broadcast type..." during quantization, so change the 'Mul' broadcast type or the 'self.strides' dims; refer to https://github.com/rockchip-linux/rknn-toolkit2/blob/master/doc/RKNN_Compiler_Support_Operator_List_v1.5.0.pdf
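A minimal sketch of change 1, assuming the dist2bbox() helper from ultralytics (its body is reproduced here as an approximation of the upstream code): .chunk(2, dim) becomes an explicit .split():

import torch

def dist2bbox(distance, anchor_points, xywh=True, dim=-1):
    """Transform distance (ltrb) to box (xywh or xyxy)."""
    # Original: lt, rb = distance.chunk(2, dim)
    # Suggested: an explicit split, which tends to export to ONNX ops
    # that rknn-toolkit2 handles more predictably.
    lt, rb = distance.split(distance.shape[dim] // 2, dim)
    x1y1 = anchor_points - lt
    x2y2 = anchor_points + rb
    if xywh:
        c_xy = (x1y1 + x2y2) / 2
        wh = x2y2 - x1y1
        return torch.cat((c_xy, wh), dim)
    return torch.cat((x1y1, x2y2), dim)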
laitathei commented 1 year ago

@qmcreeper I tried the first step, but it is still not working for me. For the second step, I don't know how to implement it. Could you explain it clearly in Chinese? Thank you.

Galaxy-Ding commented 1 year ago

> For the second step, I don't know how to implement it...

Thanks, I will try it.

laitathei commented 1 year ago

Maybe you can contact me via WeChat. My WeChat ID is wxid_y4ihk4jup2cs22; user ID is hei.

laitathei commented 1 year ago

issue.zip

qmcreeper commented 1 year ago

Add simplify=True so that exporting silu() does not add an extra Mul. The activation-replacement approach most likely fails because of this; the Mul broadcast is still wrong. That document has detailed Mul format requirements and examples.
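Reading this as applying to the ONNX export step (an assumption; ultralytics' export() does accept a simplify flag), a sketch would be:

from ultralytics import YOLO

# Export with onnx-simplifier enabled so constant folding removes the
# extra Mul that silu() would otherwise introduce into the graph
model = YOLO('yolov8n-seg.pt')
model.export(format='onnx', opset=12, simplify=True)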

Galaxy-Ding commented 1 year ago

May I ask, have you tried rknn-toolkit (v1)? I want to convert for the RK3399Pro.

laitathei commented 1 year ago

@qmcreeper I tried adding simplify=True as rknn.export_rknn(RKNN_MODEL, simplify=True), but it still produces the Mul broadcast error. After using Netron to visualize the ONNX network structure, I found that the problem may come from a size issue: the Mul layer that causes the error has size (1, 8400). How can we change it to fit the RKNN operator requirements?

qmcreeper commented 1 year ago

> The Mul layer that causes the error has size (1, 8400). How can we change it to fit the RKNN operator requirements?

self.strides has size (1, 8400); its format is [b, n]. You need to change the dist2bbox() output to [b, c, h, w] and repeat self.strides to [h, w] or [b, c, h, w].
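A toy sketch of that reshaping, with shapes assumed from the 640x640 YOLOv8 head (8400 = 80*80 + 40*40 + 20*20 anchors); the tensor names are illustrative, not the actual ultralytics code:

import torch

# dist2bbox() output rearranged to 4-D [b, c, h, w]
dbox = torch.randn(1, 4, 1, 8400)
# self.strides is [b, n] = (1, 8400); lift it to 4-D so the Mul
# broadcasts with explicit, RKNN-friendly dimensions
strides = torch.ones(1, 8400)
strides_4d = strides.view(1, 1, 1, 8400).expand_as(dbox)
out = dbox * strides_4d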

Galaxy-Ding commented 1 year ago

> You need to change the dist2bbox() output to [b, c, h, w] and repeat self.strides to [h, w] or [b, c, h, w].

May I ask whether the DFL convolution part can be converted in rknn-toolkit2? I saw that rknn-toolkit1 reports an error on it. Of course, I have already made the changes you described above.

qmcreeper commented 1 year ago

> May I ask whether the DFL convolution part can be converted in rknn-toolkit2? I saw that rknn-toolkit1 reports an error on it.

I suggest changing everything that can be made 4-D to 4-D; on my side both toolkit1 and toolkit2 pass.

Galaxy-Ding commented 1 year ago

Could you share your ONNX? On my side it still does not pass even after changing everything to 4-D.


    def forward(self, x):
        """Concatenates and returns predicted bounding boxes and class probabilities."""
        shape = x[0].shape  # BCHW
        for i in range(self.nl):
            x[i] = torch.cat((self.cv2[i](x[i]), self.cv3[i](x[i])), 1)
        if self.training:
            return x
        elif self.dynamic or self.shape != shape:
            self.anchors, self.strides = (x.transpose(0, 1) for x in make_anchors(x, self.stride, 0.5))
            self.shape = shape

        # Keep everything 4-D: view each scale to [b, no, n_i], add a leading
        # dim, then rebuild the concatenation along the last axis via split/cat
        x_temps = [xi.view(shape[0], self.no, -1).unsqueeze(0) for xi in x]
        min_a = x_temps[1].shape[-1]
        total = sum(item.shape[-1] for item in x_temps)
        x_temps1 = x_temps[0].split(min_a, 3)
        x_temps2 = x_temps[1].split(min_a, 3)
        x_cat1 = torch.cat(x_temps1 + x_temps2, 3)
        x_cat2 = torch.cat([x_temps[2], x_temps[2]], 3)
        x_cat = torch.cat([x_cat1, x_cat2], 3).split(total, 3)[0]

        if self.export and self.format in ('saved_model', 'pb', 'tflite', 'edgetpu', 'tfjs'):  # avoid TF FlexSplitV ops
            box = x_cat[:, :self.reg_max * 4]
            cls = x_cat[:, self.reg_max * 4:]
        else:
            box, cls = x_cat.split((self.reg_max * 4, self.nc), 2)

        # 4-D Mul: expand self.strides to match the dist2bbox() output shape
        # dbox = dist2bbox(self.dfl(box), self.anchors.unsqueeze(0).unsqueeze(0), xywh=True, dim=1) * self.strides
        dbox_temp = dist2bbox(self.dfl(box), self.anchors.unsqueeze(0).unsqueeze(0), xywh=True, dim=2)
        strides_temp = self.strides.expand(dbox_temp.shape[-2], dbox_temp.shape[-1])
        dbox = dbox_temp * strides_temp

        if self.export and self.format in ('tflite', 'edgetpu'):
            # Normalize xywh with image size to mitigate quantization error of TFLite integer models as done in YOLOv5:
            # https://github.com/ultralytics/yolov5/blob/0c8de3fca4a702f8ff5c435e67f378d1fce70243/models/tf.py#L307-L309
            # See this PR for details: https://github.com/ultralytics/ultralytics/pull/1695
            img_h = shape[2] * self.stride[0]
            img_w = shape[3] * self.stride[0]
            img_size = torch.tensor([img_w, img_h, img_w, img_h], device=dbox.device).reshape(1, 4, 1)
            dbox /= img_size

        y = torch.cat((dbox, cls.sigmoid()), 2).squeeze(0)

I raised a similar question for rknn-toolkit (v1) as well: https://github.com/rockchip-linux/rknn-toolkit/issues/391#issuecomment-1663168816

I have already changed the head code quite a lot...

laitathei commented 1 year ago

@qmcreeper I am a beginner and don't know how to change (1, 8400) into 4-D (b, c, h, w) while still matching the final output size (1, 116, 8400). Could you share your Detect part? Many thanks.

Galaxy-Ding commented 1 year ago

> I suggest changing everything that can be made 4-D to 4-D; on my side both toolkit1 and toolkit2 pass.

Also, which rknn version are you on? Is your YOLOv8 the latest, or which commit id?

laitathei commented 1 year ago

I think I solved this problem:

E RKNN: [16:41:14.271] REGTASK: The bit width of field value exceeds the limit, target: v2, offset: 0x4030, shift = 0, limit: 0x1fff, value: 0x20cf
E RKNN: [16:41:14.271] REGTASK: The bit width of field value exceeds the limit, target: v2, offset: 0x4030, shift = 0, limit: 0x1fff, value: 0x20cf
E RKNN: [16:41:14.271] REGTASK: The bit width of field value exceeds the limit, target: v2, offset: 0x4030, shift = 0, limit: 0x1fff, value: 0x20cf
E RKNN: [16:41:14.271] REGTASK: The bit width of field value exceeds the limit, target: v2, offset: 0x4030, shift = 0, limit: 0x1fff, value: 0x20cf
E RKNN: [16:41:14.271] REGTASK: The bit width of field value exceeds the limit, target: v2, offset: 0x4030, shift = 0, limit: 0x1fff, value: 0x20cf

but it still does not give any segmentation result. Will this warning affect the model result?

W build: found outlier value, this may affect quantization accuracy
const name                        abs_mean    abs_std     outlier value
model.22.cv3.1.1.conv.weight      0.12        0.18        -12.310

qmcreeper commented 1 year ago

> but it still does not give any segmentation result. Will this warning affect the model result?
>
> W build: found outlier value, this may affect quantization accuracy

Use mixed-precision quantization: fp16 instead of int8.
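One hedged way to act on this in rknn-toolkit2 (a sketch, not necessarily what qmcreeper did): skip int8 quantization altogether, which makes the RK3588 runtime execute the model in float16 at some speed cost:

# Build without quantization; the model then runs in fp16 on the NPU,
# avoiding the int8 accuracy loss from the outlier weight
ret = rknn.build(do_quantization=False)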

Galaxy-Ding commented 1 year ago

> I suggest changing everything that can be made 4-D to 4-D; on my side both toolkit1 and toolkit2 pass.

My results have come out now; it should include the YOLOv8 DFL part.

These are the main places I changed, but the output result does not contain any indices.

laitathei commented 1 year ago

I think I solved this issue. Using the YOLOv8 official guide to convert to ONNX does not cause any problem; the main problem comes from the ONNX-to-RKNN step. You have to follow the hybrid quantization method to get the RKNN model. The hybrid quantization method is described in this doc: https://github.com/rockchip-linux/rknn-toolkit2/blob/master/doc/Rockchip_User_Guide_RKNN_Toolkit2_CN-1.5.0.pdf
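For reference, a minimal sketch of that hybrid quantization flow, assuming the two-step hybrid_quantization_step1/step2 API from the Toolkit2 user guide (the intermediate file names are derived from the model name, so the exact paths below are assumptions):

from rknn.api import RKNN

rknn = RKNN()
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
rknn.load_onnx('./yolov8n-seg.onnx')

# Step 1: generate an intermediate model plus a .quantization.cfg file;
# edit the cfg by hand to keep the problematic layers in float
rknn.hybrid_quantization_step1(dataset='dataset.txt')

# Step 2: rebuild from the edited config and export
rknn.hybrid_quantization_step2(model_input='./yolov8n-seg.model',
                               data_input='./yolov8n-seg.data',
                               model_quantization_cfg='./yolov8n-seg.quantization.cfg')
rknn.export_rknn('./yolov8n-seg_hybrid.rknn')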

Jeremiah0425 commented 12 months ago

> I think I solved this problem (REGTASK: The bit width of field value exceeds the limit ...), but it still does not give any segmentation result. Will this warning affect the model result?

Hello, how did you solve this problem?

Jeremiah0425 commented 12 months ago

> You have to follow the hybrid quantization method to get the RKNN model ...

I have followed the hybrid quantization method to get the RKNN, but the problem is not solved.

letessarini commented 9 months ago

Has anyone managed to solve this problem?

--> Running model for camera inference
I RKNN: [13:52:25.482] RKNN Runtime Information: librknnrt version: 1.5.2 (c6b7b351a@2023-08-23T15:28:22)
I RKNN: [13:52:25.482] RKNN Driver Information: version: 0.8.2
W RKNN: [13:52:25.482] Current driver version: 0.8.2, recommend to upgrade the driver to the new version: >= 0.8.8
I RKNN: [13:52:25.484] RKNN Model Information: version: 6, toolkit version: 1.5.2+b642f30c(compiler version: 1.5.2 (c6b7b351a@2023-08-23T07:39:01)), target: RKNPU v2, target platform: rk3588, framework name: ONNX, framework layout: NCHW, model inference type: static_shape
W RKNN: [13:52:25.707] Output(output0): size_with_stride larger than model origin size, if need run OutputOperator in NPU, please call rknn_create_memory using size_with_stride.