rockchip-linux / rknn-toolkit2


Problem with converting yolov5s.pt to onnx for rk3588 #159

Open sitnikov2020 opened 1 year ago

sitnikov2020 commented 1 year ago

I used the instructions from https://github.com/rockchip-linux/rknpu2/issues/57:

clone this repo https://github.com/airockchip/yolov5, then run $ python3 export.py --weights yolov5s.pt --rknpu "RK3588" --include "onnx"

Then I used test.py from https://github.com/rockchip-linux/rknn-toolkit2/tree/master/examples/onnx/yolov5 and got multiple fake detections: http://joxi.ru/krDjRz0hKN0472

What is a working method for converting the official yolov5s.pt to RKNN?

cvetaevvitaliy commented 1 year ago

Hi! If you want to train from the original yolov5 repository:

```bash
git clone https://github.com/ultralytics/yolov5.git  # clone
cd yolov5
pip install -r requirements.txt
```

First, you need to change the activation from SiLU to ReLU.
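A minimal sketch of that change, assuming a recent ultralytics/yolov5 checkout where the default activation lives on the `Conv` block in `models/common.py` (older checkouts set `self.act` directly in `Conv.__init__`):

```python
# models/common.py -- relevant fragment only, not the full class
import torch.nn as nn

class Conv(nn.Module):
    """Standard yolov5 convolution block: Conv2d -> BatchNorm2d -> activation."""
    # default_act = nn.SiLU()  # original default activation
    default_act = nn.ReLU()    # ReLU is better supported by the RKNN NPU
```

Train as usual after this change. Note that SiLU (x * sigmoid(x)) exports to ONNX as Sigmoid + Mul pairs, which is why an unmodified export shows mostly Sigmoid nodes in Netron.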

After training, you need to modify the outputs: cut the model at the outputs of the last three convolutions (Conv) and drop the post-processing layers that follow them.

Open your model with the Netron app (https://github.com/lutzroeder/netron). Netron is a viewer for neural network, deep learning, and machine learning models.

(screenshot: the exported model graph viewed in Netron)
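Netron can also be launched from Python; a small sketch, assuming the `netron` pip package is installed and using the weights path from the steps below:

```python
# pip install netron
import netron

# Serves the model in the browser so you can read the output names of the
# last three Conv nodes needed for the cutting step below.
netron.start("runs/train/exp/weights/best.onnx")
```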

The model can be cropped in different ways, for example:

1. Modify the ONNX model with a Python script:

```python
import argparse

import onnx


def onnx_sub():
    onnx.utils.extract_model(opt.onnx_input, opt.onnx_output, opt.model_input, opt.model_output)


def parse_opt():
    parser = argparse.ArgumentParser()
    parser.add_argument('--onnx_input', type=str, default='weights/yolov5s.onnx', help='model.onnx path(s)')
    parser.add_argument('--onnx_output', type=str, default='weights/yolov5s_sub.onnx', help='model_sub.onnx path(s)')
    parser.add_argument('--model_input', '--input', nargs='+', type=str, default=["images"], help='input_names')
    parser.add_argument('--model_output', '--output', nargs='+', type=str,
                        default=["onnx::Reshape_329", "onnx::Reshape_367", "onnx::Reshape_405"],
                        help='output_names')
    opt = parser.parse_args()
    return opt


if __name__ == "__main__":
    opt = parse_opt()
    sub = onnx_sub()
```


2. Create a new file named `onnxcut.py` and copy the above code.

Use the following command, and be sure to replace the three model outputs with the outputs of the last three convolutions (Conv) of your own model.

3. Cut the model:

```bash
python onnxcut.py --onnx_input ./runs/train/exp/weights/best.onnx --onnx_output ./runs/train/exp/weights/best_cut.onnx --model_input images --model_output onnx::Reshape_329 onnx::Reshape_367 onnx::Reshape_405
```

Example: (screenshot: the cut model in Netron)
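Note that the `onnx::Reshape_*` names above belong to one particular export; yours will differ. A small sketch for printing candidate names (assuming the three detection heads are the last three Conv nodes in graph order, which you should still confirm in Netron):

```python
import onnx

# Print the output tensor names of the last three Conv nodes; these are the
# candidate names to pass to onnxcut.py via --model_output.
model = onnx.load("runs/train/exp/weights/best.onnx")
conv_outputs = [node.output[0] for node in model.graph.node if node.op_type == "Conv"]
print(conv_outputs[-3:])
```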

4. Convert ONNX to RKNN:

```python
import argparse
import cv2
import numpy as np

from rknn.api import RKNN
import os

def convert(srcFileName, dstFilename):

    # Define Rockchip CPU:
    # NPU Type 1: RK1808, RV1109, RV1126, RK3399PRO
    # NPU type 2: RK3566, RK3568, RK3588, RK3588S
    platform = "rk3588"

    print('--> Source file name: ' + srcFileName)
    print('--> RKNN file name: ' + dstFilename)

    # Create RKNN object
    rknn = RKNN()

    # Define dataset for quantization model 
    DATASET = 'data/images/dataset.txt'

    # Config: see documentation Rockchip_Quick_Start_RKNN_SDK 
    rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform=platform)

    # Load model
    print('--> Loading model')
    ret = rknn.load_onnx(srcFileName)
    if ret != 0:
        print('load model failed!')
        exit(ret)
    print('done')

    # Build model
    print('--> Building model')
    ret = rknn.build(do_quantization=True, dataset=DATASET)
    if ret != 0:
        print('build model failed.')
        exit(ret)
    print('done')

    # Export model to rknn format for Rockchip NPU
    print('--> Export rknn model')
    ret = rknn.export_rknn(dstFilename)
    if ret != 0:
        print('Export rknn model failed!')
        return ret

    print('export done')

    rknn.release()

def main():

    parser = argparse.ArgumentParser(description='transform to rknn model')
    parser.add_argument('source_file')
    parser.add_argument('description_file')
    args = parser.parse_args()

    convert(args.source_file, args.description_file)

if __name__ == '__main__':
    main()
```

Create a new file named `onnx2rknn.py` and copy the above code.
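Note that `rknn.build(do_quantization=True, ...)` reads the calibration images listed in the `DATASET` file (`data/images/dataset.txt`), a plain text file with one image path per line. A minimal sketch for generating it, assuming your calibration images live in `data/images/train` (adjust the path and extension to your dataset):

```python
# Sketch: generate data/images/dataset.txt with one calibration image path per line.
from pathlib import Path

image_dir = Path("data/images/train")  # assumed location of calibration images
paths = sorted(str(p) for p in image_dir.glob("*.jpg"))

Path("data/images/dataset.txt").write_text("\n".join(paths) + "\n")
```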

Run the conversion:

```bash
python onnx2rknn.py runs/train/exp/weights/best.onnx runs/train/exp/weights/best.rknn
```

Done. (screenshot of the conversion result)

cvetaevvitaliy commented 1 year ago

Example: yolov5s trained for 100 epochs with different activations (SiLU, ReLU, LeakyReLU) and converted to RKNN format: yolov5s_SiLU.zip, yolov5s_ReLU.zip, yolov5s_LeakyReLU.zip

459737087 commented 1 year ago

still wrong

459737087 commented 1 year ago
```
Traceback (most recent call last):
  File "onnxcut.py", line 21, in <module>
    sub = onnx_sub()
  File "onnxcut.py", line 5, in onnx_sub
    onnx.utils.extract_model(opt.onnx_input, opt.onnx_output, opt.model_input, opt.model_output)
  File "/output/.pylibs/lib/python3.8/site-packages/onnx/utils.py", line 163, in extract_model
    extracted = e.extract_model(input_names, output_names)
  File "/output/.pylibs/lib/python3.8/site-packages/onnx/utils.py", line 124, in extract_model
    outputs = self._collect_new_outputs(output_names)
  File "/output/.pylibs/lib/python3.8/site-packages/onnx/utils.py", line 51, in _collect_new_outputs
    return self._collect_new_io_core(self.graph.output, names)  # type: ignore
  File "/output/.pylibs/lib/python3.8/site-packages/onnx/utils.py", line 41, in _collect_new_io_core
    new_io_tensors.append(self.vimap[name])
KeyError: 'onnx::Reshape_367'
```

@cvetaevvitaliy

Guemann-ui commented 1 year ago

Same here, I get overlapping bounding boxes that are wrong detections. Any updates, please? And why does this work fine with the provided YOLO COCO models but not with our custom models?

cvetaevvitaliy commented 1 year ago

Of course, I tested this script for cutting and converting the ONNX model.

TByte007 commented 1 year ago

Same here, yolov5s.pt generates what looks like millions of fake detections. By the way, there are no Reshape nodes in the model as far as I can see, and that results in this: KeyError: 'onnx::Reshape_367'

I see no ReLU or SiLU activations; basically everything is Sigmoid. (image)

gemaizi commented 1 month ago

I faced the same problem and solved it by removing the sigmoid in test.py. I commented out the 4 sigmoid calls on the outputs; here is the code:

```python
def process(input, mask, anchors):
    anchors = [anchors[i] for i in mask]
    grid_h, grid_w = map(int, input.shape[0:2])

    # box_confidence = sigmoid(input[..., 4])
    box_confidence = input[..., 4]
    box_confidence = np.expand_dims(box_confidence, axis=-1)

    # box_class_probs = sigmoid(input[..., 5:])
    box_class_probs = input[..., 5:]

    # box_xy = sigmoid(input[..., :2]) * 2 - 0.5
    box_xy = input[..., :2]*2 - 0.5

    col = np.tile(np.arange(0, grid_w), grid_w).reshape(-1, grid_w)
    row = np.tile(np.arange(0, grid_h).reshape(-1, 1), grid_h)
    col = col.reshape(grid_h, grid_w, 1, 1).repeat(3, axis=-2)
    row = row.reshape(grid_h, grid_w, 1, 1).repeat(3, axis=-2)
    grid = np.concatenate((col, row), axis=-1)
    box_xy += grid
    box_xy *= int(IMG_SIZE/grid_h)

    # box_wh = pow(sigmoid(input[..., 2:4]) * 2, 2)
    box_wh = pow(input[..., 2:4]*2, 2)
    box_wh = box_wh * anchors

    box = np.concatenate((box_xy, box_wh), axis=-1)

    return box, box_confidence, box_class_probs
```
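Commenting out the sigmoids only makes sense if the exported graph already applies sigmoid to the head outputs; otherwise the original test.py post-processing is correct. A hedged, hypothetical check (not part of test.py): if the raw head outputs are already confined to [0, 1], a second sigmoid in post-processing will squash the scores and distort the boxes.

```python
import numpy as np

def looks_already_activated(head_output: np.ndarray) -> bool:
    """Heuristic: values confined to [0, 1] suggest sigmoid was applied in-graph."""
    return float(head_output.min()) >= 0.0 and float(head_output.max()) <= 1.0

# Usage sketch: run this on one element of the RKNN inference outputs before
# deciding whether to comment out the sigmoid calls in process().
```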