Z-Xiong / LightTrack-rknn

RKNN version demo of [CVPR21] LightTrack: Finding Lightweight Neural Network for Object Tracking via One-Shot Architecture Search

Convert RKNN Model #3

Open b21827043 opened 1 year ago

b21827043 commented 1 year ago

Hello, first of all thanks for the repo. I converted the model from your LightTrack ncnn repo into init, head, neck, and backbone ONNX models. I want to convert these models to RKNN models, but I'm a bit confused. What should I set for the mean and std values? And in the code below:

ret = rknn.build(do_quantization=QUANTIZE_ON, dataset=DATASET)

What image resolution should I choose for the dataset?

Thanks.

Z-Xiong commented 1 year ago

Thanks for your attention to my repo. It's a little hard to understand. The input shapes are: init: (1, 3, 127, 127); backbone: (1, 3, 256, 256); head: (1, 96, 8, 8) and (1, 96, 18, 18). For the head, you need to build *.npy files as the dataset, because it has two input tensors. And I suggest using hybrid quantization.
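
(For reference, a minimal sketch of what the conversion of one of the single-input models might look like with rknn-toolkit. The file names, target platform, and mean/std values below are assumptions, not taken from the repo; the mean/std must match whatever preprocessing the exported ONNX expects.)

```python
# Sketch: convert the single-input backbone ONNX (input shape (1, 3, 256, 256)) to RKNN.
# File names, target_platform, and mean/std values are placeholders / assumptions.
from rknn.api import RKNN

QUANTIZE_ON = True
DATASET = './dataset_backbone.txt'   # text file: one 256x256 calibration image path per line

rknn = RKNN()

# If the ONNX expects raw 0-255 input, keep mean 0 / std 1;
# if it expects input scaled to [0, 1], use std 255 instead.
rknn.config(mean_values=[[0, 0, 0]], std_values=[[1, 1, 1]], target_platform='rk3588')

rknn.load_onnx(model='./backbone.onnx')
ret = rknn.build(do_quantization=QUANTIZE_ON, dataset=DATASET)
rknn.export_rknn('./backbone.rknn')
rknn.release()
```

The init model would be converted the same way, but with 127x127 template crops as the calibration images.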

b21827043 commented 1 year ago

Hello sir, could you share some code showing how to convert to RKNN when you have time? I'm having trouble understanding. Thanks in advance.

Z-Xiong commented 1 year ago

I uploaded the conversion code here: https://github.com/Z-Xiong/LightTrack-rknn/commit/e83bcae65c29a00897c7b037f755ff4d75d4d076 You need to build your own dataset. In particular, note that the neck_head dataset needs to use *.npy files as input tensors, because its input shapes are (1, 96, 8, 8) and (1, 96, 18, 18).
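
(For reference, one way the two-input neck_head calibration dataset might be assembled. The file names and the "one sample per line, input paths separated by spaces" dataset.txt convention are assumptions based on rknn-toolkit's multi-input handling, not something confirmed in the linked commit.)

```python
# Sketch: build *.npy calibration data for the two-input neck/head model.
# Shapes follow the comment above: (1, 96, 8, 8) and (1, 96, 18, 18).
# Random tensors are used only as a placeholder; real features dumped from the
# backbone on actual template/search crops should be used for meaningful quantization.
import numpy as np

num_samples = 20
lines = []
for i in range(num_samples):
    zf = np.random.rand(1, 96, 8, 8).astype(np.float32)    # template feature (placeholder)
    xf = np.random.rand(1, 96, 18, 18).astype(np.float32)  # search feature (placeholder)
    np.save(f'./calib/zf_{i}.npy', zf)
    np.save(f'./calib/xf_{i}.npy', xf)
    lines.append(f'./calib/zf_{i}.npy ./calib/xf_{i}.npy')

with open('./dataset_neck_head.txt', 'w') as f:
    f.write('\n'.join(lines) + '\n')
```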