JoshuaJakowlew opened this issue 1 year ago
Actually, the REGTASK error shouldn't cause too many problems for inference. Did you install rknn_lite and rknn_toolkit2 with the correct versions?
Yes, I use 1.5.0 both on the host and on the board, as stated in your repo. The only difference is that my model has a 640x640 resolution instead of 640x480. After that error there are no results; here is the full log:
(env) rock@firefly:~/prog/rknn_repo$ python ./rknn_lite_inference.py
I RKNN: [17:09:47.306] RKNN Runtime Information: librknnrt version: 1.5.0 (e6fe0c678@2023-05-25T08:09:20)
I RKNN: [17:09:47.306] RKNN Driver Information: version: 0.7.2
I RKNN: [17:09:47.308] RKNN Model Information: version: 4, toolkit version: 1.5.0+1fa95b5c(compiler version: 1.5.0 (e6fe0c678@2023-05-25T16:15:03)), target: RKNPU lite, target platform: rk3568, framework name: ONNX, framework layout: NCHW, model inference type: static_shape
I RKNN: [17:09:47.810] RKNN Runtime Information: librknnrt version: 1.5.0 (e6fe0c678@2023-05-25T08:09:20)
I RKNN: [17:09:47.810] RKNN Driver Information: version: 0.7.2
I RKNN: [17:09:47.812] RKNN Model Information: version: 4, toolkit version: 1.5.0+1fa95b5c(compiler version: 1.5.0 (e6fe0c678@2023-05-25T16:15:03)), target: RKNPU lite, target platform: rk3568, framework name: ONNX, framework layout: NCHW, model inference type: static_shape
E RKNN: [17:09:54.300] failed to submit!, op id: 192, op name: Mul:Mul_248, flags: 0x5, task start: 579, task number: 15, run task counter: 7, int status: 0, please try updating to the latest version of the toolkit2 and runtime from: https://eyun.baidu.com/s/3eTDMk6Y (PWD: rknn)
W RKNN: [17:09:54.301] Output(output0): size_with_stride larger than model origin size, if need run OutputOperator in NPU, please call rknn_create_memory using size_with_stride.
No Detection result
RKNN inference finish
The converted model runs fine on the host via simulation, but on the board everything fails.
I also found the following messages in dmesg:
[ 137.339648] RKNPU: job timeout, irq status: 0x0, raw status: 0x10000, require mask: 0x300, task counter: 0xc
[ 137.339700] RKNPU: soft reset
[ 163.366332] RKNPU: job timeout, irq status: 0x0, raw status: 0x0, require mask: 0x300, task counter: 0x6
[ 163.366379] RKNPU: soft reset
It looks like the RKNPU fails with a timeout. I have no idea how to deal with it :)
I think it may be an ONNX problem. As you mentioned, "failed to submit!, op id: 192, op name: Mul:Mul_248" - the layer in the middle of the model is configured incorrectly.
Can you provide your onnx, onnxruntime, and pytorch versions? Also, did you convert the model with opset 12?
Yes, I used the pytorch2onnx.py script, so model.export(format="onnx", imgsz=[input_height, input_width], opset=12).
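For reference, a minimal standalone equivalent of that export step (a sketch assuming the ultralytics package; the repo script may differ in details):

# Minimal sketch of the ONNX export, assuming the ultralytics package.
# The weights file is the stock yolov8n.pt mentioned below.
from ultralytics import YOLO

input_height, input_width = 640, 640
model = YOLO("yolov8n.pt")  # pre-trained weights
model.export(format="onnx", imgsz=[input_height, input_width], opset=12)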
I use:
onnx==1.10.0
onnxoptimizer==0.2.7
onnxruntime==1.10.0
torch==1.10.1
torchvision==0.11.2
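A quick way to confirm what the environment actually imports (to rule out a silent mismatch with the pinned list above):

# Print the versions picked up inside the active environment.
import onnx, onnxruntime, torch, torchvision
print("onnx        ", onnx.__version__)
print("onnxruntime ", onnxruntime.__version__)
print("torch       ", torch.__version__)
print("torchvision ", torchvision.__version__)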
Here are my models; yolov8n.pt is the default pre-trained YOLO model: https://dropmefiles.net/ua/78R4Y
Which RK platform are you using, rk3588 or rk3568? If you are on rk3568, you cannot use rknn_lite.init_runtime, because that only works on the rk3588 platform.
I use rk3568. So what should I do to run the model? I commented out the line with init_runtime - the error is gone, but there are no results, and the script finishes immediately.
Here is the code I use:
import os, cv2, time, numpy as np
from utils import *
from rknnlite.api import RKNNLite
conf_thres = 0.25
iou_thres = 0.45
input_width = 640
input_height = 640
model_name = 'yolov8n'
model_path = "./model"
config_path = "./config"
result_path = "./result"
image_path = "./dataset/bus.jpg"
video_path = "test.mp4"
video_inference = False
RKNN_MODEL = f'{model_path}/{model_name}-{input_height}-{input_width}.rknn'
CLASSES = ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis','snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush']
if __name__ == '__main__':
    # create the result directory if it does not exist yet
    isExist = os.path.exists(result_path)
    if not isExist:
        os.makedirs(result_path)

    # load the pre-converted RKNN model
    rknn_lite = RKNNLite(verbose=False)
    ret = rknn_lite.load_rknn(RKNN_MODEL)
    # ret = rknn_lite.init_runtime()

    if video_inference == True:
        # video inference: run the model frame by frame
        cap = cv2.VideoCapture(video_path)
        while True:
            ret, image_3c = cap.read()
            if not ret:
                break
            print('--> Running model for video inference')
            image_4c, image_3c = preprocess(image_3c, input_height, input_width)
            ret = rknn_lite.init_runtime()
            start = time.time()
            outputs = rknn_lite.inference(inputs=[image_3c])
            stop = time.time()
            fps = round(1 / (stop - start), 2)
            outputs[0] = np.squeeze(outputs[0])
            outputs[0] = np.expand_dims(outputs[0], axis=0)
            colorlist = gen_color(len(CLASSES))
            results = postprocess(outputs, image_4c, image_3c, conf_thres, iou_thres, classes=len(CLASSES))  ## [box, mask, shape]
            results = results[0]  ## batch=1
            boxes, shape = results
            if isinstance(boxes, np.ndarray):
                vis_img = vis_result(image_3c, results, colorlist, CLASSES, result_path)
                cv2.imshow("vis_img", vis_img)
                print('--> Save inference result')
            else:
                print("No Detection result")
            cv2.waitKey(10)
    else:
        # single-image inference
        image_3c = cv2.imread(image_path)
        image_4c, image_3c = preprocess(image_3c, input_height, input_width)
        ret = rknn_lite.init_runtime()
        start = time.time()
        outputs = rknn_lite.inference(inputs=[image_3c])
        stop = time.time()
        fps = round(1 / (stop - start), 2)
        outputs[0] = np.squeeze(outputs[0])
        outputs[0] = np.expand_dims(outputs[0], axis=0)
        colorlist = gen_color(len(CLASSES))
        results = postprocess(outputs, image_4c, image_3c, conf_thres, iou_thres, classes=len(CLASSES))  ## [box, mask, shape]
        results = results[0]  ## batch=1
        boxes, shape = results
        print(boxes)
        print(shape)
        if isinstance(boxes, np.ndarray):
            vis_img = vis_result(image_3c, results, colorlist, CLASSES, result_path)
            print('--> Save inference result')
        else:
            print("No Detection result")

    print("RKNN inference finish")
    rknn_lite.release()
    cv2.destroyAllWindows()
And it outputs:
(env) rock@firefly:~/prog/rknn_repo$ python ./rknn_lite_inference.py
I RKNN: [15:19:27.056] RKNN Runtime Information: librknnrt version: 1.4.0 (a10f100eb@2022-09-09T09:07:14)
I RKNN: [15:19:27.056] RKNN Driver Information: version: 0.7.2
I RKNN: [15:19:27.058] RKNN Model Information: version: 1, toolkit version: 1.4.0-22dcfef4(compiler version: 1.4.0 (3b4520e4f@2022-09-05T20:52:35)), target: RKNPU lite, target platform: rk3568, framework name: ONNX, framework layout: NCHW
[]
[]
No Detection result
RKNN inference finish
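For what it's worth, my (possibly wrong) understanding is that only the core_mask argument of init_runtime is rk3588-specific; here is a minimal sketch of both variants, assuming the rknn_toolkit_lite2 API and my model path:

# Minimal RKNNLite sketch; treating core_mask as the rk3588-specific part
# is my assumption, not something documented in this repo.
from rknnlite.api import RKNNLite

rknn_lite = RKNNLite()
rknn_lite.load_rknn('./model/yolov8n-640-640.rknn')

# On rk3588 the runtime can be pinned to specific NPU cores:
# ret = rknn_lite.init_runtime(core_mask=RKNNLite.NPU_CORE_0)

# On rk3568 I call it without a core_mask:
ret = rknn_lite.init_runtime()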
You should run onnx2rknn_step1.py with platform = "rk3568" first. Then follow the guide again, and remember to comment out init_runtime.
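Roughly, the part that matters is that the conversion targets rk3568. A sketch with rknn-toolkit2 (file names, mean/std values and the quantization choice are placeholders, not copied from the repo scripts):

# Sketch of an rk3568-targeted conversion with rknn-toolkit2.
from rknn.api import RKNN

platform = "rk3568"  # must match the board; an rk3588 build will not run on rk3568

rknn = RKNN(verbose=False)
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
            target_platform=platform)
rknn.load_onnx(model='./model/yolov8n-640-640.onnx')
rknn.build(do_quantization=False)
rknn.export_rknn('./model/yolov8n-640-640.rknn')
rknn.release()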
Yes, I used platform = "rk3568" from the beginning, but the model still outputs nothing on the real hardware :(
Hello! I'm using your repo to convert the default yolov8n model to the rknn format. Running onnx2rknn_step2.py gives me the following errors. Nevertheless, a .rknn model is still produced, but running it on the board fails with this error:
I saw your issue on the rknn-toolkit repo, where you said you had fixed this problem. As far as I understand, your repo performs a two-step hybrid quantization, but I have no idea what you did afterwards to make it work.
Here is my generated config from step 1. I also tried changing float16 to int8 - the same error on the real hardware: https://gist.github.com/JoshuaJakowlew/79be3060e1dd3fdb867c87d1ff5e7fd1
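For reference, my understanding of the generic two-step hybrid quantization flow in rknn-toolkit2 (taken from the toolkit API, not from your scripts; paths are illustrative, and the step-2 file names are whatever step 1 actually writes out):

# Sketch of rknn-toolkit2 hybrid quantization, split into the usual two stages.
from rknn.api import RKNN

# --- step 1: analyse the ONNX model and emit an editable .quantization.cfg ---
rknn = RKNN(verbose=False)
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
            target_platform='rk3568')
rknn.load_onnx(model='./model/yolov8n-640-640.onnx')
rknn.hybrid_quantization_step1(dataset='./dataset.txt')
rknn.release()

# (edit the generated .quantization.cfg between the two steps,
#  e.g. switching individual layers between float16 and int8)

# --- step 2: build the final .rknn model from the edited config ---
rknn = RKNN(verbose=False)
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
            target_platform='rk3568')
rknn.hybrid_quantization_step2(model_input='./yolov8n-640-640.model',
                               data_input='./yolov8n-640-640.data',
                               model_quantization_cfg='./yolov8n-640-640.quantization.cfg')
rknn.export_rknn('./model/yolov8n-640-640.rknn')
rknn.release()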