openedev opened 9 months ago
I was successful with the following steps:

1. Export your .pt model to .onnx using the custom method described at https://github.com/airockchip/ultralytics_yolov8/blob/main/RKOPT_README.md. This involves setting model:, data:, and classes:, though I don't think it matters what data: or classes: are set to.
2. Convert the model to rknn as before with convert.py.
3. Before compiling with build-linux.sh, make two modifications, including changing OBJ_CLASS_NUM in postprocess.h to comport with your model.

As I mentioned, I used the command below instead of your step 1:
yolo export model=./best.pt imgsz=640,640 format=onnx opset=12
Is anything incorrect in the onnx conversion?
In fact, I did try your step 1 by changing the model, but I found the issue below while testing the onnx.
$ python test.py
WARNING ⚠️ Unable to automatically guess model task, assuming 'task=detect'. Explicitly define task for your model, i.e. 'task=detect', 'segment', 'classify','pose' or 'obb'.
Loading plant1v8.onnx for ONNX Runtime inference...
WARNING ⚠️ Metadata not found for 'model=plant1v8.onnx'
Traceback (most recent call last):
File "/home/build/shared/test.py", line 7, in <module>
results = model(['plant.jpg']) # return a list of Results objects
^^^^^^^^^^^^^^^^^^^^
File "/home/build/conda/lib/python3.11/site-packages/ultralytics/engine/model.py", line 169, in __call__
return self.predict(source, stream, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/build/conda/lib/python3.11/site-packages/ultralytics/engine/model.py", line 439, in predict
return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/build/conda/lib/python3.11/site-packages/ultralytics/engine/predictor.py", line 206, in __call__
return list(self.stream_inference(source, model, *args, **kwargs)) # merge list of Result into one
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/build/conda/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
response = gen.send(None)
^^^^^^^^^^^^^^
File "/home/build/conda/lib/python3.11/site-packages/ultralytics/engine/predictor.py", line 292, in stream_inference
self.results = self.postprocess(preds, im, im0s)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/build/conda/lib/python3.11/site-packages/ultralytics/models/yolo/detect/predict.py", line 25, in postprocess
preds = ops.non_max_suppression(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/build/conda/lib/python3.11/site-packages/ultralytics/utils/ops.py", line 239, in non_max_suppression
x = x[xc[xi]] # confidence
~^^^^^^^^
IndexError: The shape of the mask [80, 80] at index 0 does not match the shape of the indexed tensor [64, 80, 80] at index 0
Here is my test.py
from ultralytics import YOLO

# Load a model
model = YOLO('plant1v8.onnx')  # pretrained YOLOv8n model

# Run batched inference on a list of images
results = model(['plant.jpg'])  # return a list of Results objects

# Process results list
for result in results:
    boxes = result.boxes          # Boxes object for bounding box outputs
    masks = result.masks          # Masks object for segmentation masks outputs
    keypoints = result.keypoints  # Keypoints object for pose outputs
    probs = result.probs          # Probs object for classification outputs
    # result.show()               # display to screen
    result.save(filename='prediction_banana_onnx.jpg')  # save to disk
PS: test.py gave a proper result when I used my method of onnx conversion:
yolo export model=./best.pt imgsz=640,640 format=onnx opset=12
@openedev
If you want to use the rknn model with the yolov8 C++ code in this repo, then you need to convert to onnx and then to rknn using the method I indicated. If you just need onnx, then just use the ultralytics repo.
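For reference, the onnx-to-rknn step in rknn_model_zoo is driven by convert.py, which (roughly) wraps the rknn-toolkit2 Python API. Below is a minimal sketch of that conversion, assuming placeholder file names (best.onnx, best.rknn, dataset.txt) and the usual 0-255 RGB normalization; treat it as an illustration of the API, not a drop-in replacement for convert.py.

# Minimal sketch of the ONNX -> RKNN step with the rknn-toolkit2 API.
# File names, mean/std values and the quantization dataset path are assumptions.
from rknn.api import RKNN

rknn = RKNN(verbose=True)

# Preprocessing baked into the .rknn (input scaled from 0-255 to 0-1)
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
            target_platform='rk3588')

# Load the ONNX exported with the airockchip fork
if rknn.load_onnx(model='best.onnx') != 0:
    raise RuntimeError('load_onnx failed')

# do_quantization=True needs a dataset.txt listing calibration images
if rknn.build(do_quantization=True, dataset='./dataset.txt') != 0:
    raise RuntimeError('build failed')

if rknn.export_rknn('best.rknn') != 0:
    raise RuntimeError('export_rknn failed')

rknn.release()

In the repo itself the equivalent invocation is roughly "python convert.py <model.onnx> <target_platform>", so the sketch above is mainly useful for seeing which knobs (quantization, mean/std, target platform) the conversion exposes.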
@tylertroy Yes, for the above steps I have used Yolov8 for converting pt to onnx - https://github.com/airockchip/ultralytics_yolov8/tree/main
And I did test the onnx with a simple python script before proceeding to convert to rknn using rknn_model_zoo. Eventually I need the cpp code to test the rknn, but the onnx itself is failing the python test based on your step 1.
@openedev The reason relates to the unique post processing required for each model because of the difference in their output nodes. If you compare the graphs of the onnx models converted by each method you'll notice very different output nodes. You can visualize the graphs with the netron app. Just open each .onnx in a separate viewing tab and you'll understand what I'm referring to.
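For anyone who prefers a script over Netron, the output nodes can also be dumped with onnxruntime; the file names below are placeholders for the two exports being compared. A stock ultralytics detect export typically shows a single concatenated output (around [1, 4 + num_classes, 8400] for a 640x640 input), while the RKOPT-style export keeps separate branch outputs such as a 1x64x80x80 DFL tensor, which is consistent with the IndexError shown earlier.

# Sketch: dump the output node names/shapes of both ONNX exports with
# onnxruntime to see why their post-processing differs.
# The file names below are placeholders for the two models being compared.
import onnxruntime as ort

for path in ('best_stock_ultralytics.onnx', 'best_rkopt.onnx'):
    sess = ort.InferenceSession(path, providers=['CPUExecutionProvider'])
    print(path)
    for out in sess.get_outputs():
        print('  ', out.name, out.shape)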
@tylertroy
Hey,
I have tried the process you mentioned in the following thread, but the model performance is not up to the mark: it is detecting random pixels in the image and throwing random labels. Can you guess where we are going wrong?
Is it detecting random pixels with your custom model or the default yolov8m model?
@tylertroy Yes, it is detecting random pixels for the custom-dataset model developed using the yolov8 model. Apologies for my delayed response.
Hi,
I'm trying to deploy a custom model, where the pt gets converted to onnx and the onnx gets converted to rknn. Both the pt and the onnx give the proper output on the host, but the rknn doesn't give the expected output on the rk3588 target; it shows the same output image as the input.
Here are the detailed steps,
On rk3588 target
However, the default yolov8 onnx mentioned in rknn_model_zoo works as expected.
The only difference I can see in the non-working case is fmt=UNDEFINED.
Any help on where it goes wrong?
Jagan.
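One way to narrow down whether the problem is in the .rknn itself or in the C++ demo's pre/post-processing is to run the converted model directly on the board with rknn_toolkit_lite2 and look at the raw output tensors. A minimal sketch follows; the model/image paths and the plain 640x640 RGB resize are assumptions (the C++ demo letterboxes rather than stretching).

# Sketch: sanity-check the converted .rknn directly on the RK3588 with
# rknn_toolkit_lite2, bypassing the C++ demo. Paths and the simple
# 640x640 resize are assumptions.
import cv2
import numpy as np
from rknnlite.api import RKNNLite

rknn_lite = RKNNLite()
if rknn_lite.load_rknn('best.rknn') != 0:
    raise RuntimeError('load_rknn failed')
if rknn_lite.init_runtime() != 0:  # a core_mask can optionally be passed on RK3588
    raise RuntimeError('init_runtime failed')

img = cv2.imread('plant.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (640, 640))   # plain resize; the C++ demo letterboxes
img = np.expand_dims(img, axis=0)   # NHWC batch of 1, uint8

# Raw branch outputs; if these are all zeros or constants, suspect the
# conversion/quantization rather than the demo's post-processing.
outputs = rknn_lite.inference(inputs=[img])
for i, out in enumerate(outputs):
    print(i, out.shape, float(out.min()), float(out.max()))

rknn_lite.release()

If the raw outputs look sane here but the C++ demo still returns the input image unchanged, the mismatch is more likely in the demo's preprocessing or in OBJ_CLASS_NUM than in the .rknn file itself.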