Closed: pcycccccc closed this issue 6 months ago
Hi @pcycccccc, since the example works for the two other models, the code itself is working properly, which means the problem is one of two things (or both):
Since changing the score and NMS thresholds requires re-optimizing and re-compiling the model, I suggest first checking whether your model includes a background class and adjusting it / the code accordingly.
If that is not the case, try lowering the score threshold to 0.001 while keeping the IoU threshold at 0.7. If that doesn't work, try lowering the IoU threshold as well.
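As a rough illustration of where those settings live, here is a minimal sketch of editing the post-process config before re-running the optimization (the key names below are assumptions based on typical yolov8n_nms_config.json files, so please verify them against your own copy):

```python
import json

# Sketch only: lower the thresholds and check the class settings in the NMS
# post-process config, then re-optimize and re-compile the model.
# The key names (nms_scores_th, nms_iou_th, classes) are assumptions --
# check them against your local yolov8n_nms_config.json.
CONFIG_PATH = "yolov8n_nms_config.json"

with open(CONFIG_PATH) as f:
    cfg = json.load(f)

print("existing keys:", list(cfg.keys()))  # see what the file actually defines

cfg["nms_scores_th"] = 0.001  # very permissive score threshold for the first test
cfg["nms_iou_th"] = 0.7       # keep IoU at 0.7 initially; lower it only if needed
cfg["classes"] = 1            # single "person" class (adjust if a background class is expected)

with open(CONFIG_PATH, "w") as f:
    json.dump(cfg, f, indent=4)
```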
Regards,
Hi @omerwer, thank you very much for your attention. I do not quite understand the significance of the 'background' category. I have tried adding and removing it, but this does not seem to affect my results: I still cannot obtain any bounding boxes on the images. I have tried lowering the score threshold to 0.001 and the IoU threshold to 0.45, and experimented with different threshold parameters when converting the model, but I am still unable to display any targets.
I am not entirely sure whether I need to modify other category-related parameters during the model conversion process. I only changed the model's path and the total number of classes, using the following command to convert the model: hailomz compile --ckpt /root/autodl-tmp/my_data/person_v8n.onnx --calib-path /root/autodl-tmp/my_data/person_image/ --yaml /root/autodl-tmp/hailo_mz/hailo_model_zoo/hailo_model_zoo/cfg/networks/yolov8n.yaml --classes 1.
Below is the output from the conversion process of my model.
It is worth noting that when I perform inference on the same image using person_v8.onnx with onnxruntime, it outputs results normally with a score threshold of 0.20 and an IoU threshold of 0.7.
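For completeness, the onnxruntime check is essentially the following sketch (the 640x640 input size, NCHW layout, and 0-1 normalization are assumptions for a standard ultralytics YOLOv8 export; file names are illustrative):

```python
import cv2
import numpy as np
import onnxruntime as ort

# Sanity check of the original ONNX before conversion.
sess = ort.InferenceSession("person_v8n.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name

img = cv2.imread("test.jpg")
blob = cv2.resize(img, (640, 640))[:, :, ::-1]                    # BGR -> RGB
blob = blob.transpose(2, 0, 1)[None].astype(np.float32) / 255.0   # NCHW, values in 0..1

outputs = sess.run(None, {input_name: blob})
print([o.shape for o in outputs])  # a standard 1-class export should report (1, 5, 8400)
```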
I still cannot pinpoint the issue. From my perspective, model conversion and inference work normally for models with 80 classes, but inference using my own model does not yield the expected results.
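For what it is worth, the only additional check I can think of on the compiled side is listing the HEF's output vstreams to confirm that the NMS output is present, roughly as below (this assumes the hailo_platform package and its HEF.get_output_vstream_infos() call, as used in the HailoRT tutorials I followed):

```python
from hailo_platform import HEF

# List the output vstreams of the compiled model; with integrated NMS there
# should be a single post-process output rather than the raw YOLO branches.
hef = HEF("person_v8n.hef")
for info in hef.get_output_vstream_infos():
    print(info.name)
```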
Recently, I have been attempting to perform image inference on Windows using models that I have converted myself. To verify the feasibility of my conversion method, I tested three models: person_v8n.onnx (from my own dataset with only a "person" category), yolov8n.onnx (converted from a .pt model provided by the official ultralytics project), and yolov8n_hailo.onnx (a model provided by the official hailo_mz).

Following the instructions from the model_zoo, I used the same commands to convert the ONNX models into HEF models. Notably, person_v8n.onnx differs from the other two models in that it contains only one category, whereas both yolov8n.onnx and yolov8n_hailo.onnx cover 80 categories. The calibration dataset for person_v8n.onnx consists of images from specific scenes, while the datasets for yolov8n.onnx and yolov8n_hailo.onnx are from coco_val2017. All three models were successfully converted to HEF models, and I have integrated the NMS into the model files using yolov8n_nms_config.json.

However, during testing, I observed that while yolov8n.hef and yolov8n_hailo.hef were able to display bounding boxes on the images, person_v8n.hef seemed to detect no targets, as no bounding boxes were drawn on the images (no other errors occurred during the process, and the categories and vstream_output_data were modified according to the model specifications). Could anyone advise on potential reasons for this issue?
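For context, the drawing step on my side is roughly the following sketch (it assumes the HEF with integrated NMS returns one array per class, each row being [y_min, x_min, y_max, x_max, score] in normalized coordinates; the names and that layout are illustrative rather than my exact code):

```python
import cv2

SCORE_TH = 0.20  # the same threshold that works for the two 80-class models

def draw_detections(image, nms_output, class_names):
    """nms_output is assumed to be a list with one entry per class, each an
    N x 5 array of [y_min, x_min, y_max, x_max, score] rows in normalized
    coordinates, as produced by a HEF with the NMS integrated."""
    h, w = image.shape[:2]
    for class_id, detections in enumerate(nms_output):
        for y_min, x_min, y_max, x_max, score in detections:
            if score < SCORE_TH:
                continue
            p1 = (int(x_min * w), int(y_min * h))
            p2 = (int(x_max * w), int(y_max * h))
            cv2.rectangle(image, p1, p2, (0, 255, 0), 2)
            cv2.putText(image, f"{class_names[class_id]} {score:.2f}",
                        (p1[0], max(p1[1] - 5, 12)), cv2.FONT_HERSHEY_SIMPLEX,
                        0.5, (0, 255, 0), 1)
    return image
```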
After running the code, there are no objects on the image... so sad!