hailo-ai / hailo_model_zoo

The Hailo Model Zoo includes pre-trained models and a full building and evaluation environment
MIT License
331 stars 46 forks

yolov5 compile error #78

Open ryangsookim opened 10 months ago

ryangsookim commented 10 months ago

Hi,

I'm encountering errors when attempting to compile pretrained yolov5m_wo_spp.onnx using hailomz. Here are the details:

Command:

```
hailomz compile --ckpt ../models/yolov5m_wo_spp.onnx --calib-path /local/shared_with_docker/rskim_model_compiler/datasets/coco/images/train2017/ --yaml yolov5m_wo_spp.yaml
```

Error Messages:

```
[info] First time Hailo Dataflow Compiler is being used. Checking system requirements... (this might take a few seconds)
[info] No GPU connected.
In file included from /local/workspace/hailo_virtualenv/lib/python3.8/site-packages/numpy/core/include/numpy/ndarraytypes.h:1948,
                 from /local/workspace/hailo_virtualenv/lib/python3.8/site-packages/numpy/core/include/numpy/ndarrayobject.h:12,
                 from /local/workspace/hailo_virtualenv/lib/python3.8/site-packages/numpy/core/include/numpy/arrayobject.h:5,
                 from /home/hailo/.pyxbld/temp.linux-x86_64-cpython-38/local/workspace/hailo_model_zoo/hailo_model_zoo/core/postprocessing/cython_utils/cython_nms.c:1173:
/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:17:2: warning: #warning "Using deprecated NumPy API, disable it with " "#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
   17 | #warning "Using deprecated NumPy API, disable it with " \
      |  ^~~
Traceback (most recent call last):
  File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/pyximport/_pyximport3.py", line 314, in create_module
    so_path = build_module(spec.name, pyxfilename=spec.origin, pyxbuild_dir=self._pyxbuild_dir,
  File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/pyximport/_pyximport3.py", line 197, in build_module
    so_path = pyxbuild.pyx_to_dll(pyxfilename, extension_mod,
  File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/pyximport/pyxbuild.py", line 144, in pyx_to_dll
    raise ImportError("reload count for %s reached maximum" % org_path)
ImportError: reload count for /home/hailo/.pyxbld/lib.linux-x86_64-cpython-38/hailo_model_zoo/core/postprocessing/cython_utils/cython_nms.cpython-38-x86_64-linux-gnu.so reached maximum

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/local/workspace/hailo_virtualenv/bin/hailomz", line 33, in <module>
    sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
  File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main.py", line 188, in main
    run(args)
  File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main.py", line 168, in run
    from hailo_model_zoo.main_driver import parse, optimize, compile, profile, evaluate
  File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 14, in <module>
    from hailo_model_zoo.core.main_utils import (compile_model, get_hef_path, get_integrated_postprocessing,
  File "/local/workspace/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 8, in <module>
    from hailo_model_zoo.core.eval import eval_factory
  File "/local/workspace/hailo_model_zoo/hailo_model_zoo/core/eval/eval_factory.py", line 9, in <module>
    from hailo_model_zoo.core.eval.instance_segmentation_evaluation import InstanceSegmentationEval
  File "/local/workspace/hailo_model_zoo/hailo_model_zoo/core/eval/instance_segmentation_evaluation.py", line 8, in <module>
    from hailo_model_zoo.core.eval.instance_segmentation_evaluation_utils import (YolactEval,
  File "/local/workspace/hailo_model_zoo/hailo_model_zoo/core/eval/instance_segmentation_evaluation_utils.py", line 4, in <module>
    from hailo_model_zoo.core.postprocessing.instance_segmentation_postprocessing import _sanitize_coordinates
  File "/local/workspace/hailo_model_zoo/hailo_model_zoo/core/postprocessing/instance_segmentation_postprocessing.py", line 8, in <module>
    from hailo_model_zoo.core.postprocessing.cython_utils.cython_nms import nms as cnms
  File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/pyximport/_pyximport3.py", line 332, in create_module
    raise exc.with_traceback(tb)
  File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/pyximport/_pyximport3.py", line 314, in create_module
    so_path = build_module(spec.name, pyxfilename=spec.origin, pyxbuild_dir=self._pyxbuild_dir,
  File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/pyximport/_pyximport3.py", line 197, in build_module
    so_path = pyxbuild.pyx_to_dll(pyxfilename, extension_mod,
  File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/pyximport/pyxbuild.py", line 144, in pyx_to_dll
    raise ImportError("reload count for %s reached maximum" % org_path)
ImportError: Building module hailo_model_zoo.core.postprocessing.cython_utils.cython_nms failed: ['ImportError: reload count for /home/hailo/.pyxbld/lib.linux-x86_64-cpython-38/hailo_model_zoo/core/postprocessing/cython_utils/cython_nms.cpython-38-x86_64-linux-gnu.so reached maximum\n']
```

What's causing these errors, and how can I resolve them?
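As background on this class of error: pyximport raises `reload count ... reached maximum` when it repeatedly fails to (re)build a Cython module, and a commonly suggested workaround is to delete the cached build directory so the module is rebuilt from scratch. A minimal sketch, assuming the cache lives at `~/.pyxbld` as shown in the traceback:

```python
import pathlib
import shutil

# Location of the pyximport build cache, as seen in the traceback paths.
pyxbld_cache = pathlib.Path.home() / ".pyxbld"

# Remove any stale build artifacts; pyximport recreates this directory
# and rebuilds cython_nms on the next import.
if pyxbld_cache.exists():
    shutil.rmtree(pyxbld_cache)
```

Whether this alone fixes the failure depends on why the build failed in the first place (compiler toolchain, NumPy headers, permissions).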

nina-vilela commented 10 months ago

Hi @ryangsookim,

You mentioned that the model is pre-trained. If you mean that you haven't re-trained the model, then you don't have to go through the compilation process. You can instead download an already compiled model here.

If you have re-trained the model, note that the calibration dataset does not need to contain the entire training set. For yolov5m_wo_spp, we use a calibration dataset of 4000 images. The error you are facing could be related to using too many images that are not encoded as a tfrecord. Please try again with a folder containing 4000 images and let us know how it goes.
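As a sketch of this suggestion (the 4000-image figure comes from the comment above; the folder names here are assumptions for illustration), one way to build a smaller calibration folder is to copy a random sample of the training images:

```python
import random
import shutil
from pathlib import Path

SRC = Path("coco/images/train2017")  # hypothetical location of the full training set
DST = Path("coco/calib_subset")      # hypothetical folder to pass via --calib-path
DST.mkdir(parents=True, exist_ok=True)

images = sorted(SRC.glob("*.jpg"))
random.seed(0)  # reproducible subset
for img in random.sample(images, min(4000, len(images))):
    shutil.copy(img, DST / img.name)
```

The resulting folder would then be passed to `hailomz compile --calib-path coco/calib_subset ...` in place of the full training directory.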

ryangsookim commented 10 months ago

Hi @nina-vilela

I'm reaching out to you regarding a persistent error I'm encountering while training and compiling a custom YOLOv5 model. I'm currently using the HAILO-APPLICATION-CODE-EXAMPLES to test model compilation, but I've encountered an incompatibility.

Here's the issue:

The compiled HEF model's output layer is named "yolov5_nms_postprocess" with dimensions (80, 5, 80). However, the "yolo_general_inference.py" script within the HAILO-APPLICATION-CODE-EXAMPLES seems to expect a different output structure, leading to incompatibility.

I'd appreciate your assistance with the following:

Reference Python Code for Testing: Could you please provide reference Python code that demonstrates how to test a compiled model with an output layer named "nms_postprocess"?

Compilation Workflow for yolov7.hef: Could you clarify the compilation workflow used to create the "yolov7.hef" model that's compatible with "yolo_general_inference.py"? This would help me understand the expected model structure.

Thank you for your time and expertise.
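For context on the output layout described above: Hailo's NMS postprocess layers typically emit one array per class, where each row holds a box plus a score. The exact layout varies between SDK versions, so the helper below is a hedged sketch under that assumption (rows of `[y_min, x_min, y_max, x_max, score]`), not the official API:

```python
import numpy as np

def parse_nms_output(per_class_outputs, score_thresh=0.3):
    """Flatten per-class NMS output into (box, score, class_id) tuples.

    Assumes each entry of `per_class_outputs` is an (N_i, 5) array with rows
    [y_min, x_min, y_max, x_max, score]. This layout is an assumption and
    should be verified against the SDK version actually in use.
    """
    detections = []
    for class_id, dets in enumerate(per_class_outputs):
        for row in np.asarray(dets, dtype=np.float32).reshape(-1, 5):
            if row[4] >= score_thresh:
                detections.append((row[:4].tolist(), float(row[4]), class_id))
    return detections

# Tiny synthetic example: two classes, only the first above the threshold.
example = [
    np.array([[0.1, 0.1, 0.5, 0.5, 0.9]]),  # class 0: kept
    np.array([[0.2, 0.2, 0.6, 0.6, 0.1]]),  # class 1: filtered out
]
dets = parse_nms_output(example)
```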

nina-vilela commented 10 months ago

@ryangsookim

I would just like to confirm that your first problem is fixed. You could successfully compile yolov5m_wo_spp, right?

For your other questions, since they do not relate to the Model Zoo, we would be happy to assist you through our ticketing system instead. Please open a ticket here.

ryangsookim commented 10 months ago

@nina-vilela

My first problem was fixed.

Is there any method for creating a calib.tfrecord file from a custom YOLO dataset, specifically without using an instances.json file?

nina-vilela commented 10 months ago

@ryangsookim

You can use this script.
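For orientation only, the general shape of a calibration TFRecord writer looks like the sketch below. The feature keys (`image_jpeg`, `filename`) are assumptions for illustration; the actual Model Zoo script defines its own feature spec, which should be followed:

```python
from pathlib import Path

import tensorflow as tf

def write_calib_tfrecord(image_dir, out_path, limit=4000):
    """Pack raw JPEG bytes into a TFRecord for calibration (no instances.json needed)."""
    paths = sorted(Path(image_dir).glob("*.jpg"))[:limit]
    with tf.io.TFRecordWriter(str(out_path)) as writer:
        for p in paths:
            features = tf.train.Features(feature={
                # Feature keys below are illustrative, not the Model Zoo's spec.
                "image_jpeg": tf.train.Feature(
                    bytes_list=tf.train.BytesList(value=[p.read_bytes()])),
                "filename": tf.train.Feature(
                    bytes_list=tf.train.BytesList(value=[p.name.encode()])),
            })
            writer.write(tf.train.Example(features=features).SerializeToString())
    return len(paths)
```

Since calibration only needs images, no annotation file is required for this step.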

We couldn't find a ticket for your issue with the yolo_general_inference example. Please let us know if you need any help with permissions for opening one.

pcycccccc commented 8 months ago

@nina-vilela Hello, may I ask whether this script is invoked directly? What format does the dataset need to be in before calling it? Images and labels? And what format should the labels be in? I'm a little unclear on that.

tmyapple commented 5 months ago

@pcycccccc you can find a reference for how to use this script in the data documentation.