thawro / yolov8-digits-detection

Digits detection with YOLOv8 detection model and ONNX pre/post processing
https://thawro.github.io/web-object-detector/

onnx model cannot be loaded #5

Open jackysywk opened 1 month ago

jackysywk commented 1 month ago

I just did `git pull` on the repo and ran `docker build` myself on macOS

  File "/opt/homebrew/Caskroom/miniforge/base/envs/yolo8/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/Users/user/Desktop/yolov8-digits-detection/src/gradio_app.py", line 11, in <module>
    object_detector = OnnxObjectDetector()
  File "/Users/user/Desktop/yolov8-digits-detection/src/object_detection/with_onnx.py", line 137, in __init__
    preprocessing=OnnxPreprocessing(preprocessing_path),
  File "/Users/user/Desktop/yolov8-digits-detection/src/object_detection/with_onnx.py", line 47, in __init__
    super().__init__(path, providers=["CPUExecutionProvider"])
  File "/Users/user/Desktop/yolov8-digits-detection/src/object_detection/with_onnx.py", line 26, in __init__
    self.session = ort.InferenceSession(path, providers=providers)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/yolo8/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 383, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/yolo8/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 424, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from /Users/user/Desktop/yolov8-digits-detection/models/preprocessing.onnx failed:Protobuf parsing failed.
Dordor333 commented 2 weeks ago

> I just did `git pull` on the repo and ran `docker build` myself on macOS

> *(same traceback as above, ending in the same `INVALID_PROTOBUF` error for `models/preprocessing.onnx`)*

Did you figure out what is wrong with the ONNX load?

thawro commented 2 weeks ago

The ONNXRuntime version from requirements.txt is 1.15.0. I am not sure if this version supports macOS. Try a newer version, or check: onnxruntime-silicon
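To see which ONNX Runtime distribution is actually installed in the environment (the pinned 1.15.0 versus a newer build, or the Apple-silicon package mentioned above), something like this can be run. It assumes onnxruntime-silicon registers under that distribution name while exposing the same `onnxruntime` import:

```python
from importlib.metadata import PackageNotFoundError, version

# Report which ONNX Runtime distribution (if any) is installed.
# "onnxruntime-silicon" is the Apple-silicon build referenced above.
installed = {}
for dist in ("onnxruntime", "onnxruntime-silicon"):
    try:
        installed[dist] = version(dist)
    except PackageNotFoundError:
        installed[dist] = None

print(installed)
```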

Dordor333 commented 2 weeks ago

> The ONNXRuntime version from requirements.txt is 1.15.0. I am not sure if this version supports macOS. Try a newer version, or check: onnxruntime-silicon

I am running this on Linux, not Mac, and I tried different versions; it still doesn't work. If it matters, I run it in Amazon SageMaker, but I don't think it does. Could you maybe release the torch weights as TorchScript, or as weights that can be loaded into Ultralytics?

thawro commented 2 weeks ago

But the error mentioned earlier points to the preprocessing.onnx file, which handles preprocessing only (letterbox + channel permute), so there are no weights in it
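For reference, a letterbox + channel-permute step of this kind can be sketched in plain NumPy. This is only a rough sketch: the actual graph inside preprocessing.onnx may use a different interpolation, padding value, or normalization.

```python
import numpy as np

def letterbox(image: np.ndarray, size: int = 640, pad_value: int = 114) -> np.ndarray:
    """Resize keeping aspect ratio, pad to a size x size square,
    then permute HWC -> CHW. Nearest-neighbor resize is used here
    just to keep the sketch dependency-free."""
    h, w = image.shape[:2]
    scale = size / max(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))

    # nearest-neighbor resize via index mapping
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = image[rows][:, cols]

    # pad the scaled image onto a square canvas, centered
    canvas = np.full((size, size, 3), pad_value, dtype=image.dtype)
    top = (size - new_h) // 2
    left = (size - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized

    # channel permute HWC -> CHW
    return canvas.transpose(2, 0, 1)
```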

jstat17 commented 5 days ago

The model files are stored using Git LFS. You need to check out the files using `git lfs` in the root directory of the repo. If you open your `.onnx` model files in a text editor and they show hash values, then you need to do this. The YOLOv8 model itself is ~12 MB, but the un-checked-out version is a few kilobytes.
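This diagnosis can be confirmed from Python: LFS pointer stubs are tiny text files whose first line is fixed by the Git LFS pointer spec, whereas a real `.onnx` file is a protobuf binary. A small sketch:

```python
def is_lfs_pointer(path: str) -> bool:
    """Return True if `path` is a Git LFS pointer stub (a tiny text file)
    rather than the real binary it stands in for."""
    with open(path, "rb") as f:
        head = f.read(64)
    # Every LFS pointer file begins with this exact line
    return head.startswith(b"version https://git-lfs.github.com/spec/v1")
```

If it returns True for `models/preprocessing.onnx`, running `git lfs install` and `git lfs pull` in the repo root should fetch the real files and resolve the `INVALID_PROTOBUF` error.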