BreezeWhite / oemer

End-to-end Optical Music Recognition (OMR) system. Transcribes phone-taken sheet music images into MusicXML, which can be edited and converted to MIDI.
https://breezewhite.github.io/oemer/
MIT License

Parsing a basic file fails due to CUDA #37

Closed KB3HNS closed 1 year ago

KB3HNS commented 1 year ago

Describe the bug
I wanted to try this program out, as I do a lot of music transcribing. After a fresh install, attempting to run the program produces several crashes that appear to originate from onnxruntime.

Input Image
The issue exists independent of any image. I cannot provide my own image due to copyright restrictions. I have even tried the one linked in the README as well as a very simple one, attached.

Full Traceback


2023-09-19 22:24:53 Extracting staffline and symbols
d:\devel\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:69: UserWarning: Specified provider 'CoreMLExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
  warnings.warn(
2023-09-19 22:24:53.1337982 [E:onnxruntime:Default, provider_bridge_ort.cc:1480 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "d:\devel\venv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:739 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
 when using ['CoreMLExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
2023-09-19 22:24:53.1975547 [E:onnxruntime:Default, provider_bridge_ort.cc:1480 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "d:\devel\venv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

Traceback (most recent call last):
  File "d:\devel\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "d:\devel\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 471, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:739 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\Python39\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "D:\Python39\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "D:\devel\venv\Scripts\oemer.exe\__main__.py", line 7, in <module>
  File "d:\devel\venv\lib\site-packages\oemer\ete.py", line 276, in main
    mxl_path = extract(args)
  File "d:\devel\venv\lib\site-packages\oemer\ete.py", line 127, in extract
    staff, symbols, stems_rests, notehead, clefs_keys = generate_pred(str(img_path), use_tf=args.use_tf)
  File "d:\devel\venv\lib\site-packages\oemer\ete.py", line 47, in generate_pred
    staff_symbols_map, _ = inference(
  File "d:\devel\venv\lib\site-packages\oemer\inference.py", line 43, in inference
    sess = rt.InferenceSession(onnx_path, providers=providers)
  File "d:\devel\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 430, in __init__
    raise fallback_error from e
  File "d:\devel\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 425, in __init__
    self._create_inference_session(self._fallback_providers, None)
  File "d:\devel\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 471, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:739 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.

Command You Execute
oemer --without-deskew Capture.PNG

Other Relevant Information:

(venv) D:\devel>python --version
Python 3.9.6

(venv) D:\devel>echo %CUDA_PATH%
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4

Windows 10 x64

GPU: RTX3060
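
For context: `LoadLibrary failed with error 126` on Windows usually means the DLL being loaded (here `onnxruntime_providers_cuda.dll`) exists, but one of *its* dependencies — typically the CUDA runtime or cuDNN DLLs — cannot be found. A minimal diagnostic sketch that checks whether those DLLs are present under a CUDA install directory; the DLL names (`cudart64_110.dll`, `cudnn64_8.dll`) are assumptions based on typical CUDA 11.x / cuDNN 8 naming and vary by version:

```python
import os

# DLLs the onnxruntime CUDA provider typically depends on.
# These names are an assumption (CUDA 11.x / cuDNN 8 conventions).
EXPECTED_DLLS = ["cudart64_110.dll", "cudnn64_8.dll"]

def missing_cuda_dlls(cuda_path, expected=EXPECTED_DLLS):
    """Return the expected DLLs not found under <cuda_path>/bin."""
    bin_dir = os.path.join(cuda_path, "bin")
    if not os.path.isdir(bin_dir):
        return list(expected)  # no bin directory at all
    present = set(os.listdir(bin_dir))
    return [dll for dll in expected if dll not in present]

if __name__ == "__main__":
    cuda_path = os.environ.get("CUDA_PATH", "")
    print(missing_cuda_dlls(cuda_path) if cuda_path else "CUDA_PATH not set")
```

Note that the DLLs must also be reachable via `PATH` (or registered with `os.add_dll_directory` on Python 3.8+), not just present on disk.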

BreezeWhite commented 1 year ago

As the error says, it seems there is something wrong with your CUDA setup. Most likely the versions of onnxruntime, CUDA, and cuDNN do not match each other or your GPU card. See the requirements of onnxruntime.
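
The fallback behavior visible in the log (the `UserWarning` about `CoreMLExecutionProvider`, then "Falling back to [...] and retrying") can be sketched in plain Python. `pick_providers` is a hypothetical helper for illustration, not onnxruntime API; the provider names are taken from the log above:

```python
def pick_providers(requested, available):
    """Keep only the requested providers that this onnxruntime build offers."""
    usable = [p for p in requested if p in available]
    # Every onnxruntime build ships the CPU provider, so it is the last resort.
    if "CPUExecutionProvider" not in usable:
        usable.append("CPUExecutionProvider")
    return usable

requested = ["CoreMLExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]
available = ["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]
print(pick_providers(requested, available))
# ['CUDAExecutionProvider', 'CPUExecutionProvider'] -- matches the "Falling back to" line
```

The crash here happens one step later: `CUDAExecutionProvider` is in the available list, but its DLLs fail to load, so even the retry dies. Replacing `onnxruntime-gpu` with the CPU-only `onnxruntime` package may be a workaround until the CUDA/cuDNN versions are sorted out.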

BreezeWhite commented 1 year ago

Closing this since the issue isn't related to oemer itself.