Error occurred when executing InstantIDFaceAnalysis: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:891 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported. #205
Hello, I have installed onnxruntime-gpu==1.18.1, but it still reports a CUDA error, even though it runs normally on CPU. Below is my error message. Can you tell me how to get it running, or which command I should use to install the right package? Thank you very much!
Error occurred when executing InstantIDFaceAnalysis:
D:\a_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:891 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
File "G:\ComfyUI\ComfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "G:\ComfyUI\ComfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "G:\ComfyUI\ComfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "G:\ComfyUI\ComfyUI-aki\ComfyUI-aki-v1.3\custom_nodes\ComfyUI_InstantID\InstantID.py", line 217, in load_insight_face
model = FaceAnalysis(name="antelopev2", root=INSIGHTFACE_DIR, providers=[provider + 'ExecutionProvider',]) # alternative to buffalo_l
File "G:\ComfyUI\ComfyUI-aki\ComfyUI-aki-v1.3\python\lib\site-packages\insightface\app\face_analysis.py", line 31, in __init__
model = model_zoo.get_model(onnx_file, **kwargs)
File "G:\ComfyUI\ComfyUI-aki\ComfyUI-aki-v1.3\python\lib\site-packages\insightface\model_zoo\model_zoo.py", line 96, in get_model
model = router.get_model(providers=providers, provider_options=provider_options)
File "G:\ComfyUI\ComfyUI-aki\ComfyUI-aki-v1.3\python\lib\site-packages\insightface\model_zoo\model_zoo.py", line 40, in get_model
session = PickableInferenceSession(self.onnx_file, **kwargs)
File "G:\ComfyUI\ComfyUI-aki\ComfyUI-aki-v1.3\python\lib\site-packages\insightface\model_zoo\model_zoo.py", line 25, in __init__
super().__init__(model_path, **kwargs)
File "G:\ComfyUI\ComfyUI-aki\ComfyUI-aki-v1.3\python\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 432, in __init__
raise fallback_error from e
File "G:\ComfyUI\ComfyUI-aki\ComfyUI-aki-v1.3\python\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 427, in __init__
self._create_inference_session(self._fallback_providers, None)
File "G:\ComfyUI\ComfyUI-aki\ComfyUI-aki-v1.3\python\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
sess.initialize_session(providers, provider_options, disabled_optimizers)
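For anyone debugging this, a quick way to see whether onnxruntime can actually load the CUDA provider is to query the available providers directly, before involving InstantID or insightface. The sketch below is a minimal diagnostic, not part of the original report; `cuda_provider_status` is a hypothetical helper name, while `get_available_providers` is a real onnxruntime API.

```python
# Diagnostic sketch: report whether the CUDA execution provider is
# visible to onnxruntime. Degrades gracefully if onnxruntime is absent.
def cuda_provider_status():
    try:
        import onnxruntime as ort
    except ImportError:
        return "onnxruntime is not installed"
    providers = ort.get_available_providers()
    if "CUDAExecutionProvider" in providers:
        return "CUDAExecutionProvider is available"
    return f"CUDA provider missing; available providers: {providers}"

if __name__ == "__main__":
    print(cuda_provider_status())
```

If `CUDAExecutionProvider` is listed but session creation still fails with the error above, the mismatch is usually between the installed CUDA/cuDNN DLLs on PATH and what the onnxruntime-gpu wheel was built against; check the requirements page linked in the error for the exact CUDA and cuDNN versions your onnxruntime-gpu release expects.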