DreamLoveBetty opened this issue 3 weeks ago
I have changed the default dependency package to onnxruntime-gpu. If your conda environment already has a usable onnxruntime-gpu installed, it will not be overwritten (but you will need to manually uninstall onnxruntime).
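
For anyone unsure which build ends up being used, here is a minimal diagnostic sketch (it only uses the public onnxruntime API, nothing specific to this repo) to confirm which onnxruntime the embedded Python actually resolves and whether the CUDA provider is registered:

# Diagnostic sketch: report the installed onnxruntime build and whether the
# CUDA execution provider is registered in this environment.
import onnxruntime as ort

print("onnxruntime version:", ort.__version__)
print("available providers:", ort.get_available_providers())

# If CUDAExecutionProvider is missing here, the CPU-only 'onnxruntime' wheel is
# likely still shadowing 'onnxruntime-gpu' and needs to be uninstalled first.
if "CUDAExecutionProvider" not in ort.get_available_providers():
    print("CUDA provider not registered -- uninstall 'onnxruntime', keep only 'onnxruntime-gpu'.")
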
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "D:\Program\comfy_torch2.4\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Program\comfy_torch2.4\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Program\comfy_torch2.4\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\Program\comfy_torch2.4\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Program\comfy_torch2.4\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_HelloMeme\meme.py", line 155, in load_face_toolkits
    face_aligner = HelloCameraDemo(face_alignment_module=HelloFaceAlignment(gpu_id=gpu_id), reset=True)
                                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Program\comfy_torch2.4\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_HelloMeme\hellomeme\tools\hello_face_alignment.py", line 20, in __init__
    create_onnx_session(hf_hub_download('songkey/hello_group_facemodel', filename='hello_face_landmark.onnx'), gpu_id=gpu_id))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Program\comfy_torch2.4\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_HelloMeme\hellomeme\tools\utils.py", line 30, in create_onnx_session
    sess = onnxruntime.InferenceSession(onnx_path, providers=providers)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Program\comfy_torch2.4\ComfyUI_windows_portable\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 432, in __init__
    raise fallback_error from e
  File "D:\Program\comfy_torch2.4\ComfyUI_windows_portable\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 427, in __init__
    self._create_inference_session(self._fallback_providers, None)
  File "D:\Program\comfy_torch2.4\ComfyUI_windows_portable\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:891 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
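
The RuntimeError is raised by onnxruntime while it tries to create the CUDAExecutionProvider: CUDA_PATH points at a toolkit whose DLLs (or the matching cuDNN) cannot be loaded, and the automatic fallback attempt ends in the same provider error. As a temporary check that the model itself loads while the CUDA/cuDNN installation is being repaired, CPU execution can be forced. This is a hedged sketch and not the repo's create_onnx_session; the hf_hub_download arguments are taken from the traceback above:

# Workaround sketch (hypothetical helper, not the repo's create_onnx_session):
# load the same landmark model on CPU only, bypassing the broken CUDA provider.
import onnxruntime as ort
from huggingface_hub import hf_hub_download

onnx_path = hf_hub_download('songkey/hello_group_facemodel',
                            filename='hello_face_landmark.onnx')
sess = ort.InferenceSession(onnx_path, providers=['CPUExecutionProvider'])
print("loaded with providers:", sess.get_providers())

If this loads cleanly, the remaining work is matching the installed CUDA and cuDNN versions to the onnxruntime-gpu release, per the requirements page linked in the error message.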