lshqqytiger / stable-diffusion-webui-amdgpu


[Bug]: Since installing torch 2.3 instead of torch 2.2, the program often errors (ZLUDA) #463

Closed pinea00 closed 5 months ago

pinea00 commented 6 months ago


What happened?

Please make torch 2.2 the default for ZLUDA. Many extensions do not yet support torch 2.3, and only 2.2 lets the venv be shared with Forge or ComfyUI.
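For reference, a quick way to confirm which torch build a shared venv actually resolves (a minimal sketch; the commented values are examples, not guaranteed output):

```python
# Print the torch build the active venv resolves to. The values in the
# comments are examples only; they depend on the installed wheel.
import torch

print(torch.__version__)          # e.g. "2.2.2+cu118" or "2.3.0+cu118"
print(torch.version.cuda)         # CUDA toolkit the wheel targets, e.g. "11.8"
print(torch.cuda.is_available())  # True under a working ZLUDA setup
```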

Steps to reproduce the problem

1. Install the extension https://github.com/AUTOMATIC1111/stable-diffusion-webui-rembg.git
2. Run it (a standalone repro sketch follows below).
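The failure can also be reproduced outside the web UI, since building a rembg session goes through the same onnxruntime.InferenceSession initialization the extension hits. A hedged sketch; the "silueta" model name comes from the arguments in the log below, the rest is illustrative:

```python
# Standalone repro sketch: creating a rembg session initializes
# onnxruntime.InferenceSession, the call that fails in the extension.
from PIL import Image
import rembg

img = Image.new("RGBA", (64, 64))         # any small test image will do
session = rembg.new_session("silueta")    # model named in the log arguments
out = rembg.remove(img, session=session)  # raises when the CUDA EP is broken
```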

What should have happened?

It should run without the CUDA error.

What browsers do you use to access the UI?

Google Chrome

Sysinfo

sysinfo-2024-05-13-11-58.json

Console logs

EP Error D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "S:\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"
 when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
2024-05-13 19:44:29.2892321 [E:onnxruntime:, inference_session.cc:1799 onnxruntime::InferenceSession::Initialize::<lambda_197d3b7975b9bacd9690b0adb4064ca2>::operator ()] Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:121 onnxruntime::CudaCall D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:114 onnxruntime::CudaCall CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=BILL-GATZ ; file=D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_execution_provider.cc ; line=173 ; expr=cudnnSetStream(cudnn_handle_, stream);

*** Error completing request
*** Arguments: ('task(zvz3lum7t97w1uf)', 0.0, <PIL.Image.Image image mode=RGBA size=768x768 at 0x1ECC2CF7010>, None, '', '', True, True, 0.0, 4, 0.0, 512, 512, True, 'None', 'None', 0, False, 1, False, 1, 0, False, 0.5, 0.2, False, 0.9, 0.15, 0.5, False, False, 384, 768, 4096, 409600, 'Maximize area', 0.1, False, ['Horizontal'], False, ['Deepbooru'], True, 'silueta', False, False, 240, 10, 10) {}
    Traceback (most recent call last):
      File "S:\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
        self._create_inference_session(providers, provider_options, disabled_optimizers)
      File "S:\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 463, in _create_inference_session
        sess.initialize_session(providers, provider_options, disabled_optimizers)
    RuntimeError: D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "S:\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "S:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "S:\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "S:\stable-diffusion-webui\modules\postprocessing.py", line 131, in run_postprocessing_webui
        return run_postprocessing(*args, **kwargs)
      File "S:\stable-diffusion-webui\modules\postprocessing.py", line 71, in run_postprocessing
        scripts.scripts_postproc.run(initial_pp, args)
      File "S:\stable-diffusion-webui\modules\scripts_postprocessing.py", line 198, in run
        script.process(single_image, **process_args)
      File "S:\stable-diffusion-webui\extensions\stable-diffusion-webui-rembg\scripts\postprocessing_rembg.py", line 66, in process
        session=rembg.new_session(model),
      File "S:\stable-diffusion-webui\venv\lib\site-packages\rembg\session_factory.py", line 26, in new_session
        return session_class(model_name, sess_opts, providers, *args, **kwargs)
      File "S:\stable-diffusion-webui\venv\lib\site-packages\rembg\sessions\base.py", line 31, in __init__
        self.inner_session = ort.InferenceSession(
      File "S:\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 430, in __init__
        raise fallback_error from e
      File "S:\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 425, in __init__
        self._create_inference_session(self._fallback_providers, None)
      File "S:\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 463, in _create_inference_session
        sess.initialize_session(providers, provider_options, disabled_optimizers)
    onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:121 onnxruntime::CudaCall D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:114 onnxruntime::CudaCall CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=BILL-GATZ ; file=D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_execution_provider.cc ; line=173 ; expr=cudnnSetStream(cudnn_handle_, stream);
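The log shows two failures in sequence: the TensorRT provider DLL fails to load (LoadLibrary error 126), onnxruntime falls back to the CUDA provider, and that one then dies in cudnnSetStream with CUDNN_STATUS_INTERNAL_ERROR. As a workaround sketch, rembg can be pinned to the CPU provider so neither GPU provider is attempted; the traceback above shows new_session forwarding a providers argument through to onnxruntime.InferenceSession:

```python
# Workaround sketch: skip the TensorRT/CUDA providers and run rembg's
# ONNX model on the CPU. Slower, but avoids the broken GPU providers.
import onnxruntime as ort
import rembg

print(ort.get_available_providers())  # providers this onnxruntime build offers
session = rembg.new_session("silueta", providers=["CPUExecutionProvider"])
```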

Additional information

When I reinstall torch 2.2, it works normally again, even when sharing the venv with Forge and ComfyUI.

lshqqytiger commented 6 months ago

Specify --override-torch=2.2.2.
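For a persistent setup, that flag can sit alongside any other launch flags on the COMMANDLINE_ARGS line in webui-user.bat (assuming the stock launcher, which reads that variable), presumably pinning the torch version the launcher installs to 2.2.2 instead of upgrading it to 2.3.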

pinea00 commented 5 months ago

Sorry, I later reinstalled the venv with torch 2.3+cu118 and it worked completely normally. The problem was not caused by torch 2.3.