AUTOMATIC1111 / stable-diffusion-webui-rembg

Removes backgrounds from pictures. Extension for webui.

Bug since recent update #35

Open wierover opened 6 months ago

wierover commented 6 months ago

Hi,

I used this extension successfully a couple of weeks ago. The only thing that has changed on my system since then is that I updated the extensions within Stable Diffusion; no other update or change has been made since I last used this extension.

I found other people reporting the same issue, but none of the suggested fixes solved it.

Can this be fixed? Or can I download an older version of this extension somewhere?

Windows 10, AMD video card.

See error below:

2024-02-25 19:25:19.1724639 [E:onnxruntime:Default, provider_bridge_ort.cc:1532 onnxruntime::TryGetProviderInfo_TensorRT] D:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1209 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "E:\user\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"

EP Error EP Error D:\a_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:456 onnxruntime::python::RegisterTensorRTPluginsAsCustomOps Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported. when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
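
For reference, a quick way to check which execution providers the onnxruntime build in the webui venv actually exposes is a small script like the one below. This is only a diagnostic sketch, not part of the extension; on an AMD-only Windows machine the CUDA/TensorRT providers cannot actually run even if an onnxruntime-gpu build lists them.

```python
# Diagnostic sketch (not part of the extension): list the execution providers
# compiled into the onnxruntime package installed in the webui venv.
# Seeing CUDAExecutionProvider/TensorrtExecutionProvider listed only means the
# GPU build is installed, not that the machine can actually use them.
import onnxruntime as ort

print("onnxruntime version:", ort.__version__)
print("device:", ort.get_device())                    # 'CPU' or 'GPU' build
print("available providers:", ort.get_available_providers())
```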


Error completing request
Arguments: ('task(mmex2la312idy3j)', 0, <PIL.Image.Image image mode=RGB size=576x720 at 0x2D382187FA0>, None, '', '', True, 0, 1, 512, 512, True, 'None', 'None', 0, False, 1, False, 1, 0, False, 0.9, 0.15, 0.5, False, False, 384, 768, 4096, 409600, 'Maximize area', 0.1, False, ['Deepbooru'], False, ['Horizontal'], False, 0.5, 0.2, None, False, '0', '0', 'inswapper_128.onnx', 'CodeFormer', 1, True, 'None', 1, 1, 1, 0, 0, 0.5, 'CPU', False, 0, 'None', '', None, False, False, 0.5, 0, True, 'u2net_human_seg', False, False, 240, 10, 10) {}
Traceback (most recent call last):
  File "E:\user\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "E:\user\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 469, in _create_inference_session
    self._register_ep_custom_ops(session_options, providers, provider_options, available_providers)
  File "E:\user\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 516, in _register_ep_custom_ops
    C.register_tensorrt_plugins_as_custom_ops(session_options, provider_options[i])
RuntimeError: D:\a_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:456 onnxruntime::python::RegisterTensorRTPluginsAsCustomOps Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "E:\user\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "E:\user\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "E:\user\stable-diffusion-webui\modules\postprocessing.py", line 132, in run_postprocessing_webui
    return run_postprocessing(*args, **kwargs)
  File "E:\user\stable-diffusion-webui\modules\postprocessing.py", line 73, in run_postprocessing
    scripts.scripts_postproc.run(initial_pp, args)
  File "E:\user\stable-diffusion-webui\modules\scripts_postprocessing.py", line 196, in run
    script.process(single_image, **process_args)
  File "E:\user\stable-diffusion-webui\extensions\stable-diffusion-webui-rembg\scripts\postprocessing_rembg.py", line 66, in process
    session=rembg.new_session(model),
  File "E:\user\stable-diffusion-webui\venv\lib\site-packages\rembg\session_factory.py", line 26, in new_session
    return session_class(model_name, sess_opts, providers, *args, **kwargs)
  File "E:\user\stable-diffusion-webui\venv\lib\site-packages\rembg\sessions\base.py", line 31, in __init__
    self.inner_session = ort.InferenceSession(
  File "E:\user\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 432, in __init__
    raise fallback_error from e
  File "E:\user\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 427, in __init__
    self._create_inference_session(self._fallback_providers, None)
  File "E:\user\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:121 onnxruntime::CudaCall D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:114 onnxruntime::CudaCall CUDA failure 35: CUDA driver version is insufficient for CUDA runtime version ; GPU=744336320 ; hostname=DESKTOP-0G6VIHV ; file=D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_execution_provider.cc ; line=245 ; expr=cudaSetDevice(info_.device_id);
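
A possible workaround sketch, outside the extension: the session_factory.py call in the traceback shows rembg forwarding a providers list to onnxruntime, so creating the session with only the CPU execution provider should skip the TensorRT/CUDA paths entirely. The model name below is the one from the log above; the file paths are placeholders.

```python
# Workaround sketch (an assumption based on the traceback, not the extension's code):
# force rembg onto the CPU execution provider so TensorRT/CUDA are never tried.
from PIL import Image
import rembg

session = rembg.new_session("u2net_human_seg", providers=["CPUExecutionProvider"])

img = Image.open("input.png")                 # placeholder input path
result = rembg.remove(img, session=session)   # returns an RGBA PIL image
result.save("output.png")                     # placeholder output path
```

Alternatively, since the final error shows the CUDA provider failing outright on this machine, swapping the onnxruntime-gpu package in the venv for the plain CPU onnxruntime package may avoid the GPU code path altogether; that is an assumption, not something verified here.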