picobyte / stable-diffusion-webui-wd14-tagger

Labeling extension for Automatic1111's Web UI

CUDA 12.1 adaptation problem #81

Closed · wzgrx closed this 8 months ago

wzgrx commented 8 months ago

CUDA 12.1 brings significant performance improvements; I hope this extension can be adapted to support it.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "G:\Soft\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "G:\Soft\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "G:\Soft\stable-diffusion-webui\extensions\stable-diffusion-webui-wd14-tagger\tagger\ui.py", line 113, in on_interrogate_image_submit
    interrogator.interrogate_image(image)
  File "G:\Soft\stable-diffusion-webui\extensions\stable-diffusion-webui-wd14-tagger\tagger\interrogator.py", line 150, in interrogate_image
    data = ('', '', fi_key) + self.interrogate(image)
  File "G:\Soft\stable-diffusion-webui\extensions\stable-diffusion-webui-wd14-tagger\tagger\interrogator.py", line 448, in interrogate
    self.load()
  File "G:\Soft\stable-diffusion-webui\extensions\stable-diffusion-webui-wd14-tagger\tagger\interrogator.py", line 433, in load
    self.model = ort.InferenceSession(model_path,
  File "G:\Soft\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 430, in __init__
    raise fallback_error from e
  File "G:\Soft\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 425, in __init__
    self._create_inference_session(self._fallback_providers, None)
  File "G:\Soft\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 463, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:739 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.

Traceback (most recent call last):
  File "G:\Soft\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "G:\Soft\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1434, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "G:\Soft\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1297, in postprocess_data
    self.validate_outputs(fn_index, predictions)  # type: ignore
  File "G:\Soft\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1272, in validate_outputs
    raise ValueError(
ValueError: An event handler (on_interrogate_image_submit) didn't receive enough output values (needed: 7, received: 3).
Wanted outputs:
    [state, html, html, label, label, label, html]
Received outputs:
    [None, "", "RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:739 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported. Time taken: 2.7 sec. A: 0.49 GB, R: 0.53 GB, Sys: 1.5/6 GB (25.5%)"]

RoelKluin commented 8 months ago

"Adaptable" apparently does not imply backwards compatible, though I suspect no one in the Python realm understands what that means.

munsy0227 commented 8 months ago

It seems that onnxruntime does not support CUDA 12.1 yet. As a temporary workaround, you can install ort-nightly and ort-nightly-gpu instead.

rltgjqmcpgjadyd commented 8 months ago

Detailed solution (steps 1 and 2 are consolidated in the sketch after this list):

  1. Open extensions\stable-diffusion-webui-wd14-tagger\requirements.txt, comment out every onnxruntime line, and save the file
  2. Uninstall all installed onnxruntime packages
  3. pip install ort-nightly==1.17.0.dev20231102007
  4. pip install ort-nightly-gpu
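
Consolidating steps 1 and 2 (a sketch, assuming the default webui layout and that the extension's requirements.txt pins plain onnxruntime lines; the exact lines may differ):

    REM Step 1 is a manual edit: prefix every onnxruntime line in
    REM extensions\stable-diffusion-webui-wd14-tagger\requirements.txt with '#'
    REM so the stable wheel is not reinstalled on the next webui startup.
    REM Step 2: from the stable-diffusion-webui folder, inside its venv,
    REM remove the common onnxruntime variants (add others if you installed them).
    venv\Scripts\activate.bat
    pip uninstall -y onnxruntime onnxruntime-gpu
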
wzgrx commented 8 months ago

> Detailed solution
>
>   1. Open extensions\stable-diffusion-webui-wd14-tagger\requirements.txt, comment out every onnxruntime line, and save the file
>   2. Uninstall all installed onnxruntime packages
>   3. pip install ort-nightly==1.17.0.dev20231102007
>   4. pip install ort-nightly-gpu

    ERROR: Could not find a version that satisfies the requirement ort-nightly==1.17.0.dev20231102007 (from versions: none)
    ERROR: No matching distribution found for ort-nightly==1.17.0.dev20231102007

rltgjqmcpgjadyd commented 7 months ago

See https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly
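
That feed is not the default PyPI index, which is why the plain pip install above reported "no matching distribution". A sketch of the two installs against the feed (the index URL below is the conventional pip endpoint for this Azure DevOps feed and is an assumption here; verify it on the feed page):

    REM Install the nightly wheels from the ORT-Nightly feed rather than PyPI.
    pip install ort-nightly==1.17.0.dev20231102007 --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/
    pip install ort-nightly-gpu --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/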