lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0
1.86k stars 191 forks

[Bug]: ControlNet onnxruntime #516

Closed hellnmi closed 3 months ago

hellnmi commented 3 months ago

What happened?

After upgrading to v1.10.1-amd-2-g395ce8dc, ControlNet ip-adapter_face_id_plus (ip-adapter-faceid-plusv2_sd15 [6e14fc1a]) stopped working. There was no such problem on v1.9.3-amd-30-gee49046.

Steps to reproduce the problem

  1. Start Stable Diffusion
  2. ControlNet
  3. IP-Adapter
  4. ip-adapter_face_id_plus (ip-adapter-faceid-plusv2_sd15 [6e14fc1a])
  5. Error

What should have happened?

Image generation should have started.

What browsers do you use to access the UI?

Mozilla Firefox

Sysinfo

[sysinfo-2024-08-07-14-43.json](https://github.com/user-attachments/files/16530285/sysinfo-2024-08-07-14-43.json)

Console logs

2024-08-07 17:44:02,751 - ControlNet - INFO - Preview Resolution = 512
2024-08-07 17:44:02.9889059 [E:onnxruntime:, inference_session.cc:2045 onnxruntime::InferenceSession::Initialize::<lambda_ac1b736d24ef6ddd1d25cf2738b937a9>::operator ()] Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:123 onnxruntime::CudaCall D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:116 onnxruntime::CudaCall CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=FRANKIE ; file=D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_execution_provider.cc ; line=182 ; expr=cudnnSetStream(cudnn_handle_, stream); 

Traceback (most recent call last):
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\controlnet_ui\controlnet_ui_group.py", line 951, in run_annotator
    result = preprocessor.cached_call(
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\supported_preprocessor.py", line 198, in cached_call
    result = self._cached_call(input_image, *args, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\utils.py", line 82, in decorated_func
    return cached_func(*args, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\utils.py", line 66, in cached_func
    return func(*args, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\supported_preprocessor.py", line 211, in _cached_call
    return self(*args, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\preprocessor\legacy\legacy_preprocessors.py", line 105, in __call__
    result, is_image = self.call_function(
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\preprocessor\legacy\processor.py", line 768, in face_id_plus
    face_embed, _ = g_insight_face_model.run_model(img)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\preprocessor\legacy\processor.py", line 696, in run_model     
    self.load_model()
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\preprocessor\legacy\processor.py", line 688, in load_model
    self.model = FaceAnalysis(
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\insightface\app\face_analysis.py", line 31, in __init__
    model = model_zoo.get_model(onnx_file, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 96, in get_model
    model = router.get_model(providers=providers, provider_options=provider_options)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 40, in get_model
    session = PickableInferenceSession(self.onnx_file, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 25, in __init__
    super().__init__(model_path, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__  
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:123 onnxruntime::CudaCall D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:116 onnxruntime::CudaCall CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=FRANKIE ; file=D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_execution_provider.cc ; line=182 ; expr=cudnnSetStream(cudnn_handle_, stream);
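The traceback shows insightface's `FaceAnalysis` forwarding its `providers` kwarg down to an ONNX Runtime `InferenceSession`, which then fails initializing the CUDA execution provider (cuDNN) on a machine without a working CUDA stack. A hedged sketch of the fallback idea (`pick_providers` is a hypothetical helper, not part of insightface or onnxruntime):

```python
def pick_providers(available, preferred=("CUDAExecutionProvider", "CPUExecutionProvider")):
    """Keep only the preferred ONNX Runtime providers this machine actually
    offers, always ending with the CPU provider as a guaranteed fallback."""
    chosen = [p for p in preferred if p in available]
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")  # never end up with no provider
    return chosen

# With onnxruntime installed, availability would be queried roughly like:
#   import onnxruntime as ort
#   providers = pick_providers(ort.get_available_providers())
#   model = FaceAnalysis(name="buffalo_l", providers=providers)
# (model name and call shape shown for illustration only)

print(pick_providers(["CPUExecutionProvider"]))  # → ['CPUExecutionProvider']
```

On a box where only the CPU provider is available, the CUDA provider is filtered out before session creation instead of crashing during cuDNN initialization.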

Additional information

No response

CS1o commented 3 months ago

Hey, Pinokio doesn't support the AMD webui with ZLUDA. You have to follow my install guide (linked below) to get everything working. ZLUDA is much faster than DirectML, and your GPU is supported.

AMD Guides: https://github.com/CS1o/Stable-Diffusion-Info/wiki/Installation-Guides

hellnmi commented 3 months ago

Thank you very much, kind sir.

I had previously installed via Pinokio and it worked fine. I was using version 1.9.3-amd-30-gee49046 and there were no issues. However, after upgrading to version 1.10.1-amd-2-g395ce8dc, the ControlNet IP Adapter stopped functioning.

Anyway, thank you for your help. I followed your instructions to install SD and it worked perfectly.

One more question: `--skip-ort` skips the onnxruntime installation — is it not needed for this build?

CS1o commented 3 months ago

No problem, and nope, onnxruntime is not needed for the ZLUDA version.
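For reference, the flag goes into the webui launch arguments; a sketch of a `webui-user.bat` for a ZLUDA setup (only `--skip-ort` comes from this thread — `--use-zluda` and any other flags are assumptions that depend on the guide and your install):

```bat
rem webui-user.bat -- illustrative sketch, not a verified config
@echo off
set COMMANDLINE_ARGS=--use-zluda --skip-ort
call webui.bat
```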

hellnmi commented 3 months ago

I get it, thank you