Mikubill / sd-webui-controlnet

WebUI extension for ControlNet
GNU General Public License v3.0

[Bug]: ControlNet onnxruntime #3029

Closed: hellnmi closed this issue 3 months ago

hellnmi commented 3 months ago

Is there an existing issue for this?

What happened?

When I use ControlNet with an IP-Adapter, I get an error.

Steps to reproduce the problem

  1. Start Stable Diffusion
  2. ControlNet
  3. IP-Adapter
  4. ip-adapter_face_id_plus (ip-adapter-faceid-plusv2_sd15 [6e14fc1a])
  5. Error

What should have happened?

Image generation should start.

Commit where the problem happens

webui: version: [v1.10.1-amd-2-g395ce8dc] controlnet: ControlNet v1.1.455

What browsers do you use to access the UI?

Mozilla Firefox

Command Line Arguments

set COMMANDLINE_ARGS=--no-download-sd-model --no-half-vae --api --use-zluda

List of enabled extensions

sd-webui-controlnet

Console logs

2024-08-06 21:44:05,852 - ControlNet - INFO - Preview Resolution = 512
2024-08-06 21:44:06.1261400 [E:onnxruntime:, inference_session.cc:2045 onnxruntime::InferenceSession::Initialize::<lambda_ac1b736d24ef6ddd1d25cf2738b937a9>::operator ()] Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:123 onnxruntime::CudaCall D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:116 onnxruntime::CudaCall CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=FRANKIE ; file=D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_execution_provider.cc ; line=182 ; expr=cudnnSetStream(cudnn_handle_, stream); 

Traceback (most recent call last):
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\controlnet_ui\controlnet_ui_group.py", line 951, in run_annotator
    result = preprocessor.cached_call(
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\supported_preprocessor.py", line 198, in cached_call
    result = self._cached_call(input_image, *args, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\utils.py", line 82, in decorated_func
    return cached_func(*args, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\utils.py", line 66, in cached_func
    return func(*args, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\supported_preprocessor.py", line 211, in _cached_call
    return self(*args, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\preprocessor\legacy\legacy_preprocessors.py", line 105, in __call__
    result, is_image = self.call_function(
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\preprocessor\legacy\processor.py", line 768, in face_id_plus  
    face_embed, _ = g_insight_face_model.run_model(img)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\preprocessor\legacy\processor.py", line 696, in run_model     
    self.load_model()
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\preprocessor\legacy\processor.py", line 688, in load_model    
    self.model = FaceAnalysis(
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\insightface\app\face_analysis.py", line 31, in __init__
    model = model_zoo.get_model(onnx_file, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 96, in get_model
    model = router.get_model(providers=providers, provider_options=provider_options)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 40, in get_model
    session = PickableInferenceSession(self.onnx_file, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 25, in __init__
    super().__init__(model_path, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__  
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:123 onnxruntime::CudaCall D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:116 onnxruntime::CudaCall CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=FR ; file=D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_execution_provider.cc ; line=182 ; expr=cudnnSetStream(cudnn_handle_, stream);

Additional information

No response

picv888 commented 3 months ago

I ran into a similar situation. I uninstalled onnxruntime-gpu, installed onnxruntime, and then it ran normally: pip uninstall onnxruntime-gpu, then pip install onnxruntime
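To see why this workaround helps, note that the traceback fails while onnxruntime initializes the CUDA execution provider (the cudnnSetStream call). The plain onnxruntime package only ships the CPU provider, so insightface never touches cuDNN. A minimal sketch below reproduces the same idea outside the webui by explicitly requesting the CPU provider; the model path is a hypothetical placeholder, not a path taken from this report.

```python
# Minimal sketch, assuming a local insightface .onnx model is available.
# The path below is a hypothetical example.
import onnxruntime as ort

model_path = "models/insightface/models/buffalo_l/det_10g.onnx"  # placeholder

# With onnxruntime-gpu installed, insightface requests the CUDA provider and
# session creation fails during cudnnSetStream (the error in the logs above).
# Requesting only the CPU provider skips the cuDNN setup entirely.
session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
print(session.get_providers())  # expected: ['CPUExecutionProvider']
```

The trade-off is that the FaceID preprocessor then runs insightface on the CPU, which is slower but avoids the failing CUDA/cuDNN path.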

hellnmi commented 3 months ago

Thanks