Closed hellnmi closed 3 months ago

What happened?
When I use ControlNet with an IP Adapter, I get an error.

What should have happened?
Image generation should start.

Commit where the problem happens
webui: version: [v1.10.1-amd-2-g395ce8dc] controlnet: ControlNet v1.1.455

What browsers do you use to access the UI?
Mozilla Firefox

Command Line Arguments
set COMMANDLINE_ARGS=--no-download-sd-model --no-half-vae --api --use-zluda

List of enabled extensions
sd-webui-controlnet

Console logs
2024-08-06 21:44:05,852 - ControlNet - INFO - Preview Resolution = 512
2024-08-06 21:44:06.1261400 [E:onnxruntime:, inference_session.cc:2045 onnxruntime::InferenceSession::Initialize::<lambda_ac1b736d24ef6ddd1d25cf2738b937a9>::operator ()] Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:123 onnxruntime::CudaCall D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:116 onnxruntime::CudaCall CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=FRANKIE ; file=D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_execution_provider.cc ; line=182 ; expr=cudnnSetStream(cudnn_handle_, stream);
Traceback (most recent call last):
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\controlnet_ui\controlnet_ui_group.py", line 951, in run_annotator
    result = preprocessor.cached_call(
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\supported_preprocessor.py", line 198, in cached_call
    result = self._cached_call(input_image, *args, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\utils.py", line 82, in decorated_func
    return cached_func(*args, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\utils.py", line 66, in cached_func
    return func(*args, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\supported_preprocessor.py", line 211, in _cached_call
    return self(*args, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\preprocessor\legacy\legacy_preprocessors.py", line 105, in __call__
    result, is_image = self.call_function(
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\preprocessor\legacy\processor.py", line 768, in face_id_plus
    face_embed, _ = g_insight_face_model.run_model(img)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\preprocessor\legacy\processor.py", line 696, in run_model
    self.load_model()
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\extensions\sd-webui-controlnet\scripts\preprocessor\legacy\processor.py", line 688, in load_model
    self.model = FaceAnalysis(
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\insightface\app\face_analysis.py", line 31, in __init__
    model = model_zoo.get_model(onnx_file, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 96, in get_model
    model = router.get_model(providers=providers, provider_options=provider_options)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 40, in get_model
    session = PickableInferenceSession(self.onnx_file, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 25, in __init__
    super().__init__(model_path, **kwargs)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\Users\helln\pinokio\api\automatic1111.git\app\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:123 onnxruntime::CudaCall D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:116 onnxruntime::CudaCall CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=FR ; file=D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_execution_provider.cc ; line=182 ; expr=cudnnSetStream(cudnn_handle_, stream);
Additional information
No response
I ran into a similar situation. I uninstalled onnxruntime-gpu and installed onnxruntime, and then it ran normally:
pip uninstall onnxruntime-gpu
pip install onnxruntime
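Before swapping packages, it can help to confirm which onnxruntime variant is actually installed in the venv, since having onnxruntime-gpu present is what routes insightface onto the failing CUDA provider. A minimal stdlib-only sketch (the helper name `installed_onnxruntime_variants` is my own, not from the thread):

```python
from importlib import metadata


def installed_onnxruntime_variants():
    """Return (name, version) for whichever onnxruntime packages are installed."""
    variants = []
    for name in ("onnxruntime", "onnxruntime-gpu"):
        try:
            variants.append((name, metadata.version(name)))
        except metadata.PackageNotFoundError:
            # Package not installed; skip it.
            pass
    return variants


if __name__ == "__main__":
    print(installed_onnxruntime_variants())
```

If the output lists onnxruntime-gpu, the uninstall/reinstall step in the comment above applies; run the check inside the webui's venv, not the system Python.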
Thanks