nullquant / ComfyUI-BrushNet

ComfyUI BrushNet nodes
Apache License 2.0

Error occurred when executing BrushNetLoader #108

Closed rezponze closed 3 weeks ago

rezponze commented 1 month ago

Hi!

The node installed without problems and the models are placed as per your instructions. But loading and running the BrushNet_SDXL_basic.json workflow gives me the error below. Any tips? I'm running on a 3060 with 12GB VRAM.

Error occurred when executing BrushNetLoader:

CUDA error: operation not supported CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

File "F:\StabilityMatrix\Data\Packages\ComfyUI-dev\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "F:\StabilityMatrix\Data\Packages\ComfyUI-dev\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "F:\StabilityMatrix\Data\Packages\ComfyUI-dev\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "F:\StabilityMatrix\Data\Packages\ComfyUI-dev\custom_nodes\ComfyUI-BrushNet\brushnet_nodes.py", line 105, in brushnet_loading
    brushnet_model = load_checkpoint_and_dispatch(
File "F:\StabilityMatrix\Data\Packages\ComfyUI-dev\venv\lib\site-packages\accelerate\big_modeling.py", line 598, in load_checkpoint_and_dispatch
    device_map = infer_auto_device_map(
File "F:\StabilityMatrix\Data\Packages\ComfyUI-dev\venv\lib\site-packages\accelerate\utils\modeling.py", line 1121, in infer_auto_device_map
    max_memory = get_max_memory(max_memory)
File "F:\StabilityMatrix\Data\Packages\ComfyUI-dev\venv\lib\site-packages\accelerate\utils\modeling.py", line 825, in get_max_memory
    _ = torch.tensor([0], device=i)

nullquant commented 1 month ago

Try passing the --disable-cuda-malloc argument to ComfyUI when you start it. If the error persists, please post the full ComfyUI log. Also run nvidia-smi and post the result, please.
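For reference, a minimal launch sketch of the steps above (paths and the plain `python main.py` invocation are illustrative; a StabilityMatrix install launches ComfyUI through its own wrapper, so put the flag in that package's extra launch arguments instead):

```shell
# Start ComfyUI with PyTorch's default caching allocator instead of the
# cudaMallocAsync allocator, which some driver/GPU combinations reject
# with "CUDA error: operation not supported".
python main.py --disable-cuda-malloc

# Check the driver version, GPU model, and VRAM usage to include in the report.
nvidia-smi
```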

rezponze commented 1 month ago

--disable-cuda-malloc worked. Thanks!

HSJDZNM commented 3 weeks ago

I added "--disable-cuda-malloc" and it still didn't work, even though I was able to use it yesterday but not this morning. Below is a screenshot of my console and interface log. [Screenshot 2024-06-07 091731]

HSJDZNM commented 3 weeks ago

My graphics card has 80GB of VRAM, so I don't think it's an issue with my graphics card. I'm running it on GPU 0. [Screenshot 2024-06-07 092524]

HSJDZNM commented 3 weeks ago

No worries, a restart solved my weird problem.