XLabs-AI / x-flux-comfyui

Apache License 2.0
1.06k stars · 70 forks

Error occurred when executing XlabsSampler: Allocation on device #36

Open linjian-ufo opened 2 months ago

linjian-ufo commented 2 months ago

Error occurred when executing XlabsSampler:

Allocation on device


File "D:\ComfyUI_windows_pytorch2.2.0_nvidia_cuda121_xformers0.0.23or_cpu\ComfyUI\execution.py", line 152, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\ComfyUI_windows_pytorch2.2.0_nvidia_cuda121_xformers0.0.23or_cpu\ComfyUI\execution.py", line 82, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\ComfyUI_windows_pytorch2.2.0_nvidia_cuda121_xformers0.0.23or_cpu\ComfyUI\execution.py", line 75, in map_node_over_list
  results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\ComfyUI_windows_pytorch2.2.0_nvidia_cuda121_xformers0.0.23or_cpu\ComfyUI\custom_nodes\x-flux-comfyui\nodes.py", line 320, in sampling
  inmodel.diffusion_model.to(device)
File "D:\ComfyUI_windows_pytorch2.2.0_nvidia_cuda121_xformers0.0.23or_cpu\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1173, in to
  return self._apply(convert)
File "D:\ComfyUI_windows_pytorch2.2.0_nvidia_cuda121_xformers0.0.23or_cpu\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
  module._apply(fn)
File "D:\ComfyUI_windows_pytorch2.2.0_nvidia_cuda121_xformers0.0.23or_cpu\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
  module._apply(fn)
File "D:\ComfyUI_windows_pytorch2.2.0_nvidia_cuda121_xformers0.0.23or_cpu\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
  module._apply(fn)
File "D:\ComfyUI_windows_pytorch2.2.0_nvidia_cuda121_xformers0.0.23or_cpu\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 804, in _apply
  param_applied = fn(param)
File "D:\ComfyUI_windows_pytorch2.2.0_nvidia_cuda121_xformers0.0.23or_cpu\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1159, in convert
  return t.to(
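For context, the traceback shows the failure happens when the node moves the diffusion model onto the GPU (`inmodel.diffusion_model.to(device)` in `nodes.py`), i.e. a CUDA out-of-memory condition at model-load time. A minimal, hypothetical mitigation sketch (the helper name is my own; `torch.cuda.empty_cache()` is a real PyTorch API) that releases cached VRAM before loading:

```python
# Hypothetical mitigation sketch: release PyTorch's cached (but unused)
# CUDA allocations before the node moves the diffusion model to the GPU.
# This only helps if other models/tensors have already been freed in Python.
import importlib


def free_cached_vram() -> bool:
    """Best-effort: empty PyTorch's CUDA caching allocator.

    Returns True if a cache flush was performed, False otherwise
    (torch missing or no CUDA device visible).
    """
    try:
        torch = importlib.import_module("torch")
    except ImportError:
        return False  # torch not installed in this environment
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached blocks to the driver
        return True
    return False


free_cached_vram()
```

Note this does not shrink the model itself; if the model's weights simply exceed available VRAM, a smaller quantization is the only real fix.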

Vovanm88 commented 2 months ago

Do you have a GPU? Did you install torch with CUDA support?
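Both points can be checked from the embedded Python with a short sketch like the following (the helper name is made up; `torch.cuda.is_available()` and `torch.cuda.mem_get_info()` are real PyTorch APIs):

```python
# Sanity-check sketch: does this torch build see a CUDA GPU, and how much
# VRAM is free? Run with the same python_embeded interpreter ComfyUI uses.
import importlib


def cuda_report() -> str:
    """Return a one-line status string about CUDA availability."""
    try:
        torch = importlib.import_module("torch")
    except ImportError:
        return "torch not installed"
    if not torch.cuda.is_available():
        return "CUDA not available (CPU-only torch build, or no GPU/driver)"
    free, total = torch.cuda.mem_get_info()  # bytes: (free, total)
    return f"CUDA OK: {free / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB"


print(cuda_report())
```

If this reports CUDA as unavailable, the error is environmental rather than an x-flux-comfyui bug; if it reports only a few GiB free, it is an ordinary out-of-memory situation.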

tox1man commented 2 months ago

I have the same issue. RTX 4070s

b0o commented 1 month ago

Same issue on RTX 4080. Tried the low-memory mode with the gguf model but that didn't help.

b0o commented 1 month ago

Update: I was previously trying to use flux1-dev-Q8_0.gguf, which caused the OOM; I just tested flux1-dev-Q4_0.gguf and it worked.
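That matches a rough back-of-envelope estimate: FLUX.1-dev has on the order of 12B transformer parameters, so the Q8_0 weights alone need roughly twice the VRAM of Q4_0. A sketch of the arithmetic (bits-per-weight figures are approximate, since gguf quantization blocks carry per-block scale overhead):

```python
# Back-of-envelope weight-memory estimate for FLUX.1-dev under different
# gguf quantizations. Parameter count and bits-per-weight are approximate.
def weight_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GiB for a given quantization."""
    return n_params * bits_per_weight / 8 / 2**30


N = 12e9  # approximate FLUX.1-dev transformer parameter count

for name, bpw in [("fp16", 16.0), ("Q8_0", 8.5), ("Q4_0", 4.5)]:
    print(f"{name}: ~{weight_gib(N, bpw):.1f} GiB")
```

On a 16 GiB card like the RTX 4080 this makes the observed behavior plausible: Q8_0 weights plus text encoders, VAE, and activations can overflow VRAM, while Q4_0 leaves comfortable headroom.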

741MiaMelano commented 2 weeks ago

I'm having the same problem with my RTX 4070.