XLabs-AI / x-flux-comfyui

Apache License 2.0
836 stars 58 forks

torch.cuda.OutOfMemoryError #18

Open dinusha94 opened 1 month ago

dinusha94 commented 1 month ago

Are there any CUDA memory requirements for the XLabs sampler? I get the following error when running the sampler, even though the plain Flux text-to-image workflows run successfully on the same machine.

I am using a Tesla T4 16GB GPU

Error log

Exception in thread Thread-4 (prompt_worker):
Traceback (most recent call last):
  File "/home/usr_9110799_ulta_com/ComfyUI/execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/home/usr_9110799_ulta_com/ComfyUI/execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/home/usr_9110799_ulta_com/ComfyUI/execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/home/usr_9110799_ulta_com/ComfyUI/custom_nodes/x-flux-comfyui/nodes.py", line 310, in sampling
    inmodel.diffusion_model.to(device)
  File "/home/usr_9110799_ulta_com/.pyenv/versions/3.10.6/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1160, in to
    return self._apply(convert)
  File "/home/usr_9110799_ulta_com/.pyenv/versions/3.10.6/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  File "/home/usr_9110799_ulta_com/.pyenv/versions/3.10.6/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  File "/home/usr_9110799_ulta_com/.pyenv/versions/3.10.6/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/home/usr_9110799_ulta_com/.pyenv/versions/3.10.6/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 833, in _apply
    param_applied = fn(param)
  File "/home/usr_9110799_ulta_com/.pyenv/versions/3.10.6/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1158, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated     : 13.87 GiB
Requested               : 27.00 MiB
Device limit            : 14.76 GiB
Free (according to CUDA): 32.75 MiB
PyTorch limit (set by user-supplied memory fraction) 
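For context, the log shows 13.87 GiB already allocated against a 14.76 GiB device limit, so a further 27 MiB request fails. A rough back-of-envelope check explains why a T4 16GB is tight: loading a Flux-scale diffusion transformer's weights alone can approach or exceed the card's VRAM. The sketch below is illustrative arithmetic only; the ~12B parameter count for the FLUX.1 diffusion model is an assumption, and actual usage also includes activations, the text encoders, and the VAE.

```python
# Back-of-envelope VRAM needed just to hold model weights on the GPU.
# FLUX_PARAMS (~12B) and the bytes-per-parameter values are assumptions
# for illustration, not figures reported by x-flux-comfyui.

def weight_vram_gib(n_params: float, bytes_per_param: int) -> float:
    """Approximate GiB required to store the weights alone."""
    return n_params * bytes_per_param / 1024**3

FLUX_PARAMS = 12e9  # assumed parameter count for the diffusion transformer

for dtype_name, nbytes in [("fp32", 4), ("bf16/fp16", 2), ("fp8", 1)]:
    print(f"{dtype_name}: {weight_vram_gib(FLUX_PARAMS, nbytes):.1f} GiB")
```

Under these assumptions, bf16 weights alone would need roughly 22 GiB, which already exceeds the 14.76 GiB limit reported in the log, so a full `.to(device)` of the diffusion model cannot succeed without quantization or offloading.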
Vovanm88 commented 1 month ago

Wait for offloading support.
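For readers unfamiliar with the term, "offloading" here means keeping the model's weights in CPU RAM and moving only the part currently being executed onto the GPU. A minimal sketch of the idea, using hypothetical names (`TinyBlock`, `run_offloaded`) that are not part of x-flux-comfyui's API:

```python
# Sketch of per-block CPU offloading: only one block's weights occupy
# VRAM at a time, trading speed for a much smaller memory footprint.
# TinyBlock and run_offloaded are illustrative names, not library API.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

class TinyBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        return torch.relu(self.proj(x))

def run_offloaded(blocks, x):
    """Run a sequence of blocks, uploading each to the device just
    before use and returning it to the CPU immediately afterwards."""
    x = x.to(device)
    for block in blocks:
        block.to(device)   # upload this block's weights
        x = block(x)
        block.to("cpu")    # free device memory for the next block
    return x.cpu()

blocks = [TinyBlock(8) for _ in range(3)]
out = run_offloaded(blocks, torch.randn(2, 8))
print(out.shape)
```

The repeated host-to-device copies make this noticeably slower than keeping the whole model resident, which is why it is usually an opt-in low-VRAM mode rather than the default.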

Willian7004 commented 3 weeks ago

I got a similar error even though shared video memory was still available. Other LoRA workflows only need about 7 GiB of video memory to run.