Problem:
I encountered an error while attempting to use a LoRA model within the ComfyUI framework hosted on Google Colab. The error message indicates a memory allocation issue when executing the KSampler component.
Error Message:
Error occurred when executing KSampler:
Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 14.34 GiB
Requested : 50.00 MiB
Device limit : 14.75 GiB
Free (according to CUDA): 4.81 MiB
PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB
File "/content/ComfyUI/execution.py", line 151, in recursive_execute...
...temp_weight = weight.float().to(device_to, copy=True)
Queue size: 0
Context:
I am using the ComfyUI framework on Google Colab to work with a LoRA (Low-Rank Adaptation) model. The notebook being used is sdxl_v1.0_controlnet_comfyui_colab.ipynb. The LoRA model itself is only about 17 MB.
Details:
The error occurs during the execution of the KSampler component, specifically during the allocation of memory on device 0. The current memory allocation is 14.34 GiB, while the requested memory for the operation is 50.00 MiB. The device limit is 14.75 GiB, with only 4.81 MiB of free memory according to CUDA. The PyTorch limit, set by the user-supplied memory fraction, is also noted as 17179869184.00 GiB.
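The figures quoted in the error message can be checked directly. A quick sketch (using only the values above) shows why the allocation fails even though there is nominal headroom under the device limit: the deciding number is the memory CUDA actually reports as free, not the limit minus the allocated total.

```python
# Figures quoted in the error message above.
MIB = 1024 ** 2
GIB = 1024 ** 3

allocated = 14.34 * GIB      # "Currently allocated"
requested = 50 * MIB         # "Requested"
device_limit = 14.75 * GIB   # "Device limit"
cuda_free = 4.81 * MIB       # "Free (according to CUDA)"

# Nominally there is still headroom under the device limit...
print(allocated + requested < device_limit)   # True
# ...but CUDA reports far less actually-free memory than the request,
# so the 50 MiB allocation fails. The missing ~0.36 GiB between the
# limit and PyTorch's own allocations is consumed by the CUDA context
# and other reservations outside PyTorch's pool.
print(requested > cuda_free)                  # True
```

In other words, the GPU is effectively full before the KSampler's 50 MiB request is even made; the tiny 17 MB LoRA is not the cause, the SDXL + ControlNet workflow has already consumed nearly all 14.75 GiB.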
The error traceback includes the following relevant files and lines:
execution.py, line 151: recursive_execute
execution.py, line 81: get_output_data
execution.py, line 74: map_node_over_list
nodes.py, line 1206: sample
nodes.py, line 1176: common_ksampler
sample.py, line 81: sample
model_management.py, line 368: load_models_gpu
model_management.py, line 259: model_load
model_management.py, line 255: model_load
sd.py, line 396: patch_model
Expected Outcome:
I would like assistance in resolving this memory allocation issue so that I can successfully use the LoRA model within the ComfyUI framework on Google Colab. Any guidance or suggestions on adjusting memory settings, optimizing memory usage, or identifying potential workarounds would be greatly appreciated. The Colab instance reports that the GPU has 15 GB of RAM.
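For the memory-settings guidance requested above, here is a minimal sketch of a commonly suggested mitigation. Assumptions: the notebook lets you set environment variables before launching ComfyUI, and this ComfyUI build supports the upstream --lowvram flag (verify both against the notebook's launch cell).

```shell
# Reduce CUDA allocator fragmentation before launching ComfyUI
# (assumption: the Colab notebook allows setting env vars pre-launch).
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

# Then relaunch ComfyUI with low-VRAM model management so model weights
# are offloaded to CPU more aggressively (flag name taken from upstream
# ComfyUI; confirm it exists in this notebook's version):
# python main.py --lowvram
```

If the error persists, reducing the image resolution or batch size in the KSampler workflow also lowers peak VRAM use, since SDXL at high resolutions dominates the 14.75 GiB budget.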