chflame163 / ComfyUI_CatVTON_Wrapper

CatVTON wrapper for ComfyUI

Allocation on device 0 would exceed allowed memory #12

Open kakachiex2 opened 1 month ago

kakachiex2 commented 1 month ago

I'm getting this error when running CatVTON. I have 6 GB of VRAM, but it's asking for another 192.00 MiB:

Error occurred when executing CatVTONWrapper:

Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 5.21 GiB
Requested : 192.00 MiB
Device limit : 6.00 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB

File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\execution.py", line 152, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\execution.py", line 82, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\execution.py", line 75, in map_node_over_list results.append(getattr(obj, func)(slice_dict(input_data_all, i))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\custom_nodes\ComfyUI_CatVTON_Wrapper\py\cat_vton.py", line 74, in catvton result_image = pipeline( ^^^^^^^^^ File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context return func(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\custom_nodes\ComfyUI_CatVTON_Wrapper\py\catvton\pipeline.py", line 124, in call masked_latent = compute_vae_encodings(masked_image, self.vae) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\custom_nodes\ComfyUI_CatVTON_Wrapper\py\catvton\utils1.py", line 110, in compute_vae_encodings model_input = vae.encode(pixel_values).latent_dist.sample() ^^^^^^^^^^^^^^^^^^^^^^^^ File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\diffusers\utils\accelerate_utils.py", line 46, in wrapper return method(self, args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\diffusers\models\autoencoders\autoencoder_kl.py", line 264, in encode h = self.encoder(x) ^^^^^^^^^^^^^^^ File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl return self._call_impl(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl return forward_call(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\diffusers\models\autoencoders\vae.py", line 172, in forward sample = down_block(sample) ^^^^^^^^^^^^^^^^^^ File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl return self._call_impl(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl return forward_call(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\diffusers\models\unets\unet_2d_blocks.py", line 1474, in forward hidden_states = resnet(hidden_states, temb=None) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl return self._call_impl(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl return forward_call(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\diffusers\models\resnet.py", line 328, in forward hidden_states = 
self.nonlinearity(hidden_states) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl return self._call_impl(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl return forward_call(args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch\nn\modules\activation.py", line 393, in forward return F.silu(input, inplace=self.inplace) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "K:\ComfyUI\ComfyUI_Ex\python_miniconda_env\ComfyUI\Lib\site-packages\torch\nn\functional.py", line 2075, in silu return torch._C._nn.silu(input) ^^^^^^^^^^^^^^^^^^^^^^^^
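The trace bottoms out in compute_vae_encodings, i.e. the VAE encode of the masked image: by that point the model weights already occupy almost all of the 6 GB, and the encoder activations push it over. A minimal sketch of the standard diffusers memory-saving switches for exactly this step; enable_slicing() and enable_tiling() are real AutoencoderKL methods, but whether the wrapper exposes its VAE this way is an assumption, and the checkpoint id below is a placeholder:

import torch
from diffusers import AutoencoderKL

# Placeholder checkpoint, not the wrapper's actual model path.
vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-mse",
    torch_dtype=torch.float16,  # roughly halves weight/activation memory
).to("cuda")

vae.enable_slicing()  # encode one batch item at a time
vae.enable_tiling()   # encode large images tile by tile

with torch.no_grad():
    pixel_values = torch.randn(1, 3, 512, 384, device="cuda", dtype=torch.float16)
    latents = vae.encode(pixel_values).latent_dist.sample()

Slicing and tiling trade speed for memory, which matches the "works but not fast" report further down the thread.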

chflame163 commented 1 month ago

I have not tested it on 6G VRAM devices. In theory, it requires 8G or more.
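For anyone checking whether their card clears that bar before running the node, the standard torch.cuda queries are enough (sketch):

import torch

props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 2**30:.1f} GiB total")
print(f"allocated: {torch.cuda.memory_allocated(0) / 2**30:.2f} GiB")
print(f"reserved:  {torch.cuda.memory_reserved(0) / 2**30:.2f} GiB")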

kakachiex2 commented 1 month ago

It's working. I changed the model to fp16 and it works great. It's not fast, but I'm surprised it works so well.
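For anyone wondering what "changed the model to fp16" amounts to: CatVTON runs on a Stable Diffusion inpainting backbone, so the usual diffusers pattern is to load the weights with torch_dtype=torch.float16. A sketch under that assumption; the checkpoint id is a placeholder, not necessarily what the wrapper loads:

import torch
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # placeholder checkpoint
    torch_dtype=torch.float16,               # ~half the fp32 VRAM footprint
)
pipe.to("cuda")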

kakachiex2 commented 1 month ago

Great work on this implementation 👍

ClothingAI commented 3 weeks ago

It's working. I changed the model to fp16 and it works great. It's not fast, but I'm surprised it works so well.

Where do you change the model? Which model did you use? How much VRAM do you have? For me it exceeded 24 GB!

kakachiex2 commented 3 weeks ago

I have an RTX 2060 with 6 GB of VRAM.

ClothingAI commented 3 weeks ago

Thanks a lot @kakachiex2

ClothingAI commented 3 weeks ago

How much time does it take for you? I am using a 24 GB card and it still takes a lot of time. I have a question @kakachiex2: did you find any other, better clothing-swapping tools?