comfyanonymous / ComfyUI

The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
https://www.comfy.org/
GNU General Public License v3.0

pytorch 2.0 #266

Closed za-wa-n-go closed 1 year ago

za-wa-n-go commented 1 year ago

I tried pytorch 2.0, and I am running out of memory when I use hires.fix-like upscaling. This does not happen unless I use 2.0. Is this a problem that can be solved?

The following errors repeat over and over: an out-of-memory error, then "emptying cache and trying again".

Generation does eventually succeed if I wait long enough...

comfyanonymous commented 1 year ago

What's your OS, GPU, etc...?

za-wa-n-go commented 1 year ago

Windows 11, NVIDIA GeForce RTX 3070 Ti Laptop GPU, 8 GB VRAM

comfyanonymous commented 1 year ago

Can you give me the traceback it shows when it runs out of memory?

Are you on the standalone build? If so, can you run update\update_comfyui.bat to make sure you are on the latest version?

za-wa-n-go commented 1 year ago

I am so sorry. I have solved the problem, but I don't have a clear reason. Perhaps the GPU is being used behind the scenes and there simply wasn't enough memory.

somenewaccountthen commented 1 year ago

> I am so sorry. I have solved the problem, but I don't have a clear reason. Perhaps the GPU is being used behind the scenes and there simply wasn't enough memory.

I always disable GPU acceleration in my browser. Depending on your open tabs, this can save a lot of VRAM (sometimes 400 MB, and I saw it go up to 600 MB back when I was still using it, and I rarely have many tabs open). Just something to consider.

throwaway-mezzo-mix commented 1 year ago

> I tried pytorch 2.0, and I am running out of memory when I use hires.fix-like upscaling.

Just to report: I had a similar experience (it suddenly tried to allocate 10+ GB for a two-pass upscale workflow that normally stays under 4 GB, leading to an OOM), so I've rolled back my venv for now.

Windows 10, GTX 960 4GB

I tried to update my existing venv, so that might have caused the issue.

Edit: I'm gonna try the standalone build to see if it works there.

throwaway-mezzo-mix commented 1 year ago

The same thing happened on standalone, here's the traceback.

Traceback

```none
Traceback (most recent call last):
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 177, in execute
    executed += recursive_execute(self.server, prompt, self.outputs, x, extra_data)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 56, in recursive_execute
    executed += recursive_execute(server, prompt, outputs, input_unique_id, extra_data)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 56, in recursive_execute
    executed += recursive_execute(server, prompt, outputs, input_unique_id, extra_data)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 65, in recursive_execute
    outputs[unique_id] = getattr(obj, obj.FUNCTION)(**input_data_all)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\nodes.py", line 685, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\nodes.py", line 654, in common_ksampler
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 443, in sample
    samples = uni_pc.sample_unipc(self.model_wrap, noise, latent_image, sigmas, sampling_function=sampling_function, max_denoise=max_denoise, extra_args=extra_args, noise_mask=denoise_mask)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\extra_samplers\uni_pc.py", line 880, in sample_unipc
    x = uni_pc.sample(img, timesteps=timesteps, skip_type="time_uniform", method="multistep", order=order, lower_order_final=True)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\extra_samplers\uni_pc.py", line 731, in sample
    model_prev_list = [self.model_fn(x, vec_t)]
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\extra_samplers\uni_pc.py", line 422, in model_fn
    return self.data_prediction_fn(x, t)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\extra_samplers\uni_pc.py", line 404, in data_prediction_fn
    noise = self.noise_prediction_fn(x, t)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\extra_samplers\uni_pc.py", line 398, in noise_prediction_fn
    return self.model(x, t)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\extra_samplers\uni_pc.py", line 330, in model_fn
    return noise_pred_fn(x, t_continuous)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\extra_samplers\uni_pc.py", line 298, in noise_pred_fn
    output = sampling_function(model, x, t_input, **model_kwargs)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 195, in sampling_function
    cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 172, in calc_cond_uncond_batch
    output = model_function(input_x, timestep_, cond=c).chunk(batch_chunks)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ldm\models\diffusion\ddpm.py", line 859, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ldm\models\diffusion\ddpm.py", line 1337, in forward
    out = self.diffusion_model(x, t, context=cc, control=control)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 778, in forward
    h = module(h, emb, context)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 573, in forward
    x = block(x, context=context[i])
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 508, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\util.py", line 114, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\autograd\function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\util.py", line 129, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 511, in _forward
    x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "E:\SD\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 463, in forward
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=0.0, is_causal=False)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 10.12 GiB (GPU 0; 4.00 GiB total capacity; 1.51 GiB already allocated; 1.30 GiB free; 2.00 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
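As a hedged aside on the allocator hint at the end of that error message: `PYTORCH_CUDA_ALLOC_CONF` can be set before PyTorch initializes CUDA. The `512` value below is an illustrative guess, not a recommendation from this thread:

```python
import os

# Must be set before `import torch`; PyTorch reads it at CUDA initialization.
# 512 MB is an arbitrary example value for the split size.
# On the Windows portable build, the equivalent is running
#   set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
# in the terminal before launching ComfyUI.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"
print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])  # → max_split_size_mb:512
```

Note this only mitigates fragmentation of memory PyTorch has already reserved; it cannot make a single 10.12 GiB allocation fit on a 4 GB card.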

On closer inspection, it does work using xformers instead of `--use-pytorch-cross-attention`. Is the new PyTorch 2 cross attention only for newer cards or something?

Edit: I was being a bit dumb. I reread the text on the release page and noticed that the cross attention is only in the nightly build. Oops. It didn't actually matter, because the same thing happens on nightly; here's the traceback (I'm pretty sure it's the same, though).

Traceback

```
Traceback (most recent call last):
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\execution.py", line 177, in execute
    executed += recursive_execute(self.server, prompt, self.outputs, x, extra_data)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\execution.py", line 56, in recursive_execute
    executed += recursive_execute(server, prompt, outputs, input_unique_id, extra_data)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\execution.py", line 56, in recursive_execute
    executed += recursive_execute(server, prompt, outputs, input_unique_id, extra_data)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\execution.py", line 65, in recursive_execute
    outputs[unique_id] = getattr(obj, obj.FUNCTION)(**input_data_all)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\nodes.py", line 685, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\nodes.py", line 654, in common_ksampler
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\comfy\samplers.py", line 443, in sample
    samples = uni_pc.sample_unipc(self.model_wrap, noise, latent_image, sigmas, sampling_function=sampling_function, max_denoise=max_denoise, extra_args=extra_args, noise_mask=denoise_mask)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\comfy\extra_samplers\uni_pc.py", line 880, in sample_unipc
    x = uni_pc.sample(img, timesteps=timesteps, skip_type="time_uniform", method="multistep", order=order, lower_order_final=True)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\comfy\extra_samplers\uni_pc.py", line 731, in sample
    model_prev_list = [self.model_fn(x, vec_t)]
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\comfy\extra_samplers\uni_pc.py", line 422, in model_fn
    return self.data_prediction_fn(x, t)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\comfy\extra_samplers\uni_pc.py", line 404, in data_prediction_fn
    noise = self.noise_prediction_fn(x, t)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\comfy\extra_samplers\uni_pc.py", line 398, in noise_prediction_fn
    return self.model(x, t)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\comfy\extra_samplers\uni_pc.py", line 330, in model_fn
    return noise_pred_fn(x, t_continuous)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\comfy\extra_samplers\uni_pc.py", line 298, in noise_pred_fn
    output = sampling_function(model, x, t_input, **model_kwargs)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\comfy\samplers.py", line 195, in sampling_function
    cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\comfy\samplers.py", line 172, in calc_cond_uncond_batch
    output = model_function(input_x, timestep_, cond=c).chunk(batch_chunks)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\comfy\ldm\models\diffusion\ddpm.py", line 859, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\comfy\ldm\models\diffusion\ddpm.py", line 1337, in forward
    out = self.diffusion_model(x, t, context=cc, control=control)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 778, in forward
    h = module(h, emb, context)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\comfy\ldm\modules\attention.py", line 573, in forward
    x = block(x, context=context[i])
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\comfy\ldm\modules\attention.py", line 508, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\comfy\ldm\modules\diffusionmodules\util.py", line 114, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\python_embeded\lib\site-packages\torch\autograd\function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\comfy\ldm\modules\diffusionmodules\util.py", line 129, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\comfy\ldm\modules\attention.py", line 511, in _forward
    x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "E:\SD\ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch\ComfyUI_windows_portable_nightly_pytorch\ComfyUI\comfy\ldm\modules\attention.py", line 463, in forward
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=0.0, is_causal=False)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 10.12 GiB (GPU 0; 4.00 GiB total capacity; 1.51 GiB already allocated; 1.30 GiB free; 2.00 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

This seems like a torch issue (my GPU is most likely not a priority for a nightly feature), and everything works as long as I don't use the new attention.
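A back-of-the-envelope sketch of why an upscale pass can suddenly ask for ~10 GiB: a naive (non-memory-efficient) self-attention fallback materializes a score matrix that is quadratic in the number of latent tokens, so doubling the image's width and height multiplies that matrix by 16. All values below (8 heads, fp16, cond+uncond batched together, latent = pixels/8) are illustrative assumptions, not numbers measured from this workflow:

```python
def naive_attn_matrix_bytes(width, height, heads=8, dtype_bytes=2, batch=2):
    """Rough size of the full self-attention score matrix at the
    highest-resolution UNet level (latent side = pixel side / 8).
    batch=2 models cond + uncond being evaluated together."""
    tokens = (width // 8) * (height // 8)  # latent tokens
    return batch * heads * tokens * tokens * dtype_bytes

base = naive_attn_matrix_bytes(512, 512)        # first pass
upscaled = naive_attn_matrix_bytes(1024, 1024)  # second (hires) pass

print(f"512x512:   {base / 2**30:.2f} GiB")      # → 0.50 GiB
print(f"1024x1024: {upscaled / 2**30:.2f} GiB")  # → 8.00 GiB
# Doubling both sides quadruples the tokens, so the matrix grows 16x.
print(upscaled // base)  # → 16
```

Memory-efficient kernels such as xformers avoid materializing this full matrix, which is consistent with xformers working here while the fallback attention path OOMs; the 10.12 GiB in the traceback is in the same ballpark as this estimate, though the exact figure depends on resolution and dtype.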

comfyanonymous commented 1 year ago

That's a torch issue. Their new PyTorch cross attention doesn't perform as advertised, which is why I ship the "stable" builds with xformers enabled.

za-wa-n-go commented 1 year ago

Is there any need to go to 2.0 at this time?

za-wa-n-go commented 1 year ago

I reverted it and it's comfortable again... sad.