invoke-ai / InvokeAI

Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI and serves as the foundation for multiple commercial products.
https://invoke-ai.github.io/InvokeAI/
Apache License 2.0

[bug]: “CUDA out of memory” error when inpainting #4262

Closed: palant closed this issue 6 months ago

palant commented 1 year ago

Is there an existing issue for this?

OS

Linux

GPU

cuda

VRAM

6GB

What version did you experience this issue on?

3.0.2

What happened?

My GTX 1660 graphics card can produce at least 1024x768 images in txt2img mode. Inpainting, on the other hand, only works with 512x512 images; 512x768 already produces a “CUDA out of memory” error. The first pass succeeds; the second pass (seam correction) is what causes the error. It would appear that the memory reserved for the first pass isn’t freed, and my graphics card doesn’t have enough memory for the double allocation.

This error even happens when the bounding box is 512x512 but the image itself is larger. Presumably, seam correction is applied to the entire image.

An immediate workaround would be disabling seam correction, but that doesn’t appear to be possible. Ideally, however, the double allocation itself would be fixed.
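
To illustrate what I mean by the double allocation: the kind of fix I have in mind would drop the first-pass tensors before the second pass allocates its own. This is only a sketch with made-up names, not InvokeAI’s actual code:

  import gc

  import torch

  def release_first_pass(latents: torch.Tensor) -> torch.Tensor:
      # Hypothetical illustration (not InvokeAI code): move the first-pass result
      # off the GPU and free the cached blocks before the seam-correction pass
      # allocates its own tensors, so both never coexist on a 6 GB card.
      cpu_copy = latents.detach().cpu()
      del latents
      gc.collect()
      torch.cuda.empty_cache()
      return cpu_copy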

Screenshots

No response

Additional context

Potentially relevant settings:

  Features:
    esrgan: true
    internet_available: true
    log_tokenization: false
    patchmatch: true
    ignore_missing_core_models: false
  Memory/Performance:
    always_use_cpu: false
    free_gpu_mem: true
    max_cache_size: 12.0
    max_vram_cache_size: 0
    precision: auto
    sequential_guidance: false
    xformers_enabled: true
    tiled_decode: false
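
If it matters: `tiled_decode` is off here. As far as I understand (this is an assumption on my part), that setting corresponds to diffusers’ tiled/sliced VAE decoding, which caps the peak memory of exactly the `vae.decode` call that fails in the traceback below. Roughly:

  from diffusers.models import AutoencoderKL

  # Sketch against the diffusers API, not InvokeAI's own code; the model id is
  # just an example. Tiling/slicing trades speed for a lower decode-time peak.
  vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
  vae.enable_tiling()   # decode the latents tile by tile
  vae.enable_slicing()  # decode one image of a batch at a time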

Traceback:

  File "invokeai/.venv/lib/python3.9/site-packages/invokeai/app/services/processor.py", line 90, in __process
    outputs = invocation.invoke(
  File "invokeai/.venv/lib/python3.9/site-packages/invokeai/app/invocations/generate.py", line 233, in invoke
    generator_output = next(outputs)
  File "invokeai/.venv/lib/python3.9/site-packages/invokeai/backend/generator/base.py", line 144, in generate
    results = generator.generate(
  File "invokeai/.venv/lib/python3.9/site-packages/invokeai/backend/generator/base.py", line 328, in generate
    image = make_image(x_T, seed)
  File "invokeai/.venv/lib/python3.9/site-packages/invokeai/backend/generator/inpaint.py", line 292, in make_image
    pipeline_output = pipeline.inpaint_from_embeddings(
  File "invokeai/.venv/lib/python3.9/site-packages/invokeai/backend/stable_diffusion/diffusers_pipeline.py", line 723, in inpaint_from_embeddings
    image = self.decode_latents(result_latents)
  File "invokeai/.venv/lib/python3.9/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py", line 428, in decode_latents
    image = self.vae.decode(latents, return_dict=False)[0]
  File "invokeai/.venv/lib/python3.9/site-packages/diffusers/utils/accelerate_utils.py", line 46, in wrapper
    return method(self, *args, **kwargs)
  File "invokeai/.venv/lib/python3.9/site-packages/diffusers/models/autoencoder_kl.py", line 270, in decode
    decoded = self._decode(z).sample
  File "invokeai/.venv/lib/python3.9/site-packages/diffusers/models/autoencoder_kl.py", line 257, in _decode
    dec = self.decoder(z)
  File "invokeai/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "invokeai/.venv/lib/python3.9/site-packages/diffusers/models/vae.py", line 270, in forward
    sample = up_block(sample, latent_embeds)
  File "invokeai/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "invokeai/.venv/lib/python3.9/site-packages/diffusers/models/unet_2d_blocks.py", line 2281, in forward
    hidden_states = upsampler(hidden_states)
  File "invokeai/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "invokeai/.venv/lib/python3.9/site-packages/diffusers/models/resnet.py", line 169, in forward
    hidden_states = self.conv(hidden_states)
  File "invokeai/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "invokeai/.venv/lib/python3.9/site-packages/diffusers/models/lora.py", line 102, in forward
    return F.conv2d(x, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 308.00 MiB (GPU 0; 5.79 GiB total capacity; 4.08 GiB already allocated; 300.88 MiB free; 4.22 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
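
For reference, the allocator hint from the end of the error message can be set through an environment variable before torch initializes CUDA; the 128 MiB value below is just an example, not a recommendation:

  import os

  # Must be set before the first CUDA allocation.
  os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

  import torch  # imported afterwards so the allocator picks the setting up

That said, reserved memory (4.22 GiB) is close to allocated memory (4.08 GiB) here, so fragmentation probably isn’t the main problem; I’m noting it only for completeness.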

Contact Details

No response

palant commented 1 year ago

I was too quick to blame this on seam correction. This appears to be caused by the initial image processing after all, for whatever reason. I got one more traceback; here it is, happening immediately after processing starts:

  File "invokeai/.venv/lib/python3.9/site-packages/invokeai/app/services/processor.py", line 90, in __process
    outputs = invocation.invoke(
  File "invokeai/.venv/lib/python3.9/site-packages/invokeai/app/invocations/generate.py", line 233, in invoke
    generator_output = next(outputs)
  File "invokeai/.venv/lib/python3.9/site-packages/invokeai/backend/generator/base.py", line 144, in generate
    results = generator.generate(
  File "invokeai/.venv/lib/python3.9/site-packages/invokeai/backend/generator/base.py", line 328, in generate
    image = make_image(x_T, seed)
  File "invokeai/.venv/lib/python3.9/site-packages/invokeai/backend/generator/inpaint.py", line 292, in make_image
    pipeline_output = pipeline.inpaint_from_embeddings(
  File "invokeai/.venv/lib/python3.9/site-packages/invokeai/backend/stable_diffusion/diffusers_pipeline.py", line 675, in inpaint_from_embeddings
    init_image_latents = self.non_noised_latents_from_image(init_image, device=device, dtype=latents_dtype)
  File "invokeai/.venv/lib/python3.9/site-packages/invokeai/backend/stable_diffusion/diffusers_pipeline.py", line 734, in non_noised_latents_from_image
    init_latent_dist = self.vae.encode(init_image).latent_dist
  File "invokeai/.venv/lib/python3.9/site-packages/diffusers/utils/accelerate_utils.py", line 46, in wrapper
    return method(self, *args, **kwargs)
  File "invokeai/.venv/lib/python3.9/site-packages/diffusers/models/autoencoder_kl.py", line 242, in encode
    h = self.encoder(x)
  File "invokeai/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "invokeai/.venv/lib/python3.9/site-packages/diffusers/models/vae.py", line 139, in forward
    sample = down_block(sample)
  File "invokeai/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "invokeai/.venv/lib/python3.9/site-packages/diffusers/models/unet_2d_blocks.py", line 1157, in forward
    hidden_states = resnet(hidden_states, temb=None)
  File "invokeai/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "invokeai/.venv/lib/python3.9/site-packages/diffusers/models/resnet.py", line 639, in forward
    output_tensor = (input_tensor + hidden_states) / self.output_scale_factor
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 192.00 MiB (GPU 0; 5.79 GiB total capacity; 4.30 GiB already allocated; 148.88 MiB free; 4.37 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
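
If it helps with debugging, the peak usage around the failing VAE encode could be confirmed with torch’s allocator counters (a sketch; the inpaint call itself is elided):

  import torch

  torch.cuda.reset_peak_memory_stats()
  # ... run the failing 512x768 inpaint here ...
  print(f"allocated: {torch.cuda.memory_allocated() / 2**20:.0f} MiB")
  print(f"reserved:  {torch.cuda.memory_reserved() / 2**20:.0f} MiB")
  print(f"peak:      {torch.cuda.max_memory_allocated() / 2**20:.0f} MiB")
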
psychedelicious commented 6 months ago

Sorry we never followed up on this.

We have a new inpainting method in v4.0.0. If you still have this issue on v4.0.0, please create a new issue.