Closed: palant closed this issue 6 months ago
I was too quick to blame this on seam correction. This appears to be caused by the initial image processing after all, for whatever reason. I got one more traceback; here it is, happening immediately after processing starts:
File "invokeai/.venv/lib/python3.9/site-packages/invokeai/app/services/processor.py", line 90, in __process
outputs = invocation.invoke(
File "invokeai/.venv/lib/python3.9/site-packages/invokeai/app/invocations/generate.py", line 233, in invoke
generator_output = next(outputs)
File "invokeai/.venv/lib/python3.9/site-packages/invokeai/backend/generator/base.py", line 144, in generate
results = generator.generate(
File "invokeai/.venv/lib/python3.9/site-packages/invokeai/backend/generator/base.py", line 328, in generate
image = make_image(x_T, seed)
File "invokeai/.venv/lib/python3.9/site-packages/invokeai/backend/generator/inpaint.py", line 292, in make_image
pipeline_output = pipeline.inpaint_from_embeddings(
File "invokeai/.venv/lib/python3.9/site-packages/invokeai/backend/stable_diffusion/diffusers_pipeline.py", line 675, in inpaint_from_embeddings
init_image_latents = self.non_noised_latents_from_image(init_image, device=device, dtype=latents_dtype)
File "invokeai/.venv/lib/python3.9/site-packages/invokeai/backend/stable_diffusion/diffusers_pipeline.py", line 734, in non_noised_latents_from_image
init_latent_dist = self.vae.encode(init_image).latent_dist
File "invokeai/.venv/lib/python3.9/site-packages/diffusers/utils/accelerate_utils.py", line 46, in wrapper
return method(self, *args, **kwargs)
File "invokeai/.venv/lib/python3.9/site-packages/diffusers/models/autoencoder_kl.py", line 242, in encode
h = self.encoder(x)
File "invokeai/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "invokeai/.venv/lib/python3.9/site-packages/diffusers/models/vae.py", line 139, in forward
sample = down_block(sample)
File "invokeai/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "invokeai/.venv/lib/python3.9/site-packages/diffusers/models/unet_2d_blocks.py", line 1157, in forward
hidden_states = resnet(hidden_states, temb=None)
File "invokeai/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "invokeai/.venv/lib/python3.9/site-packages/diffusers/models/resnet.py", line 639, in forward
output_tensor = (input_tensor + hidden_states) / self.output_scale_factor
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 192.00 MiB (GPU 0; 5.79 GiB total capacity; 4.30 GiB already allocated; 148.88 MiB free; 4.37 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
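Independently of any code fix, the allocator hint at the end of that error message can be tried. A minimal sketch, with the caveat that "max_split_size_mb:128" is an arbitrary illustration value (not a recommendation) and that the variable must be in place before PyTorch initializes its CUDA allocator:

    import os

    # PYTORCH_CUDA_ALLOC_CONF is read when PyTorch sets up its CUDA caching
    # allocator, so it has to be in the environment before that happens.
    # "max_split_size_mb:128" is only an example value, not a recommendation.
    os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

    import torch  # imported after the variable is set, on purpose

The same variable can of course be exported in the shell before launching InvokeAI, however you normally start it, instead of being set in Python.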
Sorry, we never followed up on this.
We have a new inpainting method in v4.0.0. If you still have this issue on v4.0.0, please create a new issue.
Is there an existing issue for this?
OS
Linux
GPU
cuda
VRAM
6GB
What version did you experience this issue on?
3.0.2
What happened?
My GTX 1660 graphics card can produce at least 1024x768 images in txt2img mode. Inpainting, on the other hand, only works with 512x512 images; 512x768 already produces a “CUDA out of memory” error. The first pass succeeds; the second pass (seam correction) is what causes the error. It would appear that the memory reserved for the first pass isn’t freed, and my graphics card doesn’t have enough memory for a double allocation.
This error even happens when the bounding box is 512x512 but the image itself is larger. Presumably, seam correction is applied to the entire image.
An immediate workaround would be disabling seam correction, yet this doesn’t appear to be possible. Ideally, however, the double allocation itself would be fixed.
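To illustrate the double-allocation hypothesis in isolation: the first pass’s tensors apparently stay resident on the GPU while the second pass allocates its own. The following is a hypothetical sketch of the pattern I mean, not InvokeAI code; first_pass, second_pass and run_two_passes are placeholder names:

    import gc
    import torch

    def run_two_passes(first_pass, second_pass, inputs):
        # Hypothetical two-pass driver: run the main inpainting pass, keep only
        # the result on the CPU, then free GPU memory before the second pass.
        result = first_pass(inputs)
        image = result.detach().to("cpu")

        del result                    # drop references to first-pass tensors
        gc.collect()                  # clear Python-side garbage first
        torch.cuda.empty_cache()      # return cached blocks to the CUDA driver

        return second_pass(image)

Whether something equivalent is feasible between the inpainting pass and seam correction in the actual pipeline is for the maintainers to judge; the sketch only shows the allocation pattern that would avoid holding both passes in VRAM at once.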
Screenshots
No response
Additional context
Potentially relevant settings:
Traceback:
Contact Details
No response