lllyasviel / Fooocus

Focus on prompting and generating
GNU General Public License v3.0

[Bug]: crashing when getting to step 12 #2421

Closed: lgwjames closed this 5 months ago

lgwjames commented 5 months ago

What happened?

As soon as I see the model get to step 12, the preview disappears and only a small broken-image icon remains. See screenshot: Screenshot 2024-03-03 at 07-48-43 Fooocus 2 2 0

Steps to reproduce the problem

I picked 1024 by 1024 and entered the prompt "jessicaL sitting on a chair in a bar" ("jessicaL" is the trigger word for my LoRA). I added a refiner as per the picture above and checked that the switch is 0.4 for the SD 1.5 model. The image starts to render with lots of noise, then after step 2 it starts to take shape; I see a preview, and then it fails at step 12. The error says RuntimeError: Invalid buffer size: 4.00 GB. I have 64 GB of RAM; watching the load, it doesn't go above 32 GB of RAM, and there is 50 GB of SSD space free.

What should have happened?

Make an image.

What browsers do you use to access Fooocus?

Mozilla Firefox

Where are you running Fooocus?

Locally

What operating system are you using?

macOS 14.2.1

Console logs

Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.15 seconds
Image generated with private log at: /Volumes/Ai works/2024-03-03/log.html
Generating and saving time: 722.04 seconds
Total time: 749.90 seconds
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ControlNet Softness = 0.25
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Seed = 8197624770802060239
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 12
[Fooocus] Initializing ...
[Fooocus] Loading models ...
[Fooocus] Processing prompts ...
[Wildcards] processing: __neg__
[Wildcards] (((ugly)))), (((duplicate))), ((morbid)), ((mutilated)), out of frame, extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), (fused fingers), (too many fingers), (((long neck))), (burry), ((burry)), cropped, deformed, dull, poor lighting, deformed iris, deformed pupils, cropped, out of frame, jpeg,artifact
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] jessicaL sitting on a chair in a bar, intricate, stunning, highly detailed, full perfect, delicate, focus, calm, refined, complex artistic color, fine detail, pretty background, inspired, vibrant colors, light, cinematic, professional, extremely nice, designed, expressive, beautiful, cute, confident, illustrious, elegant, luxury, dramatic ambient, sharp
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding negative #1 ...
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (1024, 1024)
Preparation time: 21.05 seconds
[Sampler] refiner_swap_method = vae
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 12/12 [07:29<00:00, 37.47s/it]
Fooocus VAE-based swap.
  0%|                                                                                                                                                                                                                                              | 0/18 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/Users/Quad-Core/Fooocus/modules/async_worker.py", line 896, in worker
    handler(task)
  File "/Users/Quad-Core/miniconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/miniconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/modules/async_worker.py", line 803, in handler
    imgs = pipeline.process_diffusion(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/miniconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/miniconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/modules/default_pipeline.py", line 472, in process_diffusion
    sampled_latent = core.ksampler(
                     ^^^^^^^^^^^^^^
  File "/Users/Quad-Core/miniconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/miniconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/modules/core.py", line 308, in ksampler
    samples = ldm_patched.modules.sample.sample(model,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/ldm_patched/modules/sample.py", line 100, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/ldm_patched/modules/samplers.py", line 712, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/miniconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/miniconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/modules/sample_hijack.py", line 157, in sample_hacked
    samples = sampler.sample(model_wrap, sigmas, extra_args, callback_wrap, noise, latent_image, denoise_mask, disable_pbar)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/ldm_patched/modules/samplers.py", line 557, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/miniconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/ldm_patched/k_diffusion/sampling.py", line 701, in sample_dpmpp_2m_sde_gpu
    return sample_dpmpp_2m_sde(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, eta=eta, s_noise=s_noise, noise_sampler=noise_sampler, solver_type=solver_type)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/miniconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/ldm_patched/k_diffusion/sampling.py", line 613, in sample_dpmpp_2m_sde
    denoised = model(x, sigmas[i] * s_in, **extra_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/modules/patch.py", line 321, in patched_KSamplerX0Inpaint_forward
    out = self.inner_model(x, sigma,
          ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/ldm_patched/modules/samplers.py", line 271, in forward
    return self.apply_model(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/ldm_patched/modules/samplers.py", line 268, in apply_model
    out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/modules/patch.py", line 237, in patched_sampling_function
    positive_x0, negative_x0 = calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/ldm_patched/modules/samplers.py", line 222, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/ldm_patched/modules/model_base.py", line 85, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/modules/patch.py", line 404, in patched_unet_forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/ldm_patched/ldm/modules/diffusionmodules/openaimodel.py", line 43, in forward_timestep_embed
    x = layer(x, context, transformer_options)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/ldm_patched/ldm/modules/attention.py", line 613, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/ldm_patched/ldm/modules/attention.py", line 440, in forward
    return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/ldm_patched/ldm/modules/diffusionmodules/util.py", line 189, in checkpoint
    return func(*inputs)
           ^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/ldm_patched/ldm/modules/attention.py", line 500, in _forward
    n = self.attn1(n, context=context_attn1, value=value_attn1)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/ldm_patched/ldm/modules/attention.py", line 392, in forward
    out = optimized_attention(q, k, v, self.heads)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/ldm_patched/ldm/modules/attention.py", line 168, in attention_sub_quad
    hidden_states = efficient_dot_product_attention(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Quad-Core/Fooocus/ldm_patched/ldm/modules/sub_quadratic_attention.py", line 265, in efficient_dot_product_attention
    res = torch.cat([
                    ^
  File "/Users/Quad-Core/Fooocus/ldm_patched/ldm/modules/sub_quadratic_attention.py", line 266, in <listcomp>
    compute_query_chunk_attn(
  File "/Users/Quad-Core/Fooocus/ldm_patched/ldm/modules/sub_quadratic_attention.py", line 159, in _get_attention_scores_no_kv_chunking
    attn_scores = torch.baddbmm(
                  ^^^^^^^^^^^^^^
RuntimeError: Invalid buffer size: 4.00 GB
Total time: 470.95 seconds
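
For context: the crash lands exactly at the refiner handoff. "Steps = 30 - 12" with the 0.4 switch means the base model runs 12 steps and the SD 1.5 refiner should run the remaining 18 (hence the 0/18 progress bar right after "Fooocus VAE-based swap."), and the traceback dies in the sub-quadratic attention path when torch.baddbmm tries to materialize a single 4 GB attention-score buffer, which the Metal (MPS) backend appears to reject as larger than the maximum single buffer it will allocate. The sketch below is only a back-of-the-envelope illustration of how such a buffer can reach that size when an SD 1.5 model is run at 1024x1024; the batch, head count, chunk size, and token count are assumptions for illustration, not values taken from this log.

# Rough estimate of the attention-score buffer materialized by torch.baddbmm
# in _get_attention_scores_no_kv_chunking: shape (batch * heads, query_chunk,
# key_tokens) in fp32. All concrete numbers below are illustrative assumptions.

def attention_score_bytes(batch: int, heads: int, query_chunk: int,
                          key_tokens: int, bytes_per_element: int = 4) -> int:
    """Size in bytes of one (batch * heads, query_chunk, key_tokens) buffer."""
    return batch * heads * query_chunk * key_tokens * bytes_per_element

key_tokens = 128 * 128  # assumed self-attention length for SD 1.5 at 1024x1024
size_gb = attention_score_bytes(batch=2, heads=8, query_chunk=4096,
                                key_tokens=key_tokens) / 2**30
print(f"~{size_gb:.2f} GB per attention-score buffer")  # prints ~4.00 GB

If that is what is happening, it would also explain why the SDXL base pass finishes but the SD 1.5 refiner pass fails immediately: at 1024x1024 the SD 1.5 self-attention sequence is roughly four times longer than at its native 512x512, so the score matrix grows accordingly.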

Additional information

Yes, I updated the Boot Camp drivers for the AMD card about a week ago.

mashb1t commented 5 months ago

@lgwjames sadly this is not the full log and some information is missing. Which GPU are you using and how much VRAM does it have? Which arguments did you use to start Fooocus?

May be related to https://github.com/invoke-ai/InvokeAI/issues/3168

lgwjames commented 5 months ago

8 GB RX 580 Pro. To start Fooocus I use:

cd Fooocus
python entry_with_update.py --theme dark --disable-offload-from-vram

mashb1t commented 5 months ago

This might have nothing to do with it, but can you please check without setting --disable-offload-from-vram? I sadly can't test with AMD GPUs as I don't have any test instances with AMD.

lgwjames commented 5 months ago

Testing now; I have removed the flag.

lgwjames commented 5 months ago

Now it spends a long time on "waiting for task to start".

mashb1t commented 5 months ago

Can't reproduce this issue. Were you able to fix it and can it be closed?

lgwjames commented 4 months ago

It randomly went away, but for the last 3 days, since an update, it has been slow to load the links to LoRAs etc.