lllyasviel / Fooocus

Focus on prompting and generating
GNU General Public License v3.0

[Bug]: Computer freezes before finishing the first image #2930

Status: Closed (by kalkulusrampage, 6 months ago)

kalkulusrampage commented 6 months ago

What happened?

I have been trying Fooocus for 2 days with no problems, but now, out of the blue, the computer freezes about 20-30 seconds after clicking "Generate", with these glitches:

[screenshot attachment: PXL_20240516_163609918]

Steps to reproduce the problem

  1. Click "Generate" (via prompt or image prompt)
  2. Wait 20-30 seconds

What should have happened?

Generate an image without freezing the computer.

What browsers do you use to access Fooocus?

Google Chrome, Brave

Where are you running Fooocus?

Locally

What operating system are you using?

Windows 11

Console logs

C:\Users\jaime\Desktop\Fooocus_win64_2-1-831>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.3.1
[Cleanup] Attempting to delete content of temp dir C:\Users\jaime\AppData\Local\Temp\fooocus
[Cleanup] Cleanup successful
Total VRAM 11264 MB, total RAM 32646 MB
xformers version: 0.0.20
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce GTX 1080 Ti : native
VAE dtype: torch.float32
Using xformers cross attention
Refiner unloaded.
Running on local URL:  http://127.0.0.1:7865

To create a public link, set `share=True` in `launch()`.
IMPORTANT: You are using gradio version 3.41.2, however version 4.29.0 is available, please upgrade.
--------
model_type EPS
UNet ADM Dimension 2816
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Base model loaded: C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.44 seconds
Started worker with PID 14148
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ControlNet Softness = 0.25
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Seed = 6426769993324631790
[Fooocus] Downloading upscale models ...
[Fooocus] Downloading inpainter ...
[Inpaint] Current inpaint model is C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\models\inpaint\inpaint_v26.fooocus.patch
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 60 - 48
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Synthetic Refiner Activated
Synthetic Refiner Activated
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0], ('C:\\Users\\jaime\\Desktop\\Fooocus_win64_2-1-831\\Fooocus\\models\\inpaint\\inpaint_v26.fooocus.patch', 1.0)] for model [C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Loaded LoRA [C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\models\inpaint\inpaint_v26.fooocus.patch] for UNet [C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors] with 960 keys at weight 1.0.
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Requested to load SDXLClipModel
Loading 1 new model
unload clone 1
[Fooocus Model Management] Moving model(s) has taken 0.52 seconds
[Fooocus] Processing prompts ...
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
[Fooocus] Image processing ...
[Fooocus] VAE Inpaint encoding ...
Requested to load AutoencoderKL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.39 seconds
[Fooocus] VAE encoding ...
Final resolution is (1331, 1331), latent is (1024, 1024).
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: torch.Size([1, 4, 128, 128])
Preparation time: 6.82 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 2.60 seconds
 80%|█████████████████████████████████████████████████████████████████▌                | 48/60 [02:03<00:31,  2.61s/it]Requested to load SDXL
Loading 1 new model
unload clone 0
 80%|█████████████████████████████████████████████████████████████████▌                | 48/60 [02:08<00:32,  2.68s/it]
Traceback (most recent call last):
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\modules\async_worker.py", line 913, in worker
    handler(task)
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\modules\async_worker.py", line 816, in handler
    imgs = pipeline.process_diffusion(
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\modules\default_pipeline.py", line 362, in process_diffusion
    sampled_latent = core.ksampler(
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\modules\core.py", line 308, in ksampler
    samples = ldm_patched.modules.sample.sample(model,
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\sample.py", line 100, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\samplers.py", line 712, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\modules\sample_hijack.py", line 157, in sample_hacked
    samples = sampler.sample(model_wrap, sigmas, extra_args, callback_wrap, noise, latent_image, denoise_mask, disable_pbar)
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\samplers.py", line 557, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\ldm_patched\k_diffusion\sampling.py", line 701, in sample_dpmpp_2m_sde_gpu
    return sample_dpmpp_2m_sde(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, eta=eta, s_noise=s_noise, noise_sampler=noise_sampler, solver_type=solver_type)
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\ldm_patched\k_diffusion\sampling.py", line 615, in sample_dpmpp_2m_sde
    callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\samplers.py", line 552, in <lambda>
    k_callback = lambda x: callback(x["i"], x["denoised"], x["x"], total_steps)
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\modules\sample_hijack.py", line 150, in callback_wrap
    refiner_switch()
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\modules\sample_hijack.py", line 140, in refiner_switch
    ldm_patched.modules.model_management.load_models_gpu(
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\modules\patch.py", line 447, in patched_load_models_gpu
    y = ldm_patched.modules.model_management.load_models_gpu_origin(*args, **kwargs)
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\model_management.py", line 437, in load_models_gpu
    cur_loaded_model = loaded_model.model_load(lowvram_model_memory)
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\model_management.py", line 304, in model_load
    raise e
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\model_management.py", line 300, in model_load
    self.real_model = self.model.patch_model(device_to=patch_model_to) #TODO: do something with loras and offloading to CPU
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\model_patcher.py", line 199, in patch_model
    temp_weight = ldm_patched.modules.model_management.cast_to_device(weight, device_to, torch.float32, copy=True)
  File "C:\Users\jaime\Desktop\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\model_management.py", line 615, in cast_to_device
    return tensor.to(device, copy=copy, non_blocking=non_blocking).to(dtype, non_blocking=non_blocking)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Total time: 138.36 seconds
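The traceback above ends mid-sampling with "an illegal memory access was encountered" and suggests `CUDA_LAUNCH_BLOCKING=1`. Because CUDA kernel errors surface asynchronously, the reported stack frame may not be the call that actually faulted; forcing synchronous launches makes the traceback stop at the real failing operation. A minimal sketch (the variable name comes from the error message itself; everything else is illustrative) of setting it from Python before CUDA is initialized:

```python
import os

# CUDA kernel errors are reported asynchronously, so the Python traceback can
# point at an unrelated later call. Setting this forces synchronous kernel
# launches so the traceback stops at the operation that actually faulted.
# It must be set before the first CUDA call (i.e. before torch touches the GPU).
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

print(os.environ["CUDA_LAUNCH_BLOCKING"])
```

With the embedded Windows build used here, the same effect can likely be had by running `set CUDA_LAUNCH_BLOCKING=1` in the console before launching `python_embeded\python.exe -s Fooocus\entry_with_update.py`. Note this slows generation noticeably and is only meant for producing a more accurate traceback.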

Additional information

I have tried the three solutions from the troubleshooting guide: I created a new user account and did a fresh Fooocus install, and I tried the CUDA downgrade described there. I also did a fresh install of ComfyUI to rule out Fooocus itself, and it gives exactly the same problem.

Computer: i7-11700KF, 32 GB RAM, Windows 11 Pro, NVIDIA drivers 552.44, EVGA 1080 Ti SC2

mashb1t commented 6 months ago

Hey @kalkulusrampage,

it is possible that your graphics card is slowly dying, as the illegal memory access happens on your 1080 Ti. See also https://github.com/lllyasviel/Fooocus/issues/530 (https://github.com/lllyasviel/Fooocus/issues/530#issuecomment-1777986103).

You may also try using CUDA 11 and xformers; please check out https://github.com/lllyasviel/Fooocus/issues/1412 (https://github.com/lllyasviel/Fooocus/issues/1412#issuecomment-1856743382).

As this is not an issue with Fooocus but rather with your hardware and/or an incompatibility, I'd propose closing this issue as a duplicate and won't fix / can't fix.

kalkulusrampage commented 6 months ago

I ran the 3DMark benchmark to stress the GPU, and it crashes the computer too.

Thanks for your help.