raochinmay6 closed this issue 7 months ago
Looks like you are running out of memory. Make sure your system meets the minimum requirements, and post the full console log, in text form, from start to finish.
(And make sure that you have at least 40GB of free space on each drive if you still see "RuntimeError: CPUAllocator".)
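As a quick sanity check for the free-space advice above, the standard library can report free disk space per drive. This is a minimal sketch (the path `"."` is just an example; point it at each drive Fooocus uses, e.g. `"C:\\"`), not part of Fooocus itself:

```python
import shutil

def free_gib(path="."):
    """Return free disk space at `path` in GiB."""
    return shutil.disk_usage(path).free / 2**30

# Warn if the drive is below the 40 GB threshold mentioned above.
if free_gib(".") < 40:
    print("Warning: less than 40 GiB free; CPUAllocator errors may persist.")
else:
    print(f"OK: {free_gib('.'):.1f} GiB free.")
```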
See this:
C:\Fooocus\Fooocus_win64_2-1-831>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.865
Running on local URL: http://127.0.0.1:7865
To create a public link, set share=True in launch().
Total VRAM 4096 MB, total RAM 7522 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --always-normal-vram
xformers version: 0.0.20
Set vram state to: LOW_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce GTX 1650 : native
VAE dtype: torch.float32
Using xformers cross attention
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Base model loaded: C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cpu, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Seed = 2008921839013314198
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] girl with black hair, glowing, magical, stunning, highly detailed, formal, serious, determined, lucid, pretty, attractive, beautiful, dramatic, intricate, elegant, colorful, extremely light, shining, sharp focus, epic ambient color, perfect composition, creative, cinematic, fine detail, full, amazing, very inspirational, thought, professional, cool, awesome, fabulous
[Fooocus] Preparing Fooocus text #2 ...
[Prompt Expansion] girl with black hair, sharp focus, intricate, cinematic light, clear, crisp, detailed, beautiful, confident, complex, highly color, directed, ambient, rich deep colors, dynamic background, elegant, romantic, glowing, symmetry, stunning, inspired, noble, illuminated, pretty, friendly, enhanced, loving, generous, dramatic, glorious, awarded, perfect, cool
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (896, 1152)
Preparation time: 16.22 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
ERROR diffusion_model.output_blocks.2.0.in_layers.2.weight [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 88473600 bytes.
ERROR diffusion_model.output_blocks.2.0.out_layers.3.weight [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 58982400 bytes.
ERROR diffusion_model.output_blocks.2.1.transformer_blocks.0.attn1.to_v.weight [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 6553600 bytes.
ERROR diffusion_model.output_blocks.2.1.transformer_blocks.0.attn1.to_out.0.weight [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 6553600 bytes.
Traceback (most recent call last):
File "C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\modules\async_worker.py", line 822, in worker
handler(task)
File "C:\Fooocus\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Fooocus\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\modules\async_worker.py", line 753, in handler
imgs = pipeline.process_diffusion(
File "C:\Fooocus\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Fooocus\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\modules\default_pipeline.py", line 361, in process_diffusion
sampled_latent = core.ksampler(
File "C:\Fooocus\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Fooocus\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\modules\core.py", line 313, in ksampler
samples = ldm_patched.modules.sample.sample(model,
File "C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\sample.py", line 93, in sample
real_model, positive_copy, negative_copy, noise_mask, models = prepare_sampling(model, noise.shape, positive, negative, noise_mask)
File "C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\sample.py", line 86, in prepare_sampling
ldm_patched.modules.model_management.load_models_gpu([model] + models, model.memory_required([noise_shape[0] * 2] + list(noise_shape[1:])) + inference_memory)
File "C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\modules\patch.py", line 441, in patched_load_models_gpu
y = ldm_patched.modules.model_management.load_models_gpu_origin(*args, **kwargs)
File "C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\model_management.py", line 434, in load_models_gpu
cur_loaded_model = loaded_model.model_load(lowvram_model_memory)
File "C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\model_management.py", line 301, in model_load
raise e
File "C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\model_management.py", line 297, in model_load
self.real_model = self.model.patch_model(device_to=patch_model_to) #TODO: do something with loras and offloading to CPU
File "C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\model_patcher.py", line 201, in patch_model
temp_weight = weight.to(torch.float32, copy=True)
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 52428800 bytes.
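A note on the sizes in those allocator errors: converting them from bytes to MiB shows each failed request is modest (the final one is only 50 MiB), which means system RAM was already nearly exhausted when `patch_model` tried to copy weights to float32 (`weight.to(torch.float32, copy=True)` in the traceback, which temporarily doubles a half-precision weight's footprint). A quick sketch of the arithmetic, using the byte counts from the log above:

```python
# Failed allocation sizes copied from the DefaultCPUAllocator errors above.
failed_allocs = [88473600, 58982400, 6553600, 6553600, 52428800]

for n in failed_allocs:
    # 1 MiB = 2**20 bytes
    print(f"{n} bytes = {n / 2**20:.2f} MiB")
# The final RuntimeError was for 52428800 bytes, i.e. exactly 50 MiB.
```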
Total time: 57.44 seconds
Looks like you are running out of memory. Make sure your system meets the minimum requirements.
Your system has 7522 MB of memory; the minimum requirement is 8 GB. This is why you're not able to run Fooocus on your PC. I'm sorry.
If I increase the RAM, should it work?
@raochinmay6 yes
Can you tell me about this?

C:\Fooocus\Fooocus_win64_2-1-831>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.865
Running on local URL: http://127.0.0.1:7865
To create a public link, set share=True in launch().
Total VRAM 4096 MB, total RAM 7522 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --always-normal-vram
xformers version: 0.0.20
Set vram state to: LOW_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce GTX 1650 : native
VAE dtype: torch.float32
Using xformers cross attention
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
Exception in thread Thread-2 (worker):
Traceback (most recent call last):
File "threading.py", line 1016, in _bootstrap_inner
File "threading.py", line 953, in run
File "C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\modules\async_worker.py", line 25, in worker
import modules.default_pipeline as pipeline
File "C:\Fooocus\Fooocus_win64_2-1-831\Fooocus\modules\default_pipeline.py", line 253, in <module>
@raochinmay6 Your computer does not have enough system RAM to run Fooocus, as mentioned previously. I have a 4GB NVIDIA card and it works, but I also have 32GB of system RAM. You need at least 8GB, which your log shows you do not have:
Total VRAM 4096 MB, total RAM 7522 MB
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 26214400 bytes.
This occurs when you run out of memory and Fooocus tries to allocate more in order to function correctly. The solution is to install more RAM in your computer, as you're not meeting the minimum system requirements mentioned in https://github.com/lllyasviel/Fooocus/issues/2342#issuecomment-1961791052
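If you want to confirm total RAM yourself before launching (rather than reading it off the "Total VRAM ... total RAM ..." line Fooocus prints), here is a hedged sketch. It uses `os.sysconf`, which only exists on POSIX systems; on Windows, Task Manager or `systeminfo` gives the same number, and the Fooocus log line is authoritative either way:

```python
import os

def total_ram_bytes():
    """Return total physical RAM in bytes on POSIX, or None if unavailable (e.g. Windows)."""
    try:
        return os.sysconf("SC_PHYS_PAGES") * os.sysconf("SC_PAGE_SIZE")
    except (AttributeError, ValueError, OSError):
        return None

ram = total_ram_bytes()
if ram is not None:
    gib = ram / 2**30
    print(f"Total RAM: {gib:.1f} GiB -> {'OK' if gib >= 8 else 'below the 8 GB minimum'}")
```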
OK, understood, bro, thanks for replying. But if I increase my RAM it will be solved, right? I mean, I have a GTX 1650 with 4GB VRAM and 8GB RAM, so if I increase it to 16GB total, will that be enough?
@raochinmay6 yes
^
Read Troubleshoot
[x] I confirm that I have read the Troubleshoot guide before making this issue.
Describe the problem
A clear and concise description of what the bug is.
Full Console Log
Paste the full console log here. You will make our job easier if you give a full log.