Closed — IPv6 closed this issue 5 months ago
I have the same problem. System: Windows 10, WSL Ubuntu.

```
(venv) ai@DESKTOP-MSRIK3S:~/dev/Fooocus-2.4.0-rc1$ python entry_with_update.py --listen 0.0.0.0
Update failed.
Repository not found at /home/ai/dev/Fooocus-2.4.0-rc1
Update succeeded.
[System ARGV] ['entry_with_update.py', '--listen', '0.0.0.0']
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Fooocus version: 2.4.0-rc1
[Cleanup] Attempting to delete content of temp dir /tmp/fooocus
[Cleanup] Cleanup successful
Total VRAM 8192 MB, total RAM 15916 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 3070 : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
Refiner unloaded.
Running on local URL:  http://0.0.0.0:7865

To create a public link, set `share=True` in `launch()`.
IMPORTANT: You are using gradio version 3.41.2, however version 4.29.0 is available, please upgrade.
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection'}
Base model loaded: /home/ai/dev/Fooocus-2.4.0-rc1/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors
VAE loaded: None
Request to load LoRAs [('sd_xl_offset_example-lora_1.0.safetensors', 0.1), ('None', 1.0), ('None', 1.0), ('None', 1.0), ('None', 1.0)] for model [/home/ai/dev/Fooocus-2.4.0-rc1/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [/home/ai/dev/Fooocus-2.4.0-rc1/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/home/ai/dev/Fooocus-2.4.0-rc1/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.61 seconds
Started worker with PID 1409476
App started successful. Use the app with http://localhost:7865/ or 0.0.0.0:7865
Traceback (most recent call last):
  File "/home/ai/dev/Fooocus-2.4.0-rc1/modules/async_worker.py", line 977, in worker
    handler(task)
  File "/home/ai/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/ai/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/ai/dev/Fooocus-2.4.0-rc1/modules/async_worker.py", line 297, in handler
    elif performance_selection == Performance.HYPER_SD8:
  File "/usr/lib/python3.10/enum.py", line 437, in __getattr__
    raise AttributeError(name) from None
AttributeError: HYPER_SD8. Did you mean: 'HYPER_SD'?
Total time: 0.02 seconds
```
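The root cause in the traceback is that the handler references `Performance.HYPER_SD8`, a member this build's `Performance` enum never defines, so Python's enum `__getattr__` raises `AttributeError`. A minimal sketch reproducing the failure and a defensive membership check (the enum below is a hypothetical stand-in, not Fooocus's actual `Performance` definition):

```python
from enum import Enum


# Hypothetical stand-in for Fooocus's Performance enum; assumption: this
# build defines HYPER_SD but the handler code still references HYPER_SD8.
class Performance(Enum):
    SPEED = "Speed"
    HYPER_SD = "Hyper-SD"


# Accessing an undefined member raises AttributeError, matching the traceback.
try:
    _ = Performance.HYPER_SD8
except AttributeError as exc:
    print(f"AttributeError: {exc}")

# A membership check avoids crashing the worker when the member is absent:
if "HYPER_SD8" in Performance.__members__:
    mode = Performance.HYPER_SD8
else:
    mode = Performance.HYPER_SD  # fall back to the closest defined mode

print(mode.name)  # HYPER_SD
```

This is why the error only appears at generation time: the enum lookup happens inside the worker's `handler`, not at import.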
```python
# elif performance_selection == Performance.HYPER_SD:
#     print('Enter Hyper-SD mode.')
#     progressbar(async_task, 1, 'Downloading Hyper-SD components ...')
#     loras += [(modules.config.downloading_sdxl_hyper_sd_lora(), 0.8)]
#     if refiner_model_name != 'None':
#         print(f'Refiner disabled in Hyper-SD mode.')
#         refiner_model_name = 'None'
#     sampler_name = 'dpmpp_sde_gpu'
#     scheduler_name = 'karras'
#     sharpness = 0.0
#     guidance_scale = 1.0
#     adaptive_cfg = 1.0
#     refiner_switch = 1.0
#     adm_scaler_positive = 1.0
#     adm_scaler_negative = 1.0
#     adm_scaler_end = 0.0
# elif performance_selection == Performance.HYPER_SD8:
#     print('Enter Hyper-SD8 mode.')
#     progressbar(async_task, 1, 'Downloading Hyper-SD components ...')
#     loras += [(modules.config.downloading_sdxl_hyper_sd_cfg_lora(), 0.3)]
#     sampler_name = 'dpmpp_sde_gpu'
#     scheduler_name = 'normal'
else:
    print('Enter Hyper-FF mode.')
    # progressbar(async_task, 1, 'Downloading Hyper-SD components ...')
    loras += [("Hyper-SDXL-8steps-lora.safetensors", 0.5)]
    sampler_name = 'dpmpp_3m_sde_gpu'
    scheduler_name = 'sgm_uniform'
```
A temporary workaround: manually download Hyper-SDXL-8steps-lora.safetensors, then modify the code as shown above.
The new version's 9-step output looks very good :)
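If you apply the manual workaround, it may help to verify the downloaded LoRA file actually exists before the branch appends it, since the patched code no longer calls a download helper. A minimal sketch, assuming the default Fooocus directory layout (`models/loras` and the check below are my assumptions, not Fooocus's own download logic):

```python
from pathlib import Path

# Assumption: default Fooocus layout; adjust to your install.
loras_dir = Path("models/loras")
lora_name = "Hyper-SDXL-8steps-lora.safetensors"

loras = []
if (loras_dir / lora_name).is_file():
    # Same (filename, weight) tuple shape the patched else-branch uses.
    loras += [(lora_name, 0.5)]
else:
    print(f"{lora_name} not found in {loras_dir}; download it manually first.")
```

This avoids a later load failure deep inside the worker when the file was never fetched.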
Fixed in https://github.com/lllyasviel/Fooocus/pull/2959, sorry.
God damn, I should really test more... there's still an issue in image preparation for NSFW, but other than that it's fine.
@IPv6 so sorry, moved the tag 2.4.0 to the fixed NSFW version now
@mashb1t no problem at all, happens to anyone. Thanks for the quick fix! And for your efforts in supporting this project — Fooocus is inspiring :)
Checklist
What happened?
Tried to use 2.4.0-rc1 in Colab, but got the following error:
Steps to reproduce the problem
What should have happened?
Should work
What browsers do you use to access Fooocus?
Mozilla Firefox
Where are you running Fooocus?
None
What operating system are you using?
No response
Console logs
Additional information
No response