lllyasviel / Fooocus

Focus on prompting and generating
GNU General Public License v3.0

[Bug]: 2.4.0-rc1: exception, when executing in colab #2956

Closed: IPv6 closed this issue 5 months ago

IPv6 commented 5 months ago

What happened?

Tried to use 2.4.0-rc1 in Colab but got the following error:

Traceback (most recent call last):
  File "/content/Fooocus/modules/async_worker.py", line 977, in worker
    handler(task)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/content/Fooocus/modules/async_worker.py", line 297, in handler
    elif performance_selection == Performance.HYPER_SD8:
  File "/usr/lib/python3.10/enum.py", line 437, in __getattr__
    raise AttributeError(name) from None
AttributeError: HYPER_SD8. Did you mean: 'HYPER_SD'?
Total time: 0.02 seconds
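
The root cause reads directly off the traceback: async_worker.py line 297 compares against `Performance.HYPER_SD8`, but the `Performance` enum shipped in this tag apparently only defines `HYPER_SD`, and accessing a missing member on an `Enum` class raises `AttributeError` from the metaclass. A minimal standalone sketch of the failure (the enum body here is a hypothetical stand-in, not the real Fooocus `Performance` class):

    from enum import Enum

    # Hypothetical stand-in: mirrors what the 2.4.0-rc1 tag appears to ship,
    # where HYPER_SD exists but HYPER_SD8 was never added to the enum.
    class Performance(Enum):
        SPEED = 'Speed'
        QUALITY = 'Quality'
        HYPER_SD = 'Hyper-SD'

    performance_selection = Performance.HYPER_SD

    try:
        # Same comparison as async_worker.py line 297; the Enum metaclass's
        # __getattr__ raises AttributeError for members that do not exist.
        if performance_selection == Performance.HYPER_SD8:
            pass
    except AttributeError as exc:
        print(exc)  # prints: HYPER_SD8 (the "Did you mean: 'HYPER_SD'?" hint
                    # is added by Python 3.10+ when the traceback is displayed)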

Steps to reproduce the problem

  1. !git clone --depth 1 --branch 2.4.0-rc1 https://github.com/lllyasviel/Fooocus.git
  2. run the rest of colab as usual
  3. try to vary any image with default values

What should have happened?

Image variation should complete without errors.

What browsers do you use to access Fooocus?

Mozilla Firefox

Where are you running Fooocus?

None

What operating system are you using?

No response

Console logs

Collecting pygit2==1.12.2
  Downloading pygit2-1.12.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.9 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.9/4.9 MB 14.3 MB/s eta 0:00:00
Requirement already satisfied: cffi>=1.9.1 in /usr/local/lib/python3.10/dist-packages (from pygit2==1.12.2) (1.16.0)
Requirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from cffi>=1.9.1->pygit2==1.12.2) (2.22)
Installing collected packages: pygit2
Successfully installed pygit2-1.12.2
/content
Cloning into 'Fooocus'...
remote: Enumerating objects: 639, done.
remote: Counting objects: 100% (639/639), done.
remote: Compressing objects: 100% (584/584), done.
remote: Total 639 (delta 36), reused 589 (delta 36), pack-reused 0
Receiving objects: 100% (639/639), 9.92 MiB | 23.30 MiB/s, done.
Resolving deltas: 100% (36/36), done.
Note: switching to '13599edb9b5066649c3ac31bb5a7b15403fd6297'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:

  git switch -c <new-branch-name>

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false

/content/Fooocus
The following additional packages will be installed:
  libaria2-0 libc-ares2
The following NEW packages will be installed:
  aria2 libaria2-0 libc-ares2
0 upgraded, 3 newly installed, 0 to remove and 45 not upgraded.
Need to get 1,513 kB of archives.
After this operation, 5,441 kB of additional disk space will be used.
Selecting previously unselected package libc-ares2:amd64.
(Reading database ... 121918 files and directories currently installed.)
Preparing to unpack .../libc-ares2_1.18.1-1ubuntu0.22.04.3_amd64.deb ...
Unpacking libc-ares2:amd64 (1.18.1-1ubuntu0.22.04.3) ...
Selecting previously unselected package libaria2-0:amd64.
Preparing to unpack .../libaria2-0_1.36.0-1_amd64.deb ...
Unpacking libaria2-0:amd64 (1.36.0-1) ...
Selecting previously unselected package aria2.
Preparing to unpack .../aria2_1.36.0-1_amd64.deb ...
Unpacking aria2 (1.36.0-1) ...
Setting up libc-ares2:amd64 (1.18.1-1ubuntu0.22.04.3) ...
Setting up libaria2-0:amd64 (1.36.0-1) ...
Setting up aria2 (1.36.0-1) ...
Processing triggers for man-db (2.10.2-1) ...
Processing triggers for libc-bin (2.35-0ubuntu3.4) ...
/sbin/ldconfig.real: /usr/local/lib/libtbbbind_2_5.so.3 is not a symbolic link

/sbin/ldconfig.real: /usr/local/lib/libtbbbind_2_0.so.3 is not a symbolic link

/sbin/ldconfig.real: /usr/local/lib/libtbb.so.12 is not a symbolic link

/sbin/ldconfig.real: /usr/local/lib/libtbbbind.so.3 is not a symbolic link

/sbin/ldconfig.real: /usr/local/lib/libtbbmalloc.so.2 is not a symbolic link

/sbin/ldconfig.real: /usr/local/lib/libtbbmalloc_proxy.so.2 is not a symbolic link

Update failed.
'refs/heads/HEAD'
Update succeeded.
[System ARGV] ['entry_with_update.py', '--preset', 'anime', '--share', '--always-high-vram', '--all-in-fp16']
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Fooocus version: 2.3.1
Error checking version for torchsde: No package metadata was found for torchsde
Installing requirements
Loaded preset: /content/Fooocus/presets/anime.json
[Cleanup] Attempting to delete content of temp dir /tmp/fooocus
[Cleanup] Cleanup successful
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/xlvaeapp.pth" to /content/Fooocus/models/vae_approx/xlvaeapp.pth

100% 209k/209k [00:00<00:00, 7.05MB/s]
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/vaeapp_sd15.pt" to /content/Fooocus/models/vae_approx/vaeapp_sd15.pth

100% 209k/209k [00:00<00:00, 6.38MB/s]
Downloading: "https://huggingface.co/mashb1t/misc/resolve/main/xl-to-v1_interposer-v4.0.safetensors" to /content/Fooocus/models/vae_approx/xl-to-v1_interposer-v4.0.safetensors

100% 5.40M/5.40M [00:00<00:00, 64.8MB/s]
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/fooocus_expansion.bin" to /content/Fooocus/models/prompt_expansion/fooocus_expansion/pytorch_model.bin

100% 335M/335M [00:01<00:00, 233MB/s]
Downloading: "https://huggingface.co/mashb1t/fav_models/resolve/main/fav/animaPencilXL_v310.safetensors" to /content/Fooocus/models/checkpoints/animaPencilXL_v310.safetensors

100% 6.46G/6.46G [00:36<00:00, 188MB/s]
Total VRAM 15102 MB, total RAM 12979 MB
Forcing FP16.
Set vram state to: HIGH_VRAM
Always offload VRAM
Device: cuda:0 Tesla T4 : native
VAE dtype: torch.float32
Using pytorch cross attention
2024-05-19 11:44:59.733781: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-05-19 11:44:59.733836: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-05-19 11:44:59.853031: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-05-19 11:45:02.462397: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Refiner unloaded.
IMPORTANT: You are using gradio version 3.41.2, however version 4.29.0 is available, please upgrade.
--------
Running on local URL:  http://127.0.0.1:7865
Running on public URL: https://935c0860b21f158104.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
loaded straight to GPU
Requested to load SDXL
Loading 1 new model
Base model loaded: /content/Fooocus/models/checkpoints/animaPencilXL_v310.safetensors
VAE loaded: None
Request to load LoRAs [('None', 1.0), ('None', 1.0), ('None', 1.0), ('None', 1.0), ('None', 1.0)] for model [/content/Fooocus/models/checkpoints/animaPencilXL_v310.safetensors].
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.83 seconds
Started worker with PID 691
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865 or https://935c0860b21f158104.gradio.live
Traceback (most recent call last):
  File "/content/Fooocus/modules/async_worker.py", line 977, in worker
    handler(task)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/content/Fooocus/modules/async_worker.py", line 297, in handler
    elif performance_selection == Performance.HYPER_SD8:
  File "/usr/lib/python3.10/enum.py", line 437, in __getattr__
    raise AttributeError(name) from None
AttributeError: HYPER_SD8. Did you mean: 'HYPER_SD'?
Total time: 0.02 seconds
Keyboard interruption in main thread... closing server.
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 2199, in block_thread
    time.sleep(0.1)
KeyboardInterrupt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/content/Fooocus/entry_with_update.py", line 46, in <module>
    from launch import *
  File "/content/Fooocus/launch.py", line 140, in <module>
    from webui import *
  File "/content/Fooocus/webui.py", line 747, in <module>
    shared.gradio_root.launch(
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 2115, in launch
    self.block_thread()
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 2203, in block_thread
    self.server.close()
  File "/usr/local/lib/python3.10/dist-packages/gradio/networking.py", line 49, in close
    self.thread.join()
  File "/usr/lib/python3.10/threading.py", line 1096, in join
    self._wait_for_tstate_lock()
  File "/usr/lib/python3.10/threading.py", line 1116, in _wait_for_tstate_lock
    if lock.acquire(block, timeout):
KeyboardInterrupt
Killing tunnel 127.0.0.1:7865 <> https://935c0860b21f158104.gradio.live

Additional information

No response

chouhai2018 commented 5 months ago

I have the same problem. System: Windows 10, WSL Ubuntu.

(venv) ai@DESKTOP-MSRIK3S:~/dev/Fooocus-2.4.0-rc1$ python entry_with_update.py --listen 0.0.0.0
Update failed.
Repository not found at /home/ai/dev/Fooocus-2.4.0-rc1
Update succeeded.
[System ARGV] ['entry_with_update.py', '--listen', '0.0.0.0']
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Fooocus version: 2.4.0-rc1
[Cleanup] Attempting to delete content of temp dir /tmp/fooocus
[Cleanup] Cleanup successful
Total VRAM 8192 MB, total RAM 15916 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 3070 : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
Refiner unloaded.
Running on local URL:  http://0.0.0.0:7865

To create a public link, set `share=True` in `launch()`.
IMPORTANT: You are using gradio version 3.41.2, however version 4.29.0 is available, please upgrade.

model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection'}
Base model loaded: /home/ai/dev/Fooocus-2.4.0-rc1/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors
VAE loaded: None
Request to load LoRAs [('sd_xl_offset_example-lora_1.0.safetensors', 0.1), ('None', 1.0), ('None', 1.0), ('None', 1.0), ('None', 1.0)] for model [/home/ai/dev/Fooocus-2.4.0-rc1/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [/home/ai/dev/Fooocus-2.4.0-rc1/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/home/ai/dev/Fooocus-2.4.0-rc1/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.61 seconds
Started worker with PID 1409476
App started successful. Use the app with http://localhost:7865/ or 0.0.0.0:7865
Traceback (most recent call last):
  File "/home/ai/dev/Fooocus-2.4.0-rc1/modules/async_worker.py", line 977, in worker
    handler(task)
  File "/home/ai/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/ai/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/ai/dev/Fooocus-2.4.0-rc1/modules/async_worker.py", line 297, in handler
    elif performance_selection == Performance.HYPER_SD8:
  File "/usr/lib/python3.10/enum.py", line 437, in __getattr__
    raise AttributeError(name) from None
AttributeError: HYPER_SD8. Did you mean: 'HYPER_SD'?
Total time: 0.02 seconds

chouhai2018 commented 5 months ago
    # elif performance_selection == Performance.HYPER_SD:
    #     print('Enter Hyper-SD mode.')
    #     progressbar(async_task, 1, 'Downloading Hyper-SD components ...')
    #     loras += [(modules.config.downloading_sdxl_hyper_sd_lora(), 0.8)]
    #
    #     if refiner_model_name != 'None':
    #         print(f'Refiner disabled in Hyper-SD mode.')
    #
    #     refiner_model_name = 'None'
    #     sampler_name = 'dpmpp_sde_gpu'
    #     scheduler_name = 'karras'
    #     sharpness = 0.0
    #     guidance_scale = 1.0
    #     adaptive_cfg = 1.0
    #     refiner_switch = 1.0
    #     adm_scaler_positive = 1.0
    #     adm_scaler_negative = 1.0
    #     adm_scaler_end = 0.0
    #
    # elif performance_selection == Performance.HYPER_SD8:
    #     print('Enter Hyper-SD8 mode.')
    #     progressbar(async_task, 1, 'Downloading Hyper-SD components ...')
    #     loras += [(modules.config.downloading_sdxl_hyper_sd_cfg_lora(), 0.3)]
    #
    #     sampler_name = 'dpmpp_sde_gpu'
    #     scheduler_name = 'normal'
    else:
        print('Enter Hyper-FF mode.')
        # progressbar(async_task, 1, 'Downloading Hyper-SD components ...')
        loras += [("Hyper-SDXL-8steps-lora.safetensors", 0.5)]

        sampler_name = 'dpmpp_3m_sde_gpu'
        scheduler_name = 'sgm_uniform'
Temporary workaround: manually download Hyper-SDXL-8steps-lora.safetensors, then modify the code as shown above.
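
A less invasive variant of the same idea would be to guard the comparison so a missing enum member cannot crash the worker; a minimal sketch (the getattr guard is a suggestion, not the actual upstream fix):

    # Hypothetical guard: resolve the member at runtime so the handler
    # degrades gracefully when Performance.HYPER_SD8 does not exist.
    hyper_sd8 = getattr(Performance, 'HYPER_SD8', None)
    if hyper_sd8 is not None and performance_selection == hyper_sd8:
        print('Enter Hyper-SD8 mode.')
        # ... Hyper-SD8 setup as in the commented-out branch above ...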

chouhai2018 commented 5 months ago

The new version's 9-step image generation quality is very good :)

mashb1t commented 5 months ago

Fixed in https://github.com/lllyasviel/Fooocus/pull/2959, sorry.

mashb1t commented 5 months ago

You can now pull & use tag 2.4.0-rc2
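
For the Colab workflow from the reproduction steps above, picking up the fix should amount to re-running the clone cell with the new tag, e.g. (illustrative, assuming the same notebook layout as in the report):

    !rm -rf /content/Fooocus
    !git clone --depth 1 --branch 2.4.0-rc2 https://github.com/lllyasviel/Fooocus.git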

mashb1t commented 5 months ago

God damn, I should really test more... there is still an issue in image preparation for NSFW, but other than that it's fine

mashb1t commented 5 months ago

@IPv6 so sorry, I've moved the 2.4.0 tag to the fixed NSFW version now

IPv6 commented 5 months ago

@mashb1t no problem at all, it happens to anyone. Thanks for the quick fix, and for your efforts supporting this project. Fooocus is inspiring :)