Closed: Job0115 closed this issue 1 month ago
Hello, same issue here, but I don't think it's a problem with Fooocus, rather with Google Colab.
I don't know, but an hour ago it was working very well, and after that it stopped working even though I changed the browser.
Exactly, same here. I tried changing IP and browser and clearing the cache, but nothing.
```
Collecting pygit2==1.15.1
  Downloading pygit2-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.3 kB)
Requirement already satisfied: cffi>=1.16.0 in /usr/local/lib/python3.10/dist-packages (from pygit2==1.15.1) (1.17.0)
Requirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from cffi>=1.16.0->pygit2==1.15.1) (2.22)
Downloading pygit2-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (5.1 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.1/5.1 MB 28.9 MB/s eta 0:00:00
Installing collected packages: pygit2
Successfully installed pygit2-1.15.1
/content
Cloning into 'Fooocus'...
remote: Enumerating objects: 6718, done.
remote: Counting objects: 100% (31/31), done.
remote: Compressing objects: 100% (21/21), done.
remote: Total 6718 (delta 11), reused 22 (delta 8), pack-reused 6687 (from 1)
Receiving objects: 100% (6718/6718), 33.26 MiB | 32.41 MiB/s, done.
Resolving deltas: 100% (3873/3873), done.
/content/Fooocus
Already up-to-date
Update succeeded.
[System ARGV] ['entry_with_update.py', '--share', '--always-high-vram']
Python 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0]
Fooocus version: 2.5.5
Error checking version for torchsde: No package metadata was found for torchsde
Installing requirements
[Cleanup] Attempting to delete content of temp dir /tmp/fooocus
[Cleanup] Cleanup successful
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/xlvaeapp.pth" to /content/Fooocus/models/vae_approx/xlvaeapp.pth
100% 209k/209k [00:00<00:00, 11.5MB/s]
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/vaeapp_sd15.pt" to /content/Fooocus/models/vae_approx/vaeapp_sd15.pth
100% 209k/209k [00:00<00:00, 10.9MB/s]
Downloading: "https://huggingface.co/mashb1t/misc/resolve/main/xl-to-v1_interposer-v4.0.safetensors" to /content/Fooocus/models/vae_approx/xl-to-v1_interposer-v4.0.safetensors
100% 5.40M/5.40M [00:00<00:00, 101MB/s]
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/fooocus_expansion.bin" to /content/Fooocus/models/prompt_expansion/fooocus_expansion/pytorch_model.bin
100% 335M/335M [00:01<00:00, 338MB/s]
Downloading: "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/juggernautXL_v8Rundiffusion.safetensors" to /content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors
100% 6.62G/6.62G [00:36<00:00, 194MB/s]
Downloading: "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_offset_example-lora_1.0.safetensors" to /content/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors
Running on local URL: http://127.0.0.1:7865/
Running on public URL: https://29dc763309d152ee0f.gradio.live/

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run gradio deploy
from Terminal to deploy to Spaces (https://huggingface.co/spaces)
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
loaded straight to GPU
Requested to load SDXL
Loading 1 new model
Base model loaded: /content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors
VAE loaded: None
Request to load LoRAs [('sd_xl_offset_example-lora_1.0.safetensors', 0.1)] for model [/content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [/content/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.75 seconds
2024-08-28 12:12:19.299256: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-08-28 12:12:19.580787: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-08-28 12:12:19.659669: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-08-28 12:12:20.109015: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-08-28 12:12:22.149397: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Started worker with PID 324
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865 or https://29dc763309d152ee0f.gradio.live/
```
Nothing happens when navigating to https://29dc763309d152ee0f.gradio.live/
So the problem is with Google Colab?
I don't know. Let's wait for an expert :)
The problem is on the Gradio side, not on Colab...
I think you are right. The Gradio share-link service seems to be down at the moment: https://status.gradio.app/
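For anyone who wants to confirm this kind of diagnosis themselves: a gateway error from a `*.gradio.live` URL is emitted by Gradio's share tunnel, not by the app running in Colab. Below is a minimal sketch of that reasoning as code; the helper name `classify_share_status` is made up for illustration and is not part of Gradio or Fooocus.

```python
# Hypothetical helper (not a Gradio API): interpret the HTTP status code
# that a *.gradio.live share link returns, to guess where the failure lies.
def classify_share_status(code: int) -> str:
    """Map an HTTP status code from a share link to a rough diagnosis."""
    if 200 <= code < 300:
        return "ok"  # the tunnel and the Colab-hosted app both responded
    if code in (502, 504):
        # Gateway errors come from the share tunnel itself, when it cannot
        # reach (or has lost) the upstream app -- i.e. a Gradio-side problem.
        return "gradio tunnel problem"
    if code in (404, 410):
        return "link expired or wrong URL"  # share links expire after 72 hours
    return "unknown"
```

You can obtain the status code with something like `curl -s -o /dev/null -w "%{http_code}" https://<id>.gradio.live/`; a 502/504 here, together with an outage shown on https://status.gradio.app/, points at Gradio rather than Colab.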
How long will this last?
You will have to wait until Gradio is back up.
It's working now, thanks!
Checklist
What happened?
The link didn't open; instead it returned a "504 Gateway Time-out" error. Before that it was working very well.
Steps to reproduce the problem
What should have happened?
The share link should open the Fooocus UI.
What browsers do you use to access Fooocus?
Google Chrome, Microsoft Edge
Where are you running Fooocus?
Cloud (Google Colab)
What operating system are you using?
No response
Console logs
Additional information
No response