lllyasviel / Fooocus

Focus on prompting and generating
GNU General Public License v3.0

[Bug]: Gradio link is NOT generating though the gradio share API is operational #2852

Closed · Arjun91221 closed this 7 months ago

Arjun91221 commented 7 months ago

What happened?

Gradio link is NOT generating though the gradio share API is operational

Steps to reproduce the problem

Not able to see gradio link

What should have happened?

I don't know. It happened all of a sudden.

What browsers do you use to access Fooocus?

Google Chrome

Where are you running Fooocus?

Locally

What operating system are you using?

Windows 10

Console logs

Requirement already satisfied: cffi>=1.9.1 in /usr/local/lib/python3.10/dist-packages (from pygit2==1.12.2) (1.16.0)
Requirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from cffi>=1.9.1->pygit2==1.12.2) (2.22)
Installing collected packages: pygit2
Successfully installed pygit2-1.12.2
/content
Cloning into 'Fooocus'...
remote: Enumerating objects: 8442, done.
remote: Counting objects: 100% (56/56), done.
remote: Compressing objects: 100% (40/40), done.
remote: Total 8442 (delta 26), reused 34 (delta 16), pack-reused 8386
Receiving objects: 100% (8442/8442), 51.85 MiB | 29.03 MiB/s, done.
Resolving deltas: 100% (4668/4668), done.
/content/Fooocus
Already up-to-date
Update succeeded.
[System ARGV] ['entry_with_update.py', '--share', '--always-high-vram']
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Fooocus version: 2.3.1
Error checking version for torchsde: No package metadata was found for torchsde
Installing requirements
[Cleanup] Attempting to delete content of temp dir /tmp/fooocus
[Cleanup] Cleanup successful
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/xlvaeapp.pth" to /content/Fooocus/models/vae_approx/xlvaeapp.pth

100% 209k/209k [00:00<00:00, 4.39MB/s]
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/vaeapp_sd15.pt" to /content/Fooocus/models/vae_approx/vaeapp_sd15.pth

100% 209k/209k [00:00<00:00, 4.15MB/s]
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/xl-to-v1_interposer-v3.1.safetensors" to /content/Fooocus/models/vae_approx/xl-to-v1_interposer-v3.1.safetensors

100% 6.25M/6.25M [00:00<00:00, 44.6MB/s]
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/fooocus_expansion.bin" to /content/Fooocus/models/prompt_expansion/fooocus_expansion/pytorch_model.bin

100% 335M/335M [00:02<00:00, 150MB/s]
Downloading: "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/juggernautXL_v8Rundiffusion.safetensors" to /content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors

100% 6.62G/6.62G [01:04<00:00, 110MB/s]
Downloading: "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_offset_example-lora_1.0.safetensors" to /content/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors

100% 47.3M/47.3M [00:00<00:00, 110MB/s]
Total VRAM 15102 MB, total RAM 12979 MB
Set vram state to: HIGH_VRAM
Always offload VRAM
Device: cuda:0 Tesla T4 : native
VAE dtype: torch.float32
Using pytorch cross attention
2024-05-02 10:26:21.880269: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-05-02 10:26:21.880333: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-05-02 10:26:22.017376: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-05-02 10:26:25.200193: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Refiner unloaded.
Running on local URL:  http://127.0.0.1:7865

Could not create share link. Missing file: /usr/local/lib/python3.10/dist-packages/gradio/frpc_linux_amd64_v0.2. 

Please check your internet connection. This can happen if your antivirus software blocks the download of this file. You can install manually by following these steps: 

1. Download this file: https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64
2. Rename the downloaded file to: frpc_linux_amd64_v0.2
3. Move the file to this location: /usr/local/lib/python3.10/dist-packages/gradio
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
loaded straight to GPU
Requested to load SDXL
Loading 1 new model
Base model loaded: /content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [/content/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.66 seconds
Started worker with PID 908
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
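
The log above also includes Gradio's fallback instructions for the missing frpc binary. A sketch of those three steps as a single Colab cell, using the URL and paths exactly as reported in the error message (marking the binary executable is an addition of mine and may not be required):

# Download the frpc binary directly to the name and location Gradio expects (steps 1-3 above in one command).
!wget https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64 -O /usr/local/lib/python3.10/dist-packages/gradio/frpc_linux_amd64_v0.2
# Possibly needed so Gradio can execute it (assumption, not part of the original instructions).
!chmod +x /usr/local/lib/python3.10/dist-packages/gradio/frpc_linux_amd64_v0.2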

Additional information

No response

GeorgesGraire commented 7 months ago

Encountering the exact same issue.

[Screenshot: 2024-05-02 at 12:39:50]

Arjun91221 commented 7 months ago

@GeorgesGraire Oh, I think the repository owner has changed. Could he be causing this problem with some kind of experiment in the newer versions?

GeorgesGraire commented 7 months ago

@Arjun91221 Maybe! But I hope it will be fixed soon. I can't use the localhost link; it has never worked for me, so I have always relied on the gradio.live one.

KirtiKousik commented 7 months ago

@GeorgesGraire try putting this code before your original code.

!wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb
!dpkg -i cloudflared-linux-amd64.deb
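
Installing the package alone does not expose the UI; a tunnel still has to be opened to the port Fooocus listens on. A minimal sketch of that step as a follow-up Colab cell, assuming Fooocus is running on 127.0.0.1:7865 as in the log above (the quick-tunnel flags and log parsing are my assumptions, not part of the original comment):

# Start a free TryCloudflare quick tunnel in the background and point it at the local Fooocus port.
!nohup cloudflared tunnel --url http://127.0.0.1:7865 > cloudflared.log 2>&1 &
# Give cloudflared a moment to connect, then pull the public URL out of its log.
!sleep 5; grep -o 'https://[a-z0-9-]*\.trycloudflare\.com' cloudflared.log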

GeorgesGraire commented 7 months ago

@GeorgesGraire try putting this code before your original code.

!wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb

!dpkg -i cloudflared-linux-amd64.deb

In Google Colab?

KirtiKousik commented 7 months ago

yes

KirtiKousik commented 7 months ago

This should be the complete code

!wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb
!dpkg -i cloudflared-linux-amd64.deb

!pip install pygit2==1.12.2
%cd /content
!git clone https://github.com/KirtiKousik/Fooocus.git
%cd /content/Fooocus
!python entry_with_update.py --share --always-high-vram

KirtiKousik commented 7 months ago

The Gradio share service is down, by the way. After it starts running again, wait for it to settle; then it can generate the Gradio share link. It's currently unstable, which is why the share link won't generate.

https://status.gradio.app/793595965

magicFeirl commented 7 months ago

I am currently using ngrok to work around this problem. Example notebook: https://colab.research.google.com/drive/1LttWZhwrbcaWssT6rglq4xgDm9zuRaqv?usp=sharing
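
For anyone who prefers not to open the notebook, a minimal sketch of the ngrok approach using the pyngrok package, assuming Fooocus is already listening on 127.0.0.1:7865 as in the log above and that you have an ngrok authtoken (the names below are illustrative, not taken from the linked notebook):

!pip install pyngrok

from pyngrok import ngrok

# Authenticate with your own ngrok token.
ngrok.set_auth_token("YOUR_NGROK_AUTHTOKEN")

# Open a tunnel to the local Fooocus port and print the public URL.
port = 7865
tunnel = ngrok.connect(port)
print(f" * ngrok tunnel '{tunnel.public_url}' -> 'http://127.0.0.1:{port}'")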

demigit23 commented 7 months ago

Same problem :(

GeorgesGraire commented 7 months ago

I am currently using ngrok to work around this problem. Example notebook: https://colab.research.google.com/drive/1LttWZhwrbcaWssT6rglq4xgDm9zuRaqv?usp=sharing

Thanks, I will use yours for now, but the issue is still there even though the Gradio status page shows the service as up.

[Screenshot: 2024-05-02 at 14:38:18]

demigit23 commented 7 months ago

I am currently using ngrok to work around this problem. Example notebook

Hello, do I need to make any modifications to that notebook? I tried to use it but I get this error:

File "", line 20
    print(f" ngrok tunnel \"{publicurl}\" -> \"http://127.0.0.1:{port}\")
                                                                        ^
SyntaxError: unterminated string literal (detected at line 20)

IPv6 commented 7 months ago

Hello, do I need to make any modifications to that notebook? I tried to use it but I get this error:

Use:

print(f" * ngrok tunnel '{public_url}' -> 'http://127.0.0.1:{port}'")
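
For context on why the original line fails: the trailing \" escapes what should have been the f-string's closing quote, so Python never sees the end of the string and reports it as unterminated. A small sketch contrasting the two forms (public_url and port stand in for values defined earlier in the notebook):

port = 7865
public_url = "https://example.ngrok-free.app"  # placeholder value for illustration

# Broken: the final \" escapes the quote that should close the f-string,
# producing "SyntaxError: unterminated string literal".
# print(f" ngrok tunnel \"{public_url}\" -> \"http://127.0.0.1:{port}\")

# Fixed, as suggested above: use single quotes inside the double-quoted f-string.
print(f" * ngrok tunnel '{public_url}' -> 'http://127.0.0.1:{port}'")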

KirtiKousik commented 7 months ago

It works at least one hour after the share API becomes active again; the share API needs at least one hour of stable uptime.

GodFazer commented 7 months ago

working now. check it

AshleyRomano commented 7 months ago

it stopped working again.

igninjaz commented 7 months ago

so many issues lately.

mashb1t commented 7 months ago

Yeah, there's currently nothing we can do. Luckily it's not Google banning public access to Fooocus on Colab, so there's that.

I'll add a note to the readme as soon as availability drops to 95% (last 7 days); we're still at 96.352% (last 7 days) and 99.146% (last 30 days). But according to the details at https://status.gradio.app/793595965, the Gradio Share API seems to have recovered from its infrastructure issues.

AshleyRomano commented 7 months ago

It is not working again; it was working fine in the evening.

IPv6 commented 7 months ago

It is not working again; it was working fine in the evening.

You need to wait for Gradio to wake up. The issue is not in Fooocus.