lllyasviel / Fooocus

Focus on prompting and generating
GNU General Public License v3.0

[Bug]: Colab Gradio share link is not loading #2815

Closed jrivgento closed 4 months ago

jrivgento commented 4 months ago


What happened?

I have a problem with the Colab notebook: the share link is not showing even though I didn't change anything. I still have a GPU available, and I don't know if it's because a dependency may have been updated or something.

Steps to reproduce the problem

Execute the cells and wait until it finishes loading

What should have happened?

It should have shown the Gradio share link.

What browsers do you use to access Fooocus?

Google Chrome, Microsoft Edge, Android

Where are you running Fooocus?

Cloud (Google Colab)

What operating system are you using?

No response

Console logs

Collecting pygit2==1.12.2
  Downloading pygit2-1.12.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.9 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.9/4.9 MB 40.8 MB/s eta 0:00:00
Requirement already satisfied: cffi>=1.9.1 in /usr/local/lib/python3.10/dist-packages (from pygit2==1.12.2) (1.16.0)
Requirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from cffi>=1.9.1->pygit2==1.12.2) (2.22)
Installing collected packages: pygit2
Successfully installed pygit2-1.12.2
/content
Cloning into 'Fooocus'...
remote: Enumerating objects: 5855, done.
remote: Counting objects: 100% (20/20), done.
remote: Compressing objects: 100% (19/19), done.
remote: Total 5855 (delta 2), reused 8 (delta 1), pack-reused 5835
Receiving objects: 100% (5855/5855), 32.68 MiB | 13.57 MiB/s, done.
Resolving deltas: 100% (3373/3373), done.
/content/Fooocus
Already up-to-date
Update succeeded.
[System ARGV] ['entry_with_update.py', '--share', '--always-high-vram']
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Fooocus version: 2.3.1
[Cleanup] Attempting to delete content of temp dir /tmp/fooocus
[Cleanup] Cleanup successful
Total VRAM 15102 MB, total RAM 12979 MB
Set vram state to: HIGH_VRAM
Always offload VRAM
Device: cuda:0 Tesla T4 : native
VAE dtype: torch.float32
Using pytorch cross attention
2024-04-27 01:41:00.692313: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-04-27 01:41:00.692370: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-04-27 01:41:00.699828: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-04-27 01:41:02.830817: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Refiner unloaded.
Running on local URL:  http://127.0.0.1:7865
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
loaded straight to GPU
Requested to load SDXL
Loading 1 new model
Base model loaded: /content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [/content/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.76 seconds
Started worker with PID 2010
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865 or None

AFTER TURNING OFF THE VM
/content/Fooocus
Already up-to-date
Update succeeded.
[System ARGV] ['entry_with_update.py', '--share', '--always-high-vram']
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Fooocus version: 2.3.1
[Cleanup] Attempting to delete content of temp dir /tmp/fooocus
[Cleanup] Cleanup successful
Total VRAM 15102 MB, total RAM 12979 MB
Set vram state to: HIGH_VRAM
Always offload VRAM
Device: cuda:0 Tesla T4 : native
VAE dtype: torch.float32
Using pytorch cross attention
2024-04-27 01:41:00.692313: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-04-27 01:41:00.692370: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-04-27 01:41:00.699828: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-04-27 01:41:02.830817: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Refiner unloaded.
Running on local URL:  http://127.0.0.1:7865
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
loaded straight to GPU
Requested to load SDXL
Loading 1 new model
Base model loaded: /content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [/content/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.76 seconds
Started worker with PID 2010
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865 or None
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python3.10/linecache.py", line 46, in getlines
    return updatecache(filename, module_globals)
  File "/usr/lib/python3.10/linecache.py", line 137, in updatecache
    lines = fp.readlines()
  File "/usr/lib/python3.10/codecs.py", line 319, in decode
    def decode(self, input, final=False):
KeyboardInterrupt

Original exception was:
Traceback (most recent call last):
  File "/content/Fooocus/entry_with_update.py", line 46, in <module>
    from launch import *
  File "/content/Fooocus/launch.py", line 136, in <module>
    from webui import *
  File "/content/Fooocus/webui.py", line 716, in <module>
    shared.gradio_root.launch(
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1995, in launch
    self.share_url = networking.setup_tunnel(
  File "/usr/local/lib/python3.10/dist-packages/gradio/networking.py", line 182, in setup_tunnel
    response = requests.get(GRADIO_API_SERVER)
  File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 73, in get
    return request("get", url, params=params, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 791, in urlopen
    response = self._make_request(
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 537, in _make_request
    response = conn.getresponse()
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 461, in getresponse
    httplib_response = super().getresponse()
  File "/usr/lib/python3.10/http/client.py", line 1375, in getresponse
    response.begin()
  File "/usr/lib/python3.10/http/client.py", line 318, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python3.10/http/client.py", line 279, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/usr/lib/python3.10/socket.py", line 705, in readinto
    return self._sock.recv_into(b)
  File "/usr/lib/python3.10/ssl.py", line 1303, in recv_into
    return self.read(nbytes, buffer)
  File "/usr/lib/python3.10/ssl.py", line 1159, in read
    return self._sslobj.read(len, buffer)
KeyboardInterrupt

Additional information

I'm using a modified Colab notebook, but the original code wasn't changed. I added some cells for downloading custom models and testing, but they shouldn't affect the original code since they are in separate cells. I tried with the original Colab and it produces the same error. [screenshots]

demigit23 commented 4 months ago

I have had the same problem for two hours.

toticavalcanti commented 4 months ago

I have the same problem now.

psythemexx commented 4 months ago

Same here; this seems to be a new issue.

KirtiKousik commented 4 months ago

I'm facing the same issue.

m42413148 commented 4 months ago

I'm facing the same issue here. Is it a bug or something else?

mashb1t commented 4 months ago

I can confirm the issue. After a first investigation and a restart of Fooocus on Colab this message is output:

Running on local URL:  http://127.0.0.1:7865/

Could not create share link. Missing file: /usr/local/lib/python3.10/dist-packages/gradio/frpc_linux_amd64_v0.2. 

Please check your internet connection. This can happen if your antivirus software blocks the download of this file. You can install manually by following these steps: 

1. Download this file: https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64
2. Rename the downloaded file to: frpc_linux_amd64_v0.2
3. Move the file to this location: /usr/local/lib/python3.10/dist-packages/gradio

Can you confirm that this also happens for you? I assume that Colab has restricted access to sharing for Fooocus or in general for Gradio, but this may also be a coincidence.
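The three manual steps above can be collapsed into a single download for a Colab cell. This is only a sketch: the destination path is taken from the error message above (adjust it for your Python version), and the `chmod` is my assumption, since the tunnel binary must be executable.

```shell
# Manual frpc install, following the three steps from the Gradio error message.
# DEST comes from the error text above; adjust for your Python version.
DEST=/usr/local/lib/python3.10/dist-packages/gradio/frpc_linux_amd64_v0.2
wget -q -O "$DEST" https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64
chmod +x "$DEST"   # assumption: Gradio needs the binary to be executable
```

In a Colab cell, prefix each line with `!` or put the whole block in a `%%bash` cell.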

KirtiKousik commented 4 months ago

What's next? Do we need to follow these steps?

> I can confirm the issue. After a first investigation and a restart of Fooocus on Colab this message is output: […]

Still not working.

m42413148 commented 4 months ago

> I can confirm the issue. After a first investigation and a restart of Fooocus on Colab this message is output: […]

That message is not showing in mine; it just says it could not find TensorRT. Is it related to this?

2024-04-27 04:21:17.095095: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-04-27 04:21:17.095147: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-04-27 04:21:17.204754: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-04-27 04:21:19.573184: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Refiner unloaded.
Running on local URL: http://127.0.0.1:7865/
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE

KirtiKousik commented 4 months ago

> I can confirm the issue. After a first investigation and a restart of Fooocus on Colab this message is output: […]

> That message is not showing in mine; it just says it could not find TensorRT. Is it related to this? […]

That is unrelated.

m42413148 commented 4 months ago

> I can confirm the issue. After a first investigation and a restart of Fooocus on Colab this message is output: […]

> That message is not showing in mine; it just says it could not find TensorRT. Is it related to this? […]

> That is unrelated.

I would be careful downloading any files like that; it feels too much like a hacking attempt.

Fooocus shouldn't be doing anything that activates threat protection.

[Screenshot 2024-04-27 023157]

Yes, but when I run it in Google Colab, that message now shows in mine too. Now I'm confused, lol.

poor7 commented 4 months ago

Today the Gradio Share API crashes periodically. https://status.gradio.app/793595965
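When the share link silently fails like this, it can help to check whether the share API itself answers before touching the notebook. A minimal sketch; the endpoint URL is my assumption based on the `GRADIO_API_SERVER` request visible in the traceback above, so adjust it if your Gradio version uses a different endpoint.

```python
import socket
import urllib.error
import urllib.request

def share_api_reachable(url="https://api.gradio.app/v2/tunnel-request", timeout=5):
    """Return True if the share API answers at all, False on any network error."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True  # the server answered, even if with an error status
    except (urllib.error.URLError, socket.timeout, OSError):
        return False
```

If this returns False while other sites load fine, the outage is on Gradio's side and no notebook-side change will bring the link back.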

jrivgento commented 4 months ago

Colab has been disconnecting me from the notebook today for "executing disallowed code", which happened when I used the Fooocus code. Or I think it may have been because I tried to train some LoRAs in the same notebook, and that is the disallowed code. Also, the whole folder where the frpc file should be doesn't seem to exist.

igninjaz commented 4 months ago

I have the same issue as well.

Kroy22 commented 4 months ago

same issue..

dmitryalexander commented 4 months ago

same issue..

Here's how you solve the problem.

Go to the Google Colab notebook and add these two lines at the top:

```shell
!wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb
!dpkg -i cloudflared-linux-amd64.deb
```

Comment out the launch.py line [Screenshot 2024-04-27 080319], like this...

Then it will clone the repo. Next, find the file in your Google Colab called webui.py. [Screenshot 2024-04-27 080547]

At the very bottom of the webui.py file, find the line that says `shared.gradio_root.launch(` and paste this code right BEFORE it:

```python
import subprocess
import threading
import time
import socket

def iframe_thread(port):
    # Wait until Fooocus is listening on the port, then start the tunnel.
    while True:
        time.sleep(0.5)
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        result = sock.connect_ex(('127.0.0.1', port))
        if result == 0:
            break
        sock.close()
    print("\nFooocus finished loading, trying to launch cloudflared (if it gets stuck here cloudflared is having issues)\n")
    p = subprocess.Popen(["cloudflared", "tunnel", "--url", "http://127.0.0.1:{}".format(port)],
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    for line in p.stderr:
        l = line.decode()
        if "trycloudflare.com" in l:
            print("This is the URL to access Fooocus:", l[l.find("https"):], end='')

port = 7865  # Replace with the port number used by Fooocus
threading.Thread(target=iframe_thread, daemon=True, args=(port,)).start()
```

Also change the `shared.gradio_root.launch(` call TO:

```python
shared.gradio_root.launch(
    inbrowser=args_manager.args.in_browser,
    server_name=args_manager.args.listen,
    server_port=args_manager.args.port,
    allowed_paths=[modules.config.path_outputs],
    blocked_paths=[constants.AUTH_FILENAME]
)
```

Original webui.py file: [Screenshot 2024-04-27 080723]

Modified webui.py file: [Screenshot 2024-04-27 080825]

Uncomment the entry.py line in Google Colab [Screenshot 2024-04-27 080838] and hit PLAY.

This is a WORKING HOT FIX.
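The only fragile part of the patch above is fishing the assigned URL out of cloudflared's stderr. That logic can be pulled into a small helper (a hypothetical refactor, using the same string handling as the snippet):

```python
def extract_tunnel_url(line: str):
    """Return the trycloudflare URL embedded in one line of cloudflared
    output, or None if the line does not contain one."""
    if "trycloudflare.com" in line and "https" in line:
        return line[line.find("https"):].strip()
    return None
```

This makes the URL detection easy to test separately from the subprocess plumbing.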

KirtiKousik commented 4 months ago

It's working now. There was a problem with the Gradio share API.

mashb1t commented 4 months ago

Just tested, it works again! The reason was the failing Gradio Share API, see comment https://github.com/lllyasviel/Fooocus/issues/2815#issuecomment-2080388848. Closing this issue, thank you for staying tuned.

vytaux commented 4 months ago

@dmitryalexander

> same issue..
>
> Here's how you solve the problem.

Bro, I tried this. I'm getting something similar to "set share=True in launch()", and the given trycloudflare link isn't working. 🤔

And I'm getting "App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865 or None" if running the regular version.

dmitryalexander commented 4 months ago

> Bro, I tried this. I'm getting something similar to "set share=True in launch()", and the given trycloudflare link isn't working. 🤔
>
> And I'm getting "App started successful. …" if running the regular version.

Gradio is working now.

I'm not sure why the Cloudflare thing didn't work for you, but it shouldn't matter.

You probably input the code incorrectly; Python needs to be indented in a very specific way, which is why I pasted what it looks like.

vytaux commented 4 months ago

@dmitryalexander yeah, I fixed the indentation as per your screenshots, don't worry. 😁

And I'm not sure why, but I'm still not getting the Gradio link; it used to work perfectly about 3 weeks ago. I tried reinstalling many times.

...
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/content/gdrive/MyDrive/fooocus_sd/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [/content/gdrive/MyDrive/fooocus_sd/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/content/gdrive/MyDrive/fooocus_sd/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.57 seconds
Started worker with PID 4298
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865 or None

dmitryalexander commented 4 months ago

> @dmitryalexander yeah, I fixed the indentation as per your screenshots, don't worry. 😁
>
> And I'm not sure why, but I'm still not getting the Gradio link; it used to work perfectly about 3 weeks ago. I tried reinstalling many times. […]

So Cloudflare is also not working?

add me on discord to try and get

DShvera commented 4 months ago

In fact, it's not working for me now, even though I haven't changed the code. [screenshots of the code]

vytaux commented 4 months ago

@dmitryalexander hey man, I'm not sure how to add you on Discord... Fooocus doesn't seem to have a server? I can't find your username either.

dmitryalexander commented 4 months ago

> @dmitryalexander hey man, I'm not sure how to add you on Discord... Fooocus doesn't seem to have a server? I can't find your username either.

My Discord is metacosmos.

dmitryalexander commented 4 months ago

> In fact, it's not working for me now, even though I haven't changed the code. […]

You are missing the line !dpkg -i cloudflared-linux-amd64.deb.

Also, if you didn't change the code in the webui.py file in your Google Colab, it won't work.

vytaux commented 4 months ago

dmitryalexander's fix works! Just use a VPN if the trycloudflare link doesn't load for some reason. 😅

DShvera commented 4 months ago

> In fact, it's not working for me now, even though I haven't changed the code. […]

> You are missing the line !dpkg -i cloudflared-linux-amd64.deb.
>
> Also, if you didn't change the code in the webui.py file in your Google Colab, it won't work.

Thanks! I hope I won't have to write here again 😅