lllyasviel / Fooocus

Focus on prompting and generating
GNU General Public License v3.0

Inpaint still not working in Fooocus Colab Pro #2297

Closed Goliat48 closed 9 months ago

Goliat48 commented 9 months ago

Read Troubleshoot

[x] I confirm that I have read the Troubleshoot guide before making this issue.

Describe the problem

Unfortunately, the "Inpaint" function in Fooocus still does not work in any of its three modes in Google Colab Pro (the paid version). I first noticed this three days ago.

I generate a simple image with the prompt "man in a street" using the stock Fooocus notebook hosted on Google Colab, the version that does not support saving changes.

I load the generated image into the "Inpaint" window, mask a small area, optionally add a prompt, and press "Generate"; the small spinner then spins indefinitely without generating any image.

Until a few days ago this worked perfectly and I have created hundreds of images without any problem.

I have no out of memory problem in Colab, as can be seen from the attached data.

I am new to AI and I am not a programmer, but until now I have had no problem using Fooocus either from the resident Colab version or from copies in Google Drive, with the Juggernaut model or with other models, refiners, or LoRAs.

What could be happening?

Thanks in advance for any help.

Full Console Log

(Attached screenshots: Inpaint_failure-1, Inpaint_failure-2, Inpaint_failure-3)

mashb1t commented 9 months ago

Can you please provide the full terminal command output of colab, not only a screenshot of a specific excerpt? This would be very helpful for further analysis. Thanks!

Goliat48 commented 9 months ago

> Can you please provide the full terminal command output of colab, not only a screenshot of a specific excerpt? This would be very helpful for further analysis. Thanks!

Of course. Thanks in advance

!pip install pygit2==1.12.2
%cd /content
!git clone https://github.com/lllyasviel/Fooocus.git
%cd /content/Fooocus
!python entry_with_update.py --share

Collecting pygit2==1.12.2
  Downloading pygit2-1.12.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.9 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.9/4.9 MB 16.8 MB/s eta 0:00:00
Requirement already satisfied: cffi>=1.9.1 in /usr/local/lib/python3.10/dist-packages (from pygit2==1.12.2) (1.16.0)
Requirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from cffi>=1.9.1->pygit2==1.12.2) (2.21)
Installing collected packages: pygit2
Successfully installed pygit2-1.12.2
/content
Cloning into 'Fooocus'...
remote: Enumerating objects: 5405, done.
remote: Counting objects: 100% (102/102), done.
remote: Compressing objects: 100% (72/72), done.
remote: Total 5405 (delta 48), reused 58 (delta 29), pack-reused 5303
Receiving objects: 100% (5405/5405), 32.52 MiB | 41.12 MiB/s, done.
Resolving deltas: 100% (3038/3038), done.
/content/Fooocus
Already up-to-date
Update succeeded.
[System ARGV] ['entry_with_update.py', '--share']
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Fooocus version: 2.1.865
Error checking version for torchsde: No package metadata was found for torchsde
Installing requirements
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/xlvaeapp.pth" to /content/Fooocus/models/vae_approx/xlvaeapp.pth
100% 209k/209k [00:00<00:00, 4.36MB/s]
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/vaeapp_sd15.pt" to /content/Fooocus/models/vae_approx/vaeapp_sd15.pth
100% 209k/209k [00:00<00:00, 4.09MB/s]
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/xl-to-v1_interposer-v3.1.safetensors" to /content/Fooocus/models/vae_approx/xl-to-v1_interposer-v3.1.safetensors
100% 6.25M/6.25M [00:00<00:00, 44.0MB/s]
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/fooocus_expansion.bin" to /content/Fooocus/models/prompt_expansion/fooocus_expansion/pytorch_model.bin
100% 335M/335M [00:02<00:00, 156MB/s]
Downloading: "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/juggernautXL_v8Rundiffusion.safetensors" to /content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors
100% 6.62G/6.62G [00:47<00:00, 149MB/s]
Downloading: "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_offset_example-lora_1.0.safetensors" to /content/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors
100% 47.3M/47.3M [00:00<00:00, 121MB/s]
Running on local URL: http://127.0.0.1:7865
Total VRAM 15102 MB, total RAM 52218 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 Tesla T4 : native
VAE dtype: torch.float32
Using pytorch cross attention
2024-02-19 13:16:27.374058: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-02-19 13:16:27.374106: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-02-19 13:16:27.375587: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-02-19 13:16:28.659288: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Running on public URL: https://16ba164eb191a5502c.gradio.live
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run gradio deploy from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}
Base model loaded: /content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [/content/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.61 seconds
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865 or https://16ba164eb191a5502c.gradio.live

mashb1t commented 9 months ago

@Goliat48 are you certain this is the terminal log from directly after you encountered the unresponsive frontend and/or the spinner?

Goliat48 commented 9 months ago

> @Goliat48 are you certain this is the terminal log from directly after you encountered the unresponsive frontend and/or the spinner?

Well, I've tried that many times, and the same thing happens each time. The post where I attached the 3 screenshots corresponds to a previous run. The terminal log I sent now is from a fresh run, because I had stopped the execution and ran it again later in order to send you the log. If necessary, I can run it again and resend the terminal log and all the necessary screenshots. Shall I do that?

mashb1t commented 9 months ago

@Goliat48 yes, please. One more hint: inpainting works best if you describe what you want to have, not by writing "remove somebody". Neither Stable Diffusion nor the inpainting models understand natural-language prompts.

EDIT: I can't reproduce the issue on Colab Free. (Attached screenshots: screencapture-381bd1b9f6ec497c7f-gradio-live-2024-02-19-15_32_44, screencapture-colab-research-google-github-lllyasviel-Fooocus-blob-main-fooocus-colab-ipynb-2024-02-19-15_32_27)

Goliat48 commented 9 months ago

> @Goliat48 yes, please. One more hint: inpainting works best if you describe what you want to have, not by writing "remove somebody". Neither Stable Diffusion nor the inpainting models understand natural-language prompts.
>
> EDIT: I can't reproduce the issue on Colab Free. (Attached screenshots.)

Earlier I tried very different additional prompts while inpainting several types of images, some complex and some very simple, like "red neon". None of them worked, and the Skip and Stop buttons also stopped responding. I will try again and send you the results. Thanks for your effort.

Goliat48 commented 9 months ago

It is still not working. Here is the log of the new run:

!pip install pygit2==1.12.2
%cd /content
!git clone https://github.com/lllyasviel/Fooocus.git
%cd /content/Fooocus
!python entry_with_update.py --share

Already up-to-date
Update succeeded.
[System ARGV] ['entry_with_update.py', '--share']
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Fooocus version: 2.1.865
Running on local URL: http://127.0.0.1:7865
Total VRAM 15102 MB, total RAM 52218 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 Tesla T4 : native
VAE dtype: torch.float32
Using pytorch cross attention
2024-02-19 14:47:34.836291: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-02-19 14:47:34.836343: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-02-19 14:47:34.837904: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-02-19 14:47:35.902636: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Running on public URL: https://ad7f39a8f873587224.gradio.live
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run gradio deploy from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}
Base model loaded: /content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [/content/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/content/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.59 seconds
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865 or https://ad7f39a8f873587224.gradio.live
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Seed = 4952164631165379228
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] Man in a street, holy, full perfect, detailed, cinematic, directed, burning, intense, intricate, elegant, light, highly detail, incredible quality, very inspirational, thought, epic, artistic, winning, gorgeous, symmetry, illuminated, amazing, beautiful, peaceful, cute, enhanced, vibrant, brilliant, color, coherent, creative, wonderful, pretty, focused
[Fooocus] Preparing Fooocus text #2 ...
[Prompt Expansion] Man in a street, dynamic composition, dramatic, cinematic, rich deep colors, ambient background, sharp focus, elegant, intricate, highly detailed, creative, vibrant, fine detail, open color, great light, epic, artistic, innocent, beautiful, stunning, symmetry, iconic, cool, imposing, complex, enhanced, professional, clear, awesome, brilliant, colorful, enormous
[Fooocus] Encoding positive #1 ...
[Fooocus Model Management] Moving model(s) has taken 0.11 seconds
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (896, 1152)
Preparation time: 3.05 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 2.49 seconds
100% 30/30 [00:25<00:00, 1.17it/s]
Requested to load AutoencoderKL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.30 seconds
Image generated with private log at: /content/Fooocus/outputs/2024-02-19/log.html
Generating and saving time: 31.23 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 1.73 seconds
100% 30/30 [00:27<00:00, 1.11it/s]
Requested to load AutoencoderKL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.28 seconds
Image generated with private log at: /content/Fooocus/outputs/2024-02-19/log.html
Generating and saving time: 31.82 seconds
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.75 seconds
Total time: 70.16 seconds
(1006, None)

mashb1t commented 9 months ago

@Goliat48 Error 1006 indicates a swap issue, see https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md#error-1006. We're both using Colab on T4 instances... Do you have any additional settings, or what do you assume is the difference between my setup running successfully on Colab Free and yours failing on Colab Pro?

eddyizm commented 9 months ago

@Goliat48 Your Colab screenshot shows the system RAM maxing out (hence the red), which causes it to crash. Note that you still had memory left on the CPU. I was having similar issues and was able to fix them by adding these flags (note: I am also using a turbo checkpoint, not the defaults in the app): --always-high-vram --all-in-fp16
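For reference, a minimal sketch of how those flags could be appended to the Colab launch cell used earlier in this thread; the flags are the ones suggested above, and whether they actually resolve this particular issue is untested:

```
!pip install pygit2==1.12.2
%cd /content
!git clone https://github.com/lllyasviel/Fooocus.git
%cd /content/Fooocus

# Same launch command as before, with the suggested memory/precision flags appended
!python entry_with_update.py --share --always-high-vram --all-in-fp16
```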

mashb1t commented 9 months ago

@eddyizm is it possible you've mistaken my Colab Free screenshot for the one from @Goliat48? RAM doesn't seem to be an issue, see https://github.com/lllyasviel/Fooocus/issues/2297#issue-2142172216 (9.8/51.0)

eddyizm commented 9 months ago

@mashb1t I sure did. Oops!

Goliat48 commented 9 months ago

This mysterious problem is being narrowed down.

We have verified that Fooocus Inpaint works perfectly on my son's computer in both Colab Free and Colab Pro using the Chrome browser and Windows 10 in both cases.

On my computer, paradoxically, inpaint works using Firefox but it doesn't work when I use Chrome (I use Windows 7).

I believe, therefore, that the problem is something attributable to my Chrome installation.

The strange thing is that I have not made any configuration changes lately and until a few days ago I have generated hundreds of images without any problem.

I will have to review this configuration, since apparently something has changed in my Chrome that prevents the Fooocus inpaint engine from functioning normally.

If I find out, I'll comment on it here.

I want to thank @mashb1t for all the help he is giving me, as well as the other people who have participated in this thread.

mashb1t commented 9 months ago

Alright, closing for now. Feel free to provide further feedback!

Goliat48 commented 9 months ago

The culprit has been discovered!

I have just confirmed that when I uninstalled an application called "AntiTrack" from the AVG software suite (antivirus and security), the Fooocus inpaint engine immediately started working normally in Chrome, still from Colab Pro.

This software had been installed on my PC a few days ago.

Thank you all.