lllyasviel / Fooocus

Focus on prompting and generating
GNU General Public License v3.0
41.02k stars 5.76k forks

1006 Error, no Input Image feature works #1231

Closed alex57280 closed 10 months ago

alex57280 commented 10 months ago

Describe the problem
Hello everyone, I have tried for hours to find a solution. Whenever I use any of the "Input Image" options, it never works: the browser shows a "connection errored out" message, the console prints a 1006 code, and the elapsed-time counter never stops.

I removed my VPN and antivirus, tried Chrome and Edge, installed in a different folder, and moved the image I want to use to another folder; nothing works. I have read and applied all the steps of the Read Me.

Can anyone help me? Thanks

Full Console Log

C:\Users\alexa\Documents\AI>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --preset realistic
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\entry_with_update.py', '--preset', 'realistic']
Loaded preset: C:\Users\alexa\Documents\AI\Fooocus\presets\realistic.json
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.824
Running on local URL: http://127.0.0.1:7865

To create a public link, set share=True in launch().
Total VRAM 4096 MB, total RAM 65256 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --normalvram
Set vram state to: LOW_VRAM
Disabling smart memory management
Device: cuda:0 NVIDIA GeForce RTX 3050 Ti Laptop GPU : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
Refiner unloaded.
model_type EPS
adm 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra keys {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: C:\Users\alexa\Documents\AI\Fooocus\models\checkpoints\realisticStockPhoto_v10.safetensors
Request to load LoRAs [['SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors', 0.25], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [C:\Users\alexa\Documents\AI\Fooocus\models\checkpoints\realisticStockPhoto_v10.safetensors].
Loaded LoRA [C:\Users\alexa\Documents\AI\Fooocus\models\loras\SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for UNet [C:\Users\alexa\Documents\AI\Fooocus\models\checkpoints\realisticStockPhoto_v10.safetensors] with 788 keys at weight 0.25.
Loaded LoRA [C:\Users\alexa\Documents\AI\Fooocus\models\loras\SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for CLIP [C:\Users\alexa\Documents\AI\Fooocus\models\checkpoints\realisticStockPhoto_v10.safetensors] with 264 keys at weight 0.25.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cpu, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 1.19 seconds
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
1006

Thanks in Advance

stubkan commented 10 months ago

Make sure you are adding clean JPG-format images. Using a different format can throw an error.
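For anyone who does want to re-encode an upload as a clean JPEG, here is a minimal sketch using Pillow (assumed to be installed; the file names are hypothetical, and this workaround is not confirmed by the maintainers):

```python
from PIL import Image

# Stand-in for a generated image: a small PNG with an alpha channel.
Image.new("RGBA", (64, 64), (200, 80, 80, 128)).save("input.png")

# JPEG has no alpha channel, so flatten to RGB before re-encoding.
img = Image.open("input.png").convert("RGB")
img.save("output.jpg", format="JPEG", quality=95)

print(Image.open("output.jpg").format)  # JPEG
```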

alex57280 commented 10 months ago

Thanks @stubkan, I tried that but it doesn't work. And the funny thing is that all the AI pictures you make are exported as PNG, so you have to convert them to JPG, and that doesn't work either. So drag and drop is impossible...

lllyasviel commented 10 months ago

1006 is a swap problem; see also https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md

Please do not spread misinformation and wrong guides.

alex57280 commented 10 months ago

@lllyasviel thanks, but it doesn't work. I tried with and without automatic swap management. I have way more than 40 GB free, around 200 GB, and plenty of RAM available. I tried different images: JPG, PNG, images from Fooocus and not from Fooocus; nothing works. I even tried in Google Colab and had the same problem.

digitpmedia commented 10 months ago

Good day. I am also experiencing 1006 errors and timeouts today, on Windows 10 with a GeForce GTX 1060 6GB, while experimenting with the "Image Prompt" features. I know Fooocus was working fine the day before.

I found that the problem occurs when you select features that need to download a new model that does not yet exist on your local drive, while at the same time your internet connection and/or hard drive is busy (for example with Windows Update, telemetry, or Defender running); the process can then stall. In some cases even Ctrl-C, Ctrl-D, or Ctrl-Z cannot end the process.

Once the models required by your feature selection in the web UI have been downloaded locally, everything should run smoothly.
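One way to avoid the stall described above is to fetch the model file before launching Fooocus, so the UI never has to download mid-request. A stdlib-only sketch (the URL and destination path in the commented example are taken from the log later in this comment; adapt both to the feature you plan to use, and treat the helper itself as a hypothetical convenience, not part of Fooocus):

```python
import os
import urllib.request


def predownload(url: str, dest: str) -> str:
    """Download url to dest unless it already exists; return dest."""
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    if not os.path.exists(dest):
        tmp = dest + ".partial"
        urllib.request.urlretrieve(url, tmp)  # blocks until the file is complete
        os.replace(tmp, dest)  # atomic rename: a half-finished download never looks done
    return dest


# Example (the canny ControlNet model from the log below, ~377 MB):
# predownload(
#     "https://huggingface.co/lllyasviel/misc/resolve/main/control-lora-canny-rank128.safetensors",
#     r"D:\Fooocus\Fooocus\models\controlnet\control-lora-canny-rank128.safetensors",
# )
```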

Here is my log with the 1006 error. After the internet connection and hard drive were no longer busy, it ran smoothly and fetched the required model(s). I also changed the swap file from 40 GB to 50 GB (51200 MB), using the same value for both minimum and maximum, so the OS does not suffer a performance drop when growing the swap; swap allocation always makes the system terribly slow.

Please also note that even if your web browser times out, as long as the Python process, memory, and GPU still show activity, you can expect your result image to appear in the output folder once it is done.

Just sharing my experience; I hope it can help you.

D:\Fooocus>run_anime.bat

D:\Fooocus>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --preset anime --listen
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\entry_with_update.py', '--preset', 'anime', '--listen']
Loaded preset: D:\Fooocus\Fooocus\presets\anime.json
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.855
Running on local URL: http://0.0.0.0:1234

To create a public link, set share=True in launch().
Total VRAM 6144 MB, total RAM 16338 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce GTX 1060 6GB : native
VAE dtype: torch.float32
Using pytorch cross attention
model_type EPS
UNet ADM Dimension 0
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
Refiner model loaded: D:\Fooocus\Fooocus\models\checkpoints\DreamShaper_8_pruned.safetensors
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: D:\Fooocus\Fooocus\models\checkpoints\bluePencilXL_v050.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.5], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [D:\Fooocus\Fooocus\models\checkpoints\bluePencilXL_v050.safetensors].
Loaded LoRA [D:\Fooocus\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [D:\Fooocus\Fooocus\models\checkpoints\bluePencilXL_v050.safetensors] with 788 keys at weight 0.5.
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.5], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [D:\Fooocus\Fooocus\models\checkpoints\DreamShaper_8_pruned.safetensors].
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 2.36 seconds
App started successful.
Use the app with http://localhost:1234/ or 0.0.0.0:1234
1006
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 7.0
[Parameters] Seed = 7922594446021288133
[Fooocus] Downloading control models ...
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/control-lora-canny-rank128.safetensors" to D:\Fooocus\Fooocus\models\controlnet\control-lora-canny-rank128.safetensors

100%|███████████████████████████████████████████████████████████████████████████████| 377M/377M [02:33<00:00, 2.57MB/s]
[Fooocus] Loading control models ...
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 20
[Fooocus] Initializing ...
[Fooocus] Loading models ...
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] 1girl, eye glasses, smart, cute, full detailed, amazing, elegant, intricate, highly detail, professional focus, cool, charming, attractive, enhanced, very handsome, best, dramatic, background, quiet, relaxed, loving, delicate, beautiful, coherent, lovely, marvelous, fabulous, magical, dazzling, colorful, focused, wonderful, brilliant, symmetry, flowing
[Fooocus] Preparing Fooocus text #2 ...
[Prompt Expansion] 1girl, eye glasses, smart, cute, elegant, confident, highly detailed, dramatic light, sharp focus, cool, professional, charming, expressive, beautiful, attractive, intricate background, designed, rich deep colors, ambient perfect, dynamic composition, epic, fine detail, very inspirational, stunning, inspiring, gorgeous, creative, appealing, artistic, pure, best, colorful
[Fooocus] Encoding positive #1 ...
[Fooocus Model Management] Moving model(s) has taken 0.34 seconds
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
[Fooocus] Image processing ...
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (1152, 896)
Preparation time: 194.80 seconds
[Sampler] refiner_swap_method = vae
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
loading in lowvram mode 3202.1086235046387
[Fooocus Model Management] Moving model(s) has taken 597.49 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [02:36<00:00, 7.81s/it]
Fooocus VAE-based swap.
Requested to load Interposer
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.22 seconds
Requested to load BaseModel
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 71.13 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:52<00:00, 5.25s/it]
Requested to load AutoencoderKL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 7.17 seconds
Image generated with private log at: D:\Fooocus\Fooocus\outputs\2023-12-26\log.html
Generating and saving time: 937.33 seconds
[Sampler] refiner_swap_method = vae
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model

digitpmedia commented 10 months ago

Good day. Just adding my findings after being annoyed by the unstable Image Prompt web interface.

I found that the cause of the 1006 error is the queue used by gradio's websocket. I can confirm that this 1006 error is not related to RAM or swap-file configuration.

I fixed it by changing the default configuration of the gradio socket listener. In your Fooocus installation folder, find: python_embeded/Lib/site-packages/gradio/networking.py

Scroll to line 153 and change it to look like this:

[screenshot of the modified networking.py]

My additions are lines 162 to 167.
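The screenshot with the exact change is not reproduced here, so the snippet below is only a sketch of the kind of edit being described, assuming a gradio 3.x networking.py that builds a uvicorn.Config. The ws_* keywords are real uvicorn options; the surrounding variable names, values, and line numbers are assumptions and vary by gradio version, so verify against your own copy of the file:

```python
# Inside gradio/networking.py, where the uvicorn server is configured
# (around line 153 in the gradio build discussed here):
config = uvicorn.Config(
    app=app,
    port=port,
    host=host,
    log_level="warning",
    # Additions in the spirit of the fix described above:
    ws_max_size=1024 * 1024 * 1024,  # raise the 16 MiB websocket frame cap so
                                     # large base64-encoded image uploads fit
    ws_ping_interval=None,           # don't ping/drop the connection while a
    ws_ping_timeout=None,            # slow upload or long generation is running
)
```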

Without these additional settings, uploading an image crashes the listener, because the image(s) are encoded as base64 inside the websocket packet. Somehow, with the default gradio configuration, it fails to decode the image(s) and reports only a nonsensical 1006 error.
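The size arithmetic behind that explanation can be checked with the standard library: base64 inflates a payload by a third (4 output bytes per 3 input bytes), so an image that sits under a frame-size cap on disk can exceed it once encoded. The 16 MiB cap below matches uvicorn's documented default ws_max_size; treat it as an assumption about this gradio stack, not a measured value:

```python
import base64

raw = bytes(12 * 1024 * 1024)        # stand-in for a 12 MiB image upload
encoded = base64.b64encode(raw)      # what actually travels in the websocket frame

print(len(raw))                      # 12582912
print(len(encoded))                  # 16777216: exactly 16 MiB after encoding
print(len(encoded) >= 16 * 1024 * 1024)  # True: the frame now hits a 16 MiB cap
```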

abrajamcm commented 10 months ago

Simply, WOW! It works!

You have fixed my issue and helped me move on. I was getting really annoyed and frustrated because I had tried everything related to the system swap, as the troubleshooting guide says: https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md

This fix should be part of the next build.

@digitpmedia thank you so much! You made my day!

This is the issue I reported. https://github.com/lllyasviel/Fooocus/issues/1594#issuecomment-1869340992

mashb1t commented 10 months ago

I found that the cause of the 1006 error is the queue used by gradio's websocket. I can confirm that this 1006 error is not related to RAM or swap-file configuration.

@digitpmedia Thanks, much appreciated! @lllyasviel I couldn't reproduce the issue, but if you can confirm this, we can also add a hint to the troubleshooting guide.

alex57280 commented 9 months ago

Thanks @digitpmedia, it works so well, and it's even faster!! Thanks

Jpru20 commented 9 months ago

@digitpmedia can you describe how to find python_embeded/Lib/site-packages/gradio/networking.py in the Fooocus installation folder?

Jpru20 commented 9 months ago

@mashb1t can you describe how to find python_embeded/Lib/site-packages/gradio/networking.py in the Fooocus installation folder?

mashb1t commented 9 months ago

@Jpru20 this is a closed issue, and your question is unrelated to it. Please move your request to a discussion in the Q&A section at https://github.com/lllyasviel/Fooocus/discussions.

digitpmedia commented 9 months ago

@digitpmedia can you describe how to find python_embeded/Lib/site-packages/gradio/networking.py in the Fooocus installation folder?

Good day. That path exists if you use the 7zip Fooocus release for Windows; the 7zip archive has an embedded Python inside. You won't find it at the same path if you installed Fooocus the Anaconda/Linux/Python virtualenv way; in that case the file will be under the pip packages folder.

[screenshot showing the location of networking.py]

Please always consult the official Fooocus installation steps and the troubleshooting guide. Check and re-check.

My finding is just shared experience of what I did when some of us (maybe only a few? the problem is not reproducible on other systems) hit a wall without a clear reason. It is not a bug in Fooocus anyway.

My apologies if I caused any confusion.

Jpru20 commented 9 months ago

What if I'm running it on the free server through the given link? Does this fix need to be applied on the downloadable version?
