mrhan1993 / Fooocus-API

FastAPI powered API for Fooocus
GNU General Public License v3.0

Running main.py: the log stops at "Loading 2 new models" and never continues #237

Closed · p971607 closed this 5 months ago

p971607 commented 6 months ago

I'm using Alibaba Cloud DSW. This is the output of `!python main.py`:

```
[System ARGV] ['main.py']
Python 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0]
Fooocus-API version: 0.3.30
Fooocus exists and URL is correct.
Fooocus checkout finished for 624f74a1ed78ea09467c856cef35aeee0af863f6.
Load default preset failed.
[Errno 2] No such file or directory: '/mnt/workspace/Fooocus-API/presets/default.json'
[Fooocus-API] Task queue size: 100, queue history size: 0, webhook url: None
Preload pipeline
Total VRAM 22732 MB, total RAM 60450 MB
INFO: Started server process [1475]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8888 (Press CTRL+C to quit)
xformers version: 0.0.16rc425
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA A10 :
VAE dtype: torch.float32
Using xformers cross attention
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Base model loaded: /mnt/workspace/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/mnt/workspace/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors].
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
```

The log just stays at "Loading 2 new models" and never goes any further.

For comparison, here is the output when I launch the Fooocus project on its own (including an image generation), using `!python entry_with_update.py --share`:

```
Already up-to-date
Update succeeded.
[System ARGV] ['entry_with_update.py', '--share']
Python 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0]
Fooocus version: 2.2.1
Total VRAM 22732 MB, total RAM 60450 MB
xformers version: 0.0.16rc425
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA A10 :
VAE dtype: torch.float32
Using xformers cross attention
Refiner unloaded.
Running on local URL: http://127.0.0.1:7865
model_type EPS
UNet ADM Dimension 2816
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: /mnt/workspace/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/mnt/workspace/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [/mnt/workspace/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/mnt/workspace/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.72 seconds
Started worker with PID 1259
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865 or None
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ControlNet Softness = 0.25
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Seed = 623780605310742431
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] 1girl, intricate, highly detailed, wonderful quality, light, glowing, sharp focus, pleasing color, symmetry, thought, fancy, fine elite, elegant, luxury, dramatic background, professional, artistic, beautiful, enchanted, cute, iconic, deep aesthetic, cool, epic, best, contemporary, elaborate, lucid, complex, quiet, creative, brilliant, lovely, marvelous
[Fooocus] Preparing Fooocus text #2 ...
[Prompt Expansion] 1girl, intricate, highly detailed, excellent composition, cinematic dramatic atmosphere, dynamic light, winning fine detail, handsome, elegant, novel, fancy, stylish, amazing, epic, stunning, gorgeous, color, illuminated, pretty, attractive, smart, luxury, elite, colorful background, spread, artistic, sharp focus, professional, best, fair, creative, fabulous, breathtaking
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (896, 1152)
Preparation time: 1.88 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 1.49 seconds
100%|███████████████████████████████████████████| 30/30 [00:10<00:00, 2.85it/s]
Requested to load AutoencoderKL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.23 seconds
Image generated with private log at: /mnt/workspace/Fooocus/outputs/2024-03-07/log.html
Generating and saving time: 13.14 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.97 seconds
```

My Fooocus is deployed separately, in a directory next to Fooocus-API, and config.txt is placed as described in the README. Where could the problem above be coming from?

Also, one thing here does not match current Fooocus, so I changed it myself. In the `download_models` function in your main.py, I changed
`from modules.config import (path_checkpoints as modelfile_path, path_loras as lorafile_path,`

to
`from modules.config import (paths_checkpoints as modelfile_path, paths_loras as lorafile_path,`

because in the current Fooocus these two names have an extra "s" after "path". I don't know how older versions named them; I have only just started using this.
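
In case it helps anyone hitting the same mismatch, here is a minimal compatibility sketch (my own workaround idea, not code from this repo): try the newer pluralised names first and fall back to the older singular ones, so the import works against either Fooocus revision.

```python
# Hypothetical fallback for the renamed config attributes; assumes modules.config
# comes from the Fooocus checkout that Fooocus-API points at.
try:
    from modules.config import (
        paths_checkpoints as modelfile_path,
        paths_loras as lorafile_path,
    )
except ImportError:
    # Older Fooocus revisions used the singular names.
    from modules.config import (
        path_checkpoints as modelfile_path,
        path_loras as lorafile_path,
    )
```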

I hope you can take a look at the problem above. Thanks!

p971607 commented 6 months ago

Sorry, I was misled by the word "loading". After comparing the two outputs I called the ping endpoint and it returned "pong", so the service must have started successfully. Thanks! One more thing: reading your code is a real pleasure, it's very well written. Wonderful! I'm happy!
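
(For reference, this is roughly how I checked it. The route path and port are just what I used here, taken from the startup log above, so adjust them if your setup differs.)

```python
# Quick health check against the address shown in the Uvicorn startup log.
import requests

resp = requests.get("http://127.0.0.1:8888/ping", timeout=5)
print(resp.status_code, resp.text)  # expect 200 and "pong" once startup is done
```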

mrhan1993 commented 6 months ago

Is it working now?

p971607 commented 6 months ago

> Is it working now?

Yes, it's working now. But I see this version has commits that diverge from Fooocus; can those be merged back into Fooocus?

Some unrelated chatter: 1. I spent more than a day fiddling with Alibaba Cloud DSW before realising it cannot be reached from the public internet, so the API cannot be called; requests get redirected to the Alibaba Cloud login page. Using ECS would be yet more hassle, so I am reluctantly planning to buy a desktop PC. Sigh, my current machine is just too old.

2. I hope Fooocus will support layer diffusion: https://github.com/layerdiffusion/LayerDiffuse. I opened an issue over there.

3. If sdxl3 gets open-sourced, I won't know which one to use. I am really looking forward to image generation that understands prompts precisely, and the sdxl3 demos suggest it is very strong.