mrhan1993 / Fooocus-API

FastAPI powered API for Fooocus
GNU General Public License v3.0

process killed without error message #236

Closed iomarcovalente closed 3 months ago

iomarcovalente commented 4 months ago

I run Fooocus normally from the GUI and have never had issues with it. However, when trying to get Fooocus-API working, after following the instructions I get to the point where I send a text-to-image request, and after a long wait I see the following:

INFO:     127.0.0.1:57366 - "POST /v1/generation/text-to-image HTTP/1.1" 200 OK
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
loaded straight to GPU
Requested to load SDXL
Loading 1 new model
Base model loaded: /mnt/d/sandbox/Fooocus_win64_2-1-831/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/mnt/d/sandbox/Fooocus_win64_2-1-831/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors].
Loaded LoRA [/mnt/d/sandbox/Fooocus_win64_2-1-831/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/mnt/d/sandbox/Fooocus_win64_2-1-831/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[1]    74475 killed     python main.py --log-level debug

As you can see, even with `--log-level debug` I don't get any meaningful error output. Has the process gone OOM? What could be causing this?
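For context, the `[1] 74475 killed` line is the shell reporting that the process received SIGKILL, which on Linux is most often the kernel OOM killer (checking `dmesg` or `journalctl -k` for "Out of memory" entries after a crash can confirm this). One way to see how close the machine is to that limit while the models load is to watch available memory; a minimal stdlib-only sketch, assuming Linux (`/proc/meminfo`):

```python
import os

def available_memory_gib():
    """Estimate currently available physical memory in GiB (Linux).

    Prefers MemAvailable from /proc/meminfo, which accounts for
    reclaimable caches; falls back to sysconf free-page counts.
    """
    try:
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemAvailable:"):
                    # Value is reported in kB.
                    return int(line.split()[1]) / (1024 * 1024)
    except OSError:
        pass
    pages = os.sysconf("SC_AVPHYS_PAGES")
    page_size = os.sysconf("SC_PAGE_SIZE")
    return pages * page_size / 1024**3

print(f"Available RAM: {available_memory_gib():.1f} GiB")
```

Running this in another terminal while the API loads the SDXL checkpoint and the expansion models would show whether available memory collapses toward zero just before the kill.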

mrhan1993 commented 4 months ago

You can try increasing virtual memory (swap) and trying again.
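On Linux, current swap can be inspected and extended roughly as follows; the 16 GiB size and `/swapfile` path are only examples, not values from this issue:

```shell
# Show current RAM and swap in MiB; a zero or tiny "Swap:" row
# means the OOM killer has no headroom once RAM fills up.
free -m

# If swap is missing or too small, a swap file can be added
# (requires root; size and path are illustrative):
#   sudo fallocate -l 16G /swapfile
#   sudo chmod 600 /swapfile
#   sudo mkswap /swapfile
#   sudo swapon /swapfile
```

Note that under WSL (the `/mnt/d/...` paths above suggest it), swap is instead configured via the `memory` and `swap` settings in `.wslconfig` on the Windows side.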