mashb1t / Fooocus

Focus even better on prompting and generating
GNU General Public License v3.0

[Bug]: preset selection terminated on run #4

Closed: ghost closed this issue 10 months ago

ghost commented 10 months ago

Describe the problem

When changing the preset at runtime, the application starts downloading the corresponding model. However, even after the download completes, it does not actually switch to the newly downloaded model; you have to switch back to the initial preset and then to the desired preset again for the change to take effect. In addition, image generation then terminates with ^C, whereas everything works when the preset is passed as a launch argument.
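For reference, the launch-argument path that works corresponds to selecting the preset at startup instead of at runtime. Assuming the standard Fooocus entry point and the preset flag documented in the upstream README (paths may differ on Colab), that looks like:

python entry_with_update.py --preset anime

It is only the runtime preset switch that misbehaves, as the log below shows.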

Full console log output

Refiner unloaded.
model_type: EPS
UNet ADM Dimension: 2816
Loaded preset: /content/Fooocus/presets/anime.json
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra: {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Base model loaded: /content/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/content/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors].
Loaded LoRA [/content/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/content/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus  V1 Expansion: Vocab with 642 words.
Fooocus  Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.67 seconds
App started successful. Use the app with http://127.0.0.1:7860/ or 127.0.0.1:7860 or https://ff893b401dffdffdfd.gradio.live
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 7.0
[Parameters] Seed = 1417267429763820378
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 20
[Fooocus] Initializing ...
[Fooocus] Loading models ...
model_type: EPS
UNet ADM Dimension: 0
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra: {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Refiner model loaded: /content/Fooocus/models/checkpoints/DreamShaper_8_pruned.safetensors
model_type: EPS
UNet ADM Dimension: 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra: {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Base model loaded: /content/Fooocus/models/checkpoints/BluePencilXL_v050.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.5], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/content/Fooocus/models/checkpoints/BluePencilXL_v050.safetensors].
Loaded LoRA [/content/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/content/Fooocus/models/checkpoints/BluePencilXL_v050.safetensors] with 788 keys at weight 0.5.
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.5], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/content/Fooocus/models/checkpoints/DreamShaper_8_pruned.safetensors].
Requested to load SDXLClipModel
Loading 1 new model
^C

Version

Fooocus 2.1.859

Where are you running Fooocus?

Cloud (other)

Operating System

No response

What browsers are you seeing the problem on?

Chrome

mashb1t commented 10 months ago

This should now work again; it has been fixed. Please test and provide your feedback. Thanks!

EDIT: The reason was an incompatibility where I forgot to copy over a line from Fooocus main.
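For anyone debugging a similar regression in a fork: the reported symptom (the download finishes, but the old model stays active until you toggle presets back and forth) is consistent with a model-reload guard whose cached state is updated before the new checkpoint is actually loaded. The following minimal Python sketch is purely illustrative; the class and method names are hypothetical and not taken from the Fooocus codebase:

import os

class ModelLoader:
    # Hypothetical reload guard, loosely modeled on the reported symptom.
    def __init__(self):
        self.current_filename = None

    def refresh_base_model(self, filename):
        # Skip the expensive reload if this model is already active.
        if self.current_filename == filename:
            return
        # Bug pattern: recording the switch BEFORE the download/load has
        # succeeded. If the load is interrupted, the old weights stay in
        # memory while the guard believes the new model is active, so the
        # next refresh with the same filename silently no-ops.
        self.current_filename = filename
        if not os.path.exists(filename):
            self.download(filename)
        self.load_model(filename)

    def download(self, filename):
        raise RuntimeError("simulated interrupted download")

    def load_model(self, filename):
        print(f"loading {filename} ...")

loader = ModelLoader()
try:
    loader.refresh_base_model("models/checkpoints/BluePencilXL_v050.safetensors")
except RuntimeError:
    pass  # download was interrupted
# Second attempt does nothing: the stale cache says the model is loaded.
loader.refresh_base_model("models/checkpoints/BluePencilXL_v050.safetensors")
# Switching to another preset and back resets the cache, which is why the
# two-step workaround in the report appears to "fix" it.

Again, this only illustrates the general class of bug; the actual missing line mentioned above lives in the Fooocus codebase.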