Closed: madsteed closed this issue 6 months ago
Did you follow the installation instructions for Fooocus on macOS (still beta)? I also have an M1 Pro MacBook but no issues, running in an anaconda env.
I don't understand. Other presets are OK, but Lightning and LCM images are black. Could this be related to an installation problem?
@madsteed not necessarily, just making sure the installation is fine. Are the preview images fine while rendering and is only the final image black? This would hint at VAE processing issues on your side, which might be prevented by using --vae-in-fp16 or similar.
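A quick illustration of why a decode-stage problem shows up as an all-black final image (this is not Fooocus code, just a sketch of the failure mode): if NaN/inf values appear during VAE decoding, e.g. from a precision mismatch such as a float16 overflow, the NaN pixels get clamped to 0 when the image is saved.

```python
import numpy as np

# Illustrative only (not Fooocus internals): float16 overflows past ~65504,
# inf - inf yields NaN, and NaN pixels clamped to 0 come out black.
x = np.float16(80000.0)            # overflows float16's range -> inf
y = x - x                          # inf - inf -> nan
pixel = np.nan_to_num(y, nan=0.0)  # clamping nan to 0 => a black pixel
print(x, y, pixel)
```

This is only one possible mechanism; the suggestion above (`--vae-in-fp16` or similar) changes the VAE dtype, which is why it can make a difference on some setups.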
I have exactly the same issue on an M3 Pro MacBook, with Fooocus installed via anaconda following the installation instructions. The preview images are not fine either.
Everything else works as expected.
{
"prompt": "steam train",
"negative_prompt": "",
"prompt_expansion": "steam train, detailed, elegant, highly color saturated, intricate, full perfect, sharp focus, beautiful, epic composition, cinematic, amazing light, dynamic background, advanced, atmosphere, lively, magical, very inspirational, professional, decorated, stunning, inspired, creative, positive, trendy, cute, adorable, pretty, pure, coherent, fine detail, polished, complex, enhanced",
"styles": "['Fooocus V2', 'Fooocus Enhance', 'Fooocus Sharp']",
"performance": "Lightning",
"resolution": "(1152, 896)",
"guidance_scale": 1.0,
"sharpness": 0.0,
"adm_guidance": "(1.0, 1.0, 0.0)",
"base_model": "juggernautXL_v8Rundiffusion.safetensors",
"refiner_model": "None",
"refiner_switch": 1.0,
"adaptive_cfg": 1.0,
"sampler": "euler",
"scheduler": "sgm_uniform",
"seed": "5000134209646226646",
"lora_combined_6": "sdxl_lightning_4step_lora.safetensors : 1.0",
"metadata_scheme": false,
"version": "Fooocus v2.3.0"
}
Already up-to-date
Update succeeded.
[System ARGV] ['entry_with_update.py']
Python 3.10.14 (main, Mar 21 2024, 11:21:31) [Clang 14.0.6 ]
Fooocus version: 2.3.0
[Cleanup] Attempting to delete content of temp dir /var/folders/m4/w360gldx1xx7nb2v895_533w0000gn/T/fooocus
[Cleanup] Cleanup successful
Total VRAM 36864 MB, total RAM 36864 MB
Set vram state to: SHARED
Always offload VRAM
Device: mps
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --attention-split
Refiner unloaded.
Running on local URL: http://127.0.0.1:7865
To create a public link, set `share=True` in `launch()`.
model_type EPS
UNet ADM Dimension 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: /Users/Shared/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/Users/Shared/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [/Users/Shared/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/Users/Shared/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cpu, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
Started worker with PID 37397
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
Loaded preset: /Users/Shared/Fooocus/presets/lightning.json
Enter Lightning mode.
[Fooocus] Downloading Lightning components ...
[Parameters] Adaptive CFG = 1.0
[Parameters] Sharpness = 0.0
[Parameters] ControlNet Softness = 0.25
[Parameters] ADM Scale = 1.0 : 1.0 : 0.0
[Parameters] CFG = 1.0
[Parameters] Seed = 5000134209646226646
[Parameters] Sampler = euler - sgm_uniform
[Parameters] Steps = 4 - 4
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
Request to load LoRAs [['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0], ('sdxl_lightning_4step_lora.safetensors', 1.0)] for model [/Users/Shared/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [/Users/Shared/Fooocus/models/loras/sdxl_lightning_4step_lora.safetensors] for UNet [/Users/Shared/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 1.0.
Requested to load SDXLClipModel
Loading 1 new model
unload clone 1
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] steam train, detailed, elegant, highly color saturated, intricate, full perfect, sharp focus, beautiful, epic composition, cinematic, amazing light, dynamic background, advanced, atmosphere, lively, magical, very inspirational, professional, decorated, stunning, inspired, creative, positive, trendy, cute, adorable, pretty, pure, coherent, fine detail, polished, complex, enhanced
[Fooocus] Encoding positive #1 ...
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (896, 1152)
Preparation time: 2.18 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.6951494812965393, sigma_max = 14.614640235900879
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 9.85 seconds
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
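The tokenizers warning above is harmless here, but it can be silenced exactly as the log suggests; a minimal sketch (the variable must be set before `tokenizers`/`transformers` is imported and before any fork):

```python
import os

# "false" disables tokenizer parallelism and silences the fork warning.
os.environ["TOKENIZERS_PARALLELISM"] = "false"
print(os.environ["TOKENIZERS_PARALLELISM"])
```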
100%|█████████████████████████████████████████████| 4/4 [00:11<00:00, 2.99s/it]
Requested to load AutoencoderKL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 1.13 seconds
Image generated with private log at: /Users/Shared/Fooocus/outputs/2024-03-23/log.html
Generating and saving time: 25.87 seconds
Total time: 28.08 seconds
Request to load LoRAs [['sdxl_lightning_4step_lora.safetensors', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0], ('sdxl_lightning_4step_lora.safetensors', 1.0)] for model [/Users/xx/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors].
@madsteed Just double-checked: please don't select the Lightning LoRA manually, as it is applied automatically. It seems you selected the LoRA manually *and* used Lightning performance => only use Lightning performance; there is no need to add the LoRA a second time.
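To see why the duplicate matters (hypothetical numbers, just to show the effect): merging a LoRA adds `scale * delta` to the base weights, so requesting the same Lightning LoRA twice at weight 1.0, once from the manual slot and once from the preset as in the log above, shifts the weights twice as far as intended.

```python
import numpy as np

# Made-up base weights and LoRA delta, purely illustrative.
base = np.array([0.50, -0.20])
delta = np.array([0.10, 0.30])

once = base + 1.0 * delta        # what the Lightning preset intends
twice = once + 1.0 * delta       # duplicate application from manual selection
print(once, twice)               # the second application doubles the offset
```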
Sorry, on my side I still have trouble using the LCM or Lightning settings. Would you mind sharing a metadata/parameter set that works for you, so that I can try it? Thanks a lot.
Checklist
What happened?
With the Lightning or LCM preset and default parameters, the resulting images are all black.
Steps to reproduce the problem
no
What should have happened?
no
What browsers do you use to access Fooocus?
Google Chrome
Where are you running Fooocus?
None
What operating system are you using?
macOS (Apple M1 Pro)
Console logs
Additional information
No response