mashb1t / Fooocus

Focus even better on prompting and generating
GNU General Public License v3.0

[Bug]: There is no lcm realtime canvas painting tab #11

Closed: apximax closed this issue 7 months ago

apximax commented 7 months ago

Prerequisites

Describe the problem

Hey! Thank you for your work and this fork! My only question is that I can't find the tab for LCM realtime canvas painting. You list this feature among the enhancements/fixed bugs, but there is no example with screenshots (similar to the ones you have for the metadata feature or for generating a mask for inpainting).

So do I need to take some extra step after installing your fork to enable this feature? Thanks in advance!

Full console log output

D:\Fooocus_mashb1t_win64_2-1-864>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --listen --always-normal-vram --theme dark --preview-option fast
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py', '--listen', '--always-normal-vram', '--theme', 'dark', '--preview-option', 'fast']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.864
Total VRAM 8192 MB, total RAM 14188 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 3070 Laptop GPU : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
Refiner unloaded.
Running on local URL:  http://0.0.0.0:7865
model_type EPS
UNet ADM Dimension 2816

To create a public link, set `share=True` in `launch()`.
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Base model loaded: D:\Fooocus_mashb1t_win64_2-1-864\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [D:\Fooocus_mashb1t_win64_2-1-864\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [D:\Fooocus_mashb1t_win64_2-1-864\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [D:\Fooocus_mashb1t_win64_2-1-864\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.85 seconds
Started worker with PID 38792
App started successful. Use the app with http://localhost:7865/ or 0.0.0.0:7865

Version

Fooocus 2.1.864

Where are you running Fooocus?

Locally

Operating System

Windows 11

What browsers are you seeing the problem on?

Chrome

mashb1t commented 7 months ago

Thanks for the report. This is indeed correct: the branch https://github.com/mashb1t/Fooocus/tree/feature/add-lcm-realtime-canvas has never been merged into develop/main because it did not reach its performance goals. It was slow (3 s per image) compared to alternatives that use direct processing with no queue between the frontend and the GenAI backend (0.2–0.4 s). The branch is stale but fully functional. Feel free to merge it into your own fork and try it, but you might be better off using StreamDiffusion. Maybe deleting the feature would be better overall... What do you think?
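For anyone who wants to try the stale branch, a minimal sketch of merging it into a fork could look like the following. Note that `<your-username>` is a placeholder for your own GitHub account, and this assumes you have already forked the repository; resolve any merge conflicts manually if the branch has diverged from your base.

```shell
# Clone your own fork (replace <your-username> with your GitHub account)
git clone https://github.com/<your-username>/Fooocus.git
cd Fooocus

# Add mashb1t's repository as a second remote and fetch the stale feature branch
git remote add mashb1t https://github.com/mashb1t/Fooocus.git
git fetch mashb1t feature/add-lcm-realtime-canvas

# Merge the branch into your current branch (e.g. main)
git merge mashb1t/feature/add-lcm-realtime-canvas
```

After a successful merge, relaunch Fooocus from the merged checkout as usual.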