Open Szakbal2001 opened 3 months ago
Definitely a must-have!
https://huggingface.co/stabilityai/stable-diffusion-3-medium
From some initial tests it seems like SD3 can follow detailed instructions in prompts a lot closer than in SDXL, especially around adding specific colors to items in the scene (e.g. pieces of clothing). This could work well with Fooocus styles, given that there aren't many Loras available for SD3 yet.
Please find more information about SD3 support in:
@UXVirtual already hinted at some difficulties: SD3 uses a completely different pipeline (mainly the encoders, sampling preparation and conditioning), so it's comparable to adding SD 1.5 support plus some additional steps, which isn't done quickly and needs patching of ldm_patched first (started in the mentioned PR).
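To illustrate why the conditioning path differs so much from SDXL: SD3 combines three text encoders (CLIP-L, CLIP-G and T5-XXL) instead of SDXL's two CLIPs. A rough sketch of how the reference implementation assembles the conditioning is below; the dimensions and the helper function are assumptions for illustration, not Fooocus or ldm_patched code.

```python
import numpy as np

# Assumed encoder widths (from the SD3 reference implementation):
SEQ = 77            # CLIP token sequence length
CLIP_L_DIM = 768    # CLIP-L/14 hidden size
CLIP_G_DIM = 1280   # OpenCLIP bigG hidden size
T5_DIM = 4096       # T5-XXL hidden size

def assemble_sd3_conditioning(clip_l, clip_g, t5, clip_l_pooled, clip_g_pooled):
    """Hypothetical sketch of SD3-style conditioning assembly:
    1. join CLIP-L and CLIP-G per-token features channel-wise,
    2. zero-pad that joint embedding up to the T5 width,
    3. concatenate with the T5 sequence along the token axis.
    The pooled CLIP vectors are joined separately for the vector/timestep path."""
    clip_joint = np.concatenate([clip_l, clip_g], axis=-1)      # (SEQ, 2048)
    pad = np.zeros((clip_joint.shape[0], T5_DIM - clip_joint.shape[1]))
    clip_padded = np.concatenate([clip_joint, pad], axis=-1)    # (SEQ, 4096)
    context = np.concatenate([clip_padded, t5], axis=0)         # (SEQ + t5_len, 4096)
    pooled = np.concatenate([clip_l_pooled, clip_g_pooled], axis=-1)  # (2048,)
    return context, pooled
```

None of SDXL's conditioning code produces this shape of context, which is why the existing ldm_patched encoder and sampling-preparation paths need patching rather than reuse.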
As mentioned in https://github.com/lllyasviel/Fooocus/discussions/2336#discussioncomment-9754697, I'm currently working on another feature.
Feel free to support in solving the issues with the PR, your help is welcome. @lllyasviel same for you :)
SD3 works in Ruined Fooocus
Check some results https://github.com/runew0lf/RuinedFooocus/issues/145#issue-2354712594
RuinedFooocus has an option to directly use ComfyUI behind the scenes, without deep integration with its other features. Cool, but... it's not the Fooocus way, imho.
@IPv6 yes correct, there are various overrides/optimizations/extensions for ComfyUI classes and functions Fooocus uses.
> SD3 works in Ruined Fooocus
> Check some results runew0lf#145 (comment)
Can confirm the observations. I'm not impressed with SD3. I've been playing around with SD3 on StableSwarm for a while. Image quality is good and rich in detail, but human anatomy is awful, a big step back imho. I'll stick to SDXL unless trained models fix these shortcomings. Not to mention the licensing issues and censorship you can read about on civitai etc.
Is there an existing issue for this?
What would your feature do?
Generating image with the new SD3 model
Proposed workflow
Work as usual
Additional information
No response