Open YorkieDev opened 5 months ago
Please, StableSwarmUI with ComfyUI is a disaster. I don't want it; I want my Forge 😭
Sadly, new models never work straight away. It might be a while till it works on Forge; I don't think Forge has even been updated in 4 months.
@huchenlei could help with this on the dev branch when he has time.
After trying SD3 in StableSwarm, it's not even worth the trouble. SD3 is so bad it's unusable.
It has no priority, to say the least. Except for the text, the model does not live up to an SDXL Turbo model. It is so slow you might as well stick to SDXL and use GIMP for your text.
https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/801 Well, I'd suggest using the A1111 WebUI or Fooocus, or even ComfyUI.
@huchenlei
Again, thank you @lllyasviel for the countless cool hours I had with Forge. I am so sorry that Stability AI politics are now attacking web-ui, web-ui-forge, and IMHO even Civitai. Obviously they are trying to gain more control and probably even a closed community, as the ComfyUI Workflows site shows. It will become commercial and closed; I have no doubt about that.
In the meantime, checking my options, I decided to leave Forge and start working with ComfyUI. The reason is the direction web-ui and web-ui-forge decided to take; I want to concentrate on creating. For now I created my first workflow for ComfyUI. I link it here for people who want to give it a try, because the available workflows are a very unreadable mess. ComfyUI is IMHO a very badly designed concept with its nodes: the connections make for an unreadable flow unless you lay it out linearly, which this flow does. Do read the credits added to it; that's the person who made my entry into AI great.
I've been privately testing SD3 for a bit with SD.Next/Forge/A1111 (not Comfy), and you can 100% merge SD1.5 and SDXL models with SD3 Medium and the SD3 text encoders. You have to put SD3 in the primary slot and the SD1.5 or SDXL model in the secondary slot, or put the SD1.5/XL model in the primary slot and the T5E/T5XXL text encoders in the secondary slot. Done that way, the images I get are about 75% accurate to the prompt with minimal errors; there are 1-2 hiccups, but otherwise the images come out almost perfect.
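The slot behavior described above can be illustrated with a generic sketch of weighted state-dict merging. This is a hypothetical stand-in, not the actual Forge/A1111 checkpoint-merger code: `merge_state_dicts` and the dummy weights are assumptions for illustration. The key point it shows is why the primary slot matters: keys that exist only in the primary model are kept as-is, while keys shared by both models are interpolated.

```python
def merge_state_dicts(primary, secondary, alpha=0.5):
    """Linearly interpolate overlapping keys: (1 - alpha) * primary + alpha * secondary.

    Keys present only in the primary checkpoint survive unchanged, which is
    why swapping which model goes in the primary slot changes the result.
    """
    merged = {}
    for key, value in primary.items():
        if key in secondary:
            merged[key] = (1.0 - alpha) * value + alpha * secondary[key]
        else:
            # Key missing from the secondary model: keep the primary weight.
            merged[key] = value
    return merged


# Tiny demo with scalar floats standing in for real weight tensors.
primary = {"shared.weight": 1.0, "primary_only.weight": 2.0}
secondary = {"shared.weight": 0.0}

merged = merge_state_dicts(primary, secondary, alpha=0.5)
print(merged["shared.weight"])        # interpolated: 0.5
print(merged["primary_only.weight"])  # kept from primary: 2.0
```

A real merger interpolates per-tensor (and must skip shape-mismatched keys), but the slot asymmetry works the same way.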
Is there an existing issue for this?
What would your feature do ?
This feature request would add Stable Diffusion 3 Medium support:
https://huggingface.co/stabilityai/stable-diffusion-3-medium
Proposed workflow
Additional information
No response