Open sashasubbbb opened 8 months ago
@lllyasviel What is the best way to run two UNets side by side now? The core logic of X-Adapter seems to be mapping SD1.5 hidden states to SDXL hidden states (adding them to the original SDXL hidden states) in the decoder part of the UNet.
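To make the description above concrete, here is a minimal numpy sketch of that idea: decoder hidden states from the SD1.5 UNet are projected into the SDXL feature space and added residually to the corresponding SDXL decoder hidden states. The shapes and the 1x1 "mapper" weight are illustrative assumptions, not X-Adapter's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical decoder feature maps (batch, channels, height, width)
sd15_hidden = rng.standard_normal((1, 320, 64, 64))  # SD1.5 decoder states
sdxl_hidden = rng.standard_normal((1, 640, 64, 64))  # SDXL decoder states

# Toy mapper: a 1x1 convolution expressed as a per-pixel channel matmul.
# In the real adapter this projection is learned; here it is random.
W = rng.standard_normal((640, 320)) * 0.01

def map_and_add(h15, hxl, W):
    """Project SD1.5 states into SDXL channel space, then residual-add."""
    mapped = np.einsum("oc,bchw->bohw", W, h15)  # (1, 320, ...) -> (1, 640, ...)
    return hxl + mapped

fused = map_and_add(sd15_hidden, sdxl_hidden, W)
assert fused.shape == sdxl_hidden.shape
```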
Awesome, following this topic.
@huchenlei Do they absolutely have to run side by side, or can one be loaded and unloaded, then the other?
I ask because it doesn't run as-is on my 8 GB of VRAM; it OOM'ed during the second set of generation iterations. From the author's description/tutorial here, I was guessing it first generates with SD1.5 and then with SDXL, but I could be wrong.
I was able to make it work, just for testing, by changing every CUDA reference to CPU, at a very slow ~30 minutes per image. Using `inference.py --plugin_type "lora"` with `--adapter_guidance_start_list 0.7` and an "old pencil sketch" style LoRA, I get a decent effect.
I need this... right now
+1
According to my testing, the X-Adapter result is similar to running HR fix with the SD1.5 model doing the low-res pass and the SDXL model doing the highres pass.
See https://github.com/Mikubill/sd-webui-controlnet/issues/2652#issuecomment-1972292154.
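That two-pass comparison can be sketched as follows: an SD1.5 low-res pass produces a latent, which is upscaled before an SDXL highres pass. The nearest-neighbor latent upscale and the specific sizes are illustrative assumptions, not the webui's actual HR-fix implementation.

```python
import numpy as np

def upscale_latent(latent, scale=2):
    """Nearest-neighbor upscale of a (B, C, H, W) latent tensor."""
    return latent.repeat(scale, axis=2).repeat(scale, axis=3)

# A 512x512 image corresponds to a 64x64 latent (8x VAE downscaling).
low_res_latent = np.zeros((1, 4, 64, 64))   # output of the SD1.5 pass

# Upscale 2x, then the SDXL pass would refine this at 1024x1024.
hi_res_latent = upscale_latent(low_res_latent, scale=2)
assert hi_res_latent.shape == (1, 4, 128, 128)
```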
There is also SD-Latent-Interposer as an alternative, and the dev says an A1111 version could be implemented.
Is there an existing issue for this?
What would your feature do?
https://github.com/showlab/X-Adapter Code for X-Adapter is finally out.
X-Adapter enables plugins pretrained on an older base model (e.g., SD1.5) to work directly with an upgraded model (e.g., SDXL) without further retraining.
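For context on why such plugins don't transfer directly: a LoRA delta trained against SD1.5's weight shapes cannot be applied to SDXL's, since (among other differences) the cross-attention context dimension differs (768 for SD1.5's text encoder vs 2048 for SDXL's concatenated encoders). A minimal sketch, with illustrative layer shapes:

```python
import numpy as np

# Illustrative cross-attention to_k weights (shapes are assumptions
# picked to show the mismatch, not exact layer dimensions).
sd15_to_k = np.zeros((320, 768))    # SD1.5: text context dim 768
sdxl_to_k = np.zeros((640, 2048))   # SDXL: concatenated context dim 2048

# A rank-4 LoRA trained against the SD1.5 layer.
lora_down = np.zeros((4, 768))
lora_up = np.zeros((320, 4))

delta = lora_up @ lora_down         # (320, 768) weight update
assert delta.shape == sd15_to_k.shape      # applies to SD1.5 ...
assert delta.shape != sdxl_to_k.shape      # ... but not to SDXL
```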
Is it possible to implement it into Forge?
Proposed workflow
Additional information
No response