-
Hello,
I am encountering an issue when using the T2I adapter with Stable Diffusion in ComfyUI. Despite updating ComfyUI to the latest version, I still face the following error:
'NoneType' object…
-
### Describe the bug
Hello,
Using [StableDiffusionXLControlNetImg2ImgPipeline](https://huggingface.co/docs/diffusers/en/api/pipelines/controlnet_sdxl#diffusers.StableDiffusionXLControlNetImg2Img…
-
Adding a list of TODOs:
- [x] MultiControlNet
- [x] resize input image(s) to desired output size
- [x] verify correctness of pipeline structure -- are we using the right unet weights?
…
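For the "resize input image(s) to desired output size" item, here is a minimal sketch of the dimension math only (a hypothetical helper, not part of the pipeline), assuming the usual constraint that SD(XL) pixel resolutions be multiples of 8 since latents are 1/8 scale:

```python
def fit_to_output(w: int, h: int, target_w: int, target_h: int, multiple: int = 8):
    """Scale (w, h) to fit inside (target_w, target_h) while preserving
    aspect ratio, then snap each side down to a multiple of `multiple`
    (SD/SDXL latents are 1/8 of the pixel resolution)."""
    scale = min(target_w / w, target_h / h)
    new_w = max(multiple, int(w * scale) // multiple * multiple)
    new_h = max(multiple, int(h * scale) // multiple * multiple)
    return new_w, new_h

# e.g. a 640x480 input fit into a 1024x1024 budget -> (1024, 768)
```

The actual resize (e.g. `PIL.Image.resize` with LANCZOS) would then use the returned dimensions.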
-
How about DMD2 on img2img? Other methods like Hyper-SD and SDXL-Lightning don't perform well on img2img tasks, producing blurry images.
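Part of why few-step distilled models can struggle on img2img may be the timestep math: with strength `s` and `N` scheduler steps, denoising starts `s` of the way into the schedule, so only about `s * N` steps actually run. A rough sketch of that logic (modeled loosely on the timestep selection in diffusers img2img pipelines, simplified here):

```python
def img2img_steps(num_inference_steps: int, strength: float) -> int:
    """Return how many denoising steps actually run in an img2img call.
    With strength s, the schedule is entered s of the way in, so a
    4-step distilled model at strength 0.5 only takes 2 real steps."""
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return num_inference_steps - t_start

# img2img_steps(4, 0.5) -> 2: half of an already tiny step budget
```

At typical img2img strengths, a 4-step model is left with 1–2 steps, which would plausibly explain the blur.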
-
For example: https://huggingface.co/SargeZT/t2i-adapter-sdxl-multi
Would be great to get this working, especially since T2I-Adapter models are much smaller (and I assume faster) than ControlNets.
-
Hi,
T2I-Adapter is one of the most important projects for SD, in my opinion. Great work!
Are you planning to have SDXL support as well?
-
I was wondering if there are any keypose preprocessor nodes? With OpenPose, there are pre-processors that allow me to extract the stick-figure image from a photo of a person, and then apply that as co…
-
Hi, really awesome work! I have read your paper and noticed that in Table 1 you only compare your method with TediGAN. But as you mentioned in your related work, there are two other better training requ…
-
Thanks for your pretraining!
I want to use your T2I-canny pretrained ckpt, but the current adapter.py seems to mismatch the ckpt:
>The config attributes {'name': 'canny'} were passed to T2IAdapter…
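That warning usually means the checkpoint's config carries keys (here `'name'`) that the current `T2IAdapter` class no longer accepts. One workaround is to drop the stale keys before constructing the adapter. A minimal sketch with a hypothetical helper (not part of diffusers), assuming you can load the config as a plain dict:

```python
def filter_config(config: dict, allowed: set) -> dict:
    """Drop config keys the current T2IAdapter class no longer accepts,
    e.g. a stale 'name' field left over from an older checkpoint.
    Hypothetical helper, not part of diffusers."""
    return {k: v for k, v in config.items() if k in allowed}

# e.g. filter_config({"name": "canny", "channels": 320}, {"channels"})
# keeps only {"channels": 320}
```

The filtered dict can then be passed to the adapter constructor; alternatively, updating diffusers may resolve the mismatch if the config format changed between versions.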
-
Thanks for your great work! I notice that in the latest version you use ControlNet as the adapter for palette control. I have several questions about it:
1. In the case of the adapter, is controlnet …