Sundragon1993 opened this issue 6 days ago
Cc: @DN6.
If you're interested in contributing this support, we'd very much appreciate it. And of course, we'd be more than happy to help.
@sayakpaul cc: @DN6 Thank you for the reply; please check this pull request: https://github.com/huggingface/diffusers/pull/8672
Is your feature request related to a problem? Please describe.
The current AnimateDiffSDXLPipeline supports neither a single ControlNet nor multiple ControlNets. I've been working on this for several days by combining StableDiffusionXLControlNetAdapterPipeline and AnimateDiffControlNetPipeline from the community folder, but without success so far.
Describe the solution you'd like.
The idea is to extract a character's poses from a video and then, using a pose ControlNet for SDXL, condition AnimateDiffSDXL on that information to produce a new character video.
The AnimateDiffSDXLPipeline should be callable like this:
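A sketch of the proposed call, modeled on the existing AnimateDiffControlNetPipeline API. The `controlnet` constructor argument and the `conditioning_frames` / `controlnet_conditioning_scale` call arguments are the proposed additions and do not exist on AnimateDiffSDXLPipeline today; the checkpoint names are illustrative.

```python
# Proposed API sketch - AnimateDiffSDXLPipeline does not currently accept a
# ControlNet; argument names mirror AnimateDiffControlNetPipeline.
import torch
from diffusers import AnimateDiffSDXLPipeline, ControlNetModel, MotionAdapter

adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-sdxl-beta", torch_dtype=torch.float16
)
controlnet = ControlNetModel.from_pretrained(
    "thibaud/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16
)
pipe = AnimateDiffSDXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    motion_adapter=adapter,
    controlnet=controlnet,  # proposed: a single ControlNet, or a list for multi-ControlNet
    torch_dtype=torch.float16,
).to("cuda")

pose_frames = [...]  # per-frame pose images extracted from the source video (e.g. PIL images)

output = pipe(
    prompt="a character dancing, best quality",
    num_frames=16,
    conditioning_frames=pose_frames,  # proposed: one conditioning image per output frame
    controlnet_conditioning_scale=1.0,  # proposed: scalar, or a list for multi-ControlNet
)
```

With multi-ControlNet support, `controlnet` would accept a list of ControlNetModel instances and `conditioning_frames` a list of frame lists, matching how the SD 1.5 AnimateDiff ControlNet community pipeline handles it.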
Describe alternatives you've considered.
None yet.
Additional context.
There are shape mismatches when providing inputs to the ControlNet: