zymox closed this issue 9 months ago.
Figured it out.
I was using a different CLIP Vision model; changing it to the one mentioned in the install instructions (https://huggingface.co/lambdalabs/sd-image-variations-diffusers/tree/main/image_encoder) fixed it.
I also faced the same issue, but this doesn't solve it for me. Where should I put the model?
I am facing the same error, "mat1 and mat2 shapes cannot be multiplied (2x1024 and 768x320)". I followed your instructions and made sure the correct model was downloaded, but I still receive the same error message.
Was anyone else able to resolve this?
Noob here. Where do we even put this?
That's because you're using an SDXL checkpoint model, but the ControlNet uses SD1.5's.
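For context, here is a minimal sketch (plain Python, with a hypothetical helper name) of what the error message is reporting: the mismatched CLIP Vision model evidently produces 1024-dim embeddings (the 2x1024 matrix), while the SD1.5 cross-attention weight in the message expects 768-dim input (the 768x320 matrix), so the inner dimensions don't line up.

```python
def matmul_shape(a, b):
    """Return the result shape of multiplying matrices of shapes a and b,
    or raise the same kind of error PyTorch reports when they don't align.
    (Illustrative helper, not part of ComfyUI or PyTorch.)"""
    rows_a, cols_a = a
    rows_b, cols_b = b
    if cols_a != rows_b:
        raise ValueError(
            f"mat1 and mat2 shapes cannot be multiplied "
            f"({rows_a}x{cols_a} and {rows_b}x{cols_b})"
        )
    return (rows_a, cols_b)

# Embedding from an SD1.5-compatible CLIP Vision model: inner dims match.
print(matmul_shape((2, 768), (768, 320)))   # (2, 320)

# Embedding from an incompatible encoder: 1024 != 768, so it fails
# with exactly the message quoted above.
try:
    matmul_shape((2, 1024), (768, 320))
except ValueError as e:
    print(e)
```

This is why swapping in the image encoder from the install instructions resolves it: that encoder emits embeddings with the dimensionality the SD1.5 layers were trained against.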
Trying to run the example workflow (with the provided example video + image), I get an error with the sampler.
No idea how to approach this ... any help?