xhinker / sd_embed

Generate long weighted prompt embeddings for Stable Diffusion
Apache License 2.0

SD3, use_t5_encoder=False, embedding size mismatch #1

Closed: Daniel-SicSo-Edinburgh closed this issue 1 month ago

Daniel-SicSo-Edinburgh commented 1 month ago

Describe the bug

get_weighted_text_embeddings_sd3(model, prompt=prompt, neg_prompt=neg_prompt, use_t5_encoder=False) returns prompt_embeds and negative_prompt_embeds with shape [1, X, 2048], but the SD3 pipeline in diffusers expects [1, X, 4096]. I assume the difference is just padding for the missing T5 portion, but the embeddings need the latter shape to work with diffusers.
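
For context, a minimal sketch of the zero-padding I would expect (the helper name and the 4096 target width are assumptions based on the error below, not part of sd_embed's API):

import torch.nn.functional as F

def pad_to_joint_width(embeds, target_dim=4096):
    # Hypothetical workaround: right-pad the 2048-wide CLIP-only embeddings
    # with zeros up to the input width of the SD3 transformer's context_embedder.
    return F.pad(embeds, (0, target_dim - embeds.shape[-1]))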

Reproduction

import torch
from IPython.display import display

from diffusers import StableDiffusion3Pipeline
from sd_embed.embedding_funcs import get_weighted_text_embeddings_sd3


def use_report(prompt: str, neg_prompt: str, file_path: str):
    # file_path points to sd3_medium_incl_clips.safetensors (checkpoint without T5)
    pipe = StableDiffusion3Pipeline.from_single_file(
        file_path, torch_dtype=torch.float16, text_encoder_3=None
    ).to("cuda")

    # Build weighted prompt embeddings from the two CLIP encoders only
    (
        prompt_embeds,
        prompt_neg_embeds,
        pooled_prompt_embeds,
        negative_pooled_prompt_embeds,
    ) = get_weighted_text_embeddings_sd3(
        pipe,
        prompt=prompt,
        neg_prompt=neg_prompt,
        use_t5_encoder=False,
    )

    image = pipe(
        prompt_embeds=prompt_embeds,
        negative_prompt_embeds=prompt_neg_embeds,
        pooled_prompt_embeds=pooled_prompt_embeds,
        negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
        num_inference_steps=30,
        height=1024,
        width=1024 + 512,
        guidance_scale=4.0,
        generator=torch.Generator("cuda").manual_seed(2),
    ).images[0]
    display(image)

Logs

    image = pipe(
  File "/home/sicso/.virtualenvs/TensorRTConversion/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/sicso/.virtualenvs/TensorRTConversion/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py", line 846, in __call__
    noise_pred = self.transformer(
  File "/home/sicso/.virtualenvs/TensorRTConversion/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/sicso/.virtualenvs/TensorRTConversion/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/sicso/.virtualenvs/TensorRTConversion/lib/python3.10/site-packages/diffusers/models/transformers/transformer_sd3.py", line 297, in forward
    encoder_hidden_states = self.context_embedder(encoder_hidden_states)
  File "/home/sicso/.virtualenvs/TensorRTConversion/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/sicso/.virtualenvs/TensorRTConversion/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/sicso/.virtualenvs/TensorRTConversion/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 116, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (308x2048 and 4096x1536)
xhinker commented 1 month ago

Added support for the SD3 pipeline without T5; see the sample here: https://github.com/xhinker/sd_embed/blob/main/samples/lpw_sd3_wo_t5.py

Update sd_embed to pick it up:

pip install -U git+https://github.com/xhinker/sd_embed.git@main
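
As a quick check after upgrading (the 4096 width comes from the error above; this snippet is illustrative and reuses the pipe, prompt, and neg_prompt from the reproduction, not the linked sample):

# With the updated sd_embed, the CLIP-only embeddings should come back
# padded to the width the SD3 transformer expects.
(
    prompt_embeds,
    prompt_neg_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = get_weighted_text_embeddings_sd3(
    pipe, prompt=prompt, neg_prompt=neg_prompt, use_t5_encoder=False
)
assert prompt_embeds.shape[-1] == 4096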