After hitting an error, I went through the latest diffusers commits; this PR appears to break LoRAManager with the following error:
RuntimeError: The size of tensor a (320) must match the size of tensor b (2560) at non-singleton dimension 2
Here is the full trace:
```
Traceback (most recent call last):
  File "/app/main.py", line 228, in <module>
    result = self.pipe(**kwargs)
  File "/app/Pipeline.py", line 1229, in text2img
    return self.__call__(
  File "/root/.pyenv/versions/3.10.11/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/app/Pipeline.py", line 1105, in __call__
    noise_pred = self.unet(
  File "/root/.pyenv/versions/3.10.11/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.pyenv/versions/3.10.11/lib/python3.10/site-packages/diffusers/models/unet_2d_condition.py", line 670, in forward
    sample, res_samples = downsample_block(
  File "/root/.pyenv/versions/3.10.11/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.pyenv/versions/3.10.11/lib/python3.10/site-packages/diffusers/models/unet_2d_blocks.py", line 856, in forward
    hidden_states = attn(
  File "/root/.pyenv/versions/3.10.11/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.pyenv/versions/3.10.11/lib/python3.10/site-packages/diffusers/models/transformer_2d.py", line 265, in forward
    hidden_states = block(
  File "/root/.pyenv/versions/3.10.11/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.pyenv/versions/3.10.11/lib/python3.10/site-packages/diffusers/models/attention.py", line 312, in forward
    # prepare attention mask here
  File "/root/.pyenv/versions/3.10.11/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.pyenv/versions/3.10.11/lib/python3.10/site-packages/diffusers/models/attention_processor.py", line 223, in forward
    return self.processor(
  File "/root/.pyenv/versions/3.10.11/lib/python3.10/site-packages/diffusers/models/attention_processor.py", line 510, in __call__
    query = attn.to_q(hidden_states)
  File "/root/.pyenv/versions/3.10.11/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/app/python-inference/src/lib/lora/lora.py", line 55, in forward
    self.linear(input)
RuntimeError: The size of tensor a (320) must match the size of tensor b (2560) at non-singleton dimension 2
```
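
For context, the `lora.py` wrapper in the trace follows the usual LoRA pattern of adding a low-rank delta on top of the frozen base projection. I don't have the exact LoRAManager code, so the sketch below is hypothetical (the class name, `rank`, and `scale` are placeholders), but it shows one way a 320-vs-2560 mismatch at `to_q` can surface: the base projection output and the loaded up-projection output end up with different widths, and the addition fails.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a LoRA-wrapped projection in the spirit of the
# lora.py wrapper from the trace (names, rank and scale are placeholders).
class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, scale: float = 1.0):
        super().__init__()
        self.linear = base  # frozen base projection, e.g. attn.to_q
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        self.scale = scale

    def forward(self, input: torch.Tensor) -> torch.Tensor:
        # The base output and the low-rank delta must have the same width;
        # if the LoRA weights end up mapped against a layer of a different
        # size, this addition raises the size-mismatch RuntimeError.
        return self.linear(input) + self.scale * self.up(self.down(input))


# Reproducing the same error: to_q of the first down block is 320-wide, but a
# 2560-wide up-projection (e.g. weights mapped to the wrong layer) breaks it.
base = nn.Linear(320, 320, bias=False)
lora = LoRALinear(base)
lora.up = nn.Linear(lora.down.out_features, 2560, bias=False)
hidden_states = torch.randn(1, 4096, 320)  # (batch, tokens, channels)
lora(hidden_states)
# RuntimeError: The size of tensor a (320) must match the size of tensor b (2560)
# at non-singleton dimension 2
```

Whether the real failure comes from the weight mapping or from what the new attention processor feeds into `to_q`, I can't tell from the trace alone, but it only started happening after this PR.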