Tencent / HunyuanDiT

Hunyuan-DiT : A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Understanding
https://dit.hunyuan.tencent.com/

RuntimeError: mat1 and mat2 shapes cannot be multiplied (512x768 and 2048x8192) #61

Open jeffreyrobeson opened 6 months ago

jeffreyrobeson commented 6 months ago

Thanks for your error report and we appreciate it a lot.

Checklist

2024-05-24 16:51:43.726 | INFO  | hydit.inference:predict:296 - Input (height, width) = (1024, 1024)
2024-05-24 16:51:43.726 | INFO  | hydit.inference:predict:301 - Align to 16: (height, width) = (1024, 1024)
2024-05-24 16:51:43.728 | DEBUG | hydit.inference:predict:347 -
    prompt: 一只可爱的猫
    enhanced prompt: None
    seed: 1
    (height, width): (1024, 1024)
    negative_prompt: 错误的眼睛,糟糕的人脸,毁容,糟糕的艺术,变形,多余的肢体,模糊的颜色,模糊,重复,病态,残缺,
    batch_size: 1
    guidance_scale: 6
    infer_steps: 100
    image_meta_size: [1024, 1024, 1024, 1024, 0, 0]

C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\transformers\models\bert\modeling_bert.py:435: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
  attn_output = torch.nn.functional.scaled_dot_product_attention(
  0%|          | 0/100 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\gradio\queueing.py", line 527, in process_events
    response = await route_utils.call_process_api(
  File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\gradio\route_utils.py", line 270, in call_process_api
    output = await app.get_blocks().process_api(
  File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\gradio\blocks.py", line 1887, in process_api
    result = await self.call_function(
  File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\gradio\blocks.py", line 1472, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\anyio\_backends\_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\gradio\utils.py", line 808, in wrapper
    response = f(*args, **kwargs)
  File "E:\software\HunyuanDiT\app\hydit_app.py", line 50, in infer
    results = gen.predict(prompt,
  File "E:\software\HunyuanDiT\hydit\inference.py", line 367, in predict
    samples = self.pipeline(
  File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\software\HunyuanDiT\hydit\diffusion\pipeline.py", line 770, in __call__
    noise_pred = self.unet(
  File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\software\HunyuanDiT\hydit\modules\models.py", line 296, in forward
    text_states_t5 = self.mlp_t5(text_states_t5.view(-1, c_t5))
  File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
    input = module(input)
  File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Administrator\pinokio\bin\miniconda\lib\site-packages\torch\nn\modules\linear.py", line 116, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (512x768 and 2048x8192)
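The last frames point at mlp_t5 in hydit/modules/models.py: the projection layer was built for 2048-dim mT5 embeddings (mat2 is 2048x8192), while the loaded text encoder emits 768-dim ones (mat1 is 512x768). A minimal sketch, independent of HunyuanDiT and using only the sizes from the error message, reproduces the same failure:

```python
import torch
import torch.nn as nn

# Projection width taken from the error message: weight stored as (8192, 2048),
# applied as x @ W.T, so mat2 is reported as 2048x8192.
mlp_t5_in = nn.Linear(2048, 8192)

ok = mlp_t5_in(torch.randn(512, 2048))   # 2048-wide embeddings: works, output is (512, 8192)

try:
    mlp_t5_in(torch.randn(512, 768))     # 768-wide embeddings (base-sized mT5): fails
except RuntimeError as err:
    print(err)  # mat1 and mat2 shapes cannot be multiplied (512x768 and 2048x8192)
```

In other words, the DiT checkpoint and the text encoder disagree on the embedding width, which is why swapping in the matching mT5 weights (see the comment below) makes the error go away.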

  1. I have searched related issues but cannot get the expected help.
  2. The bug has not been fixed in the latest version.

Describe the bug

A clear and concise description of what the bug is.

Reproduction

  1. What command or script did you run?
     A placeholder for the command.
  2. Did you make any modifications to the code or config? Do you understand what you modified?
  3. What dataset did you use?

Environment

  1. Please run python utils/collect_env.py to collect necessary environment information and paste it here.
  2. You may add additional information that may help locate the problem, such as
    • How you installed PyTorch [e.g., pip, conda, source]
    • Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)
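If utils/collect_env.py cannot be run for some reason, the same kind of information can be gathered by hand. A minimal fallback sketch (not the repo's script) that prints the usual suspects:

```python
import platform

import torch
import transformers

# Core environment details most useful for triaging shape/compatibility issues.
print("Python        :", platform.python_version())
print("PyTorch       :", torch.__version__)
print("CUDA (build)  :", torch.version.cuda)
print("GPU available :", torch.cuda.is_available())
print("transformers  :", transformers.__version__)
```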

Error traceback

If applicable, paste the error traceback here.

A placeholder for the traceback.

Bug fix

If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here; that would be much appreciated!

jeffreyrobeson commented 6 months ago

Replaced the wrong model, mt5-base, with the correct mt5 model; problem solved.
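For anyone hitting the same error, a quick way to confirm which encoder a local install has picked up is to read its config and compare the hidden size against the 2048 the DiT expects. A sketch, assuming the mT5 weights sit under ckpts/t2i/mt5 (the path may differ on your install):

```python
# Path is an assumption based on the usual HunyuanDiT checkpoint layout; adjust it
# to wherever the mT5 encoder was downloaded on your machine.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("ckpts/t2i/mt5")
print(cfg.d_model)  # 2048 matches the DiT; 768 means a base-sized mT5 was fetched instead
```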

jeffreyrobeson commented 6 months ago

For a guide to deploying Tencent Hunyuan-DiT locally on Windows 10, see the detailed write-up at https://www.toutiao.com/article/7373284530553930259/.