wty-ustc / HairCLIP

[CVPR 2022] HairCLIP: Design Your Hair by Text and Reference Image
GNU Lesser General Public License v2.1

Error when testing with two images #25

Closed · hello-lx closed this issue 2 years ago

hello-lx commented 2 years ago

Command used:

E:\Linux\XSpace\papers\HairCLIP\mapper>python scripts/inference.py --exp_dir=E:\Linux\XSpace\papers\HairCLIP\data\exp --checkpoint_path=F:\Dataset\CelebA\Data\hairclip.pt --latents_test_path=F:\Dataset\CelebA\Data\test_faces.pt --editing_type=color --input_type=image --hairstyle_description="hairstyle_list.txt" --color_ref_img_test_path=E:\Linux\XSpace\papers\HairCLIP\data\ref

The call x = clip_model.encode_image(masked_generated_renormed) in latent_mappers.py fails with the following error:

*** RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/torch/multimodal/model/multimodal_transformer/___torch_mangle_9591.py", line 19, in encode_image
    _0 = self.visual
    input = torch.to(image, torch.device("cuda:0"), 5, False, False, None)
    return (_0).forward(input, )


  def encode_text(self: __torch__.multimodal.model.multimodal_transformer.___torch_mangle_9591.Multimodal,
    input: Tensor) -> Tensor:
  File "code/__torch__/multimodal/model/multimodal_transformer.py", line 34, in forward
    x2 = torch.add(x1, torch.to(_4, 5, False, False, None), alpha=1)
    x3 = torch.permute((_3).forward(x2, ), [1, 0, 2])
    x4 = torch.permute((_2).forward(x3, ), [1, 0, 2])
                        ~~~~~~~~~~~ <--- HERE
    _15 = torch.slice(x4, 0, 0, 9223372036854775807, 1)
    x5 = torch.slice(torch.select(_15, 1, 0), 1, 0, 9223372036854775807, 1)
  File "code/__torch__/multimodal/model/multimodal_transformer/___torch_mangle_9477.py", line 8, in forward
  def forward(self: __torch__.multimodal.model.multimodal_transformer.___torch_mangle_9477.Transformer,
    x: Tensor) -> Tensor:
    return (self.resblocks).forward(x, )
            ~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
  def forward1(self: __torch__.multimodal.model.multimodal_transformer.___torch_mangle_9477.Transformer,
    x: Tensor) -> Tensor:
  File "code/__torch__/torch/nn/modules/container/___torch_mangle_9476.py", line 29, in forward
    _8 = getattr(self, "3")
    _9 = getattr(self, "2")
    _10 = (getattr(self, "1")).forward((getattr(self, "0")).forward(x, ), )
                                        ~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    _11 = (_7).forward((_8).forward((_9).forward(_10, ), ), )
    _12 = (_4).forward((_5).forward((_6).forward(_11, ), ), )
  File "code/__torch__/multimodal/model/multimodal_transformer/___torch_mangle_9376.py", line 13, in forward
    _0 = self.mlp
    _1 = self.ln_2
    _2 = (self.attn).forward((self.ln_1).forward(x, ), )
          ~~~~~~~~~~~~~~~~~~ <--- HERE
    x0 = torch.add(x, _2, alpha=1)
    x1 = torch.add(x0, (_0).forward((_1).forward(x0, ), ), alpha=1)
  File "code/__torch__/torch/nn/modules/activation/___torch_mangle_9369.py", line 38, in forward
    _16 = [-1, int(torch.mul(bsz, CONSTANTS.c0)), _8]
    v0 = torch.transpose(torch.view(_15, _16), 0, 1)
    attn_output_weights = torch.bmm(q2, torch.transpose(k0, 1, 2))
                          ~~~~~~~~~ <--- HERE
    input = torch.softmax(attn_output_weights, -1, None)
    attn_output_weights0 = torch.dropout(input, 0., True)

Traceback of TorchScript, original code (most recent call last):
/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py(4294): multi_head_attention_forward
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/activation.py(985): forward
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py(709): _slow_forward
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py(725): _call_impl
/root/workspace/multimodal-pytorch/multimodal/model/multimodal_transformer.py(45): attention
/root/workspace/multimodal-pytorch/multimodal/model/multimodal_transformer.py(48): forward
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py(709): _slow_forward
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py(725): _call_impl
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/container.py(117): forward
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py(709): _slow_forward
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py(725): _call_impl
/root/workspace/multimodal-pytorch/multimodal/model/multimodal_transformer.py(63): forward
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py(709): _slow_forward
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py(725): _call_impl
/root/workspace/multimodal-pytorch/multimodal/model/multimodal_transformer.py(93): forward
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py(709): _slow_forward
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py(725): _call_impl
/root/workspace/multimodal-pytorch/multimodal/model/multimodal_transformer.py(221): visual_forward
/opt/conda/lib/python3.7/site-packages/torch/jit/_trace.py(940): trace_module
<ipython-input-1-40b054242c5d>(36): export_torchscript_models
<ipython-input-2-808c11c4d1cf>(3): <module>
/opt/conda/lib/python3.7/site-packages/IPython/core/interactiveshell.py(3418): run_code
/opt/conda/lib/python3.7/site-packages/IPython/core/interactiveshell.py(3338): run_ast_nodes
/opt/conda/lib/python3.7/site-packages/IPython/core/interactiveshell.py(3147): run_cell_async
/opt/conda/lib/python3.7/site-packages/IPython/core/async_helpers.py(68): _pseudo_sync_runner
/opt/conda/lib/python3.7/site-packages/IPython/core/interactiveshell.py(2923): _run_cell
/opt/conda/lib/python3.7/site-packages/IPython/core/interactiveshell.py(2878): run_cell
/opt/conda/lib/python3.7/site-packages/IPython/terminal/interactiveshell.py(555): interact
/opt/conda/lib/python3.7/site-packages/IPython/terminal/interactiveshell.py(564): mainloop
/opt/conda/lib/python3.7/site-packages/IPython/terminal/ipapp.py(356): start
/opt/conda/lib/python3.7/site-packages/traitlets/config/application.py(845): launch_instance
/opt/conda/lib/python3.7/site-packages/IPython/__init__.py(126): start_ipython
/opt/conda/bin/ipython(8): <module>
RuntimeError: cublas runtime error : unknown error at C:/cb/pytorch_1000000000000/work/aten/src/THC/THCBlas.cu:225
(Pdb) img_tensor.shape
torch.Size([1, 3, 1024, 1024])

Is the size of the input tensor wrong?
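
Side note on input size: CLIP's visual encoder takes 224x224 input, so a 1024x1024 generator output has to be downsampled before encode_image. A minimal sketch follows (this is not HairCLIP's actual code; the face_pool layer and encode_for_clip helper are hypothetical stand-ins, and img_tensor is the tensor inspected in Pdb above):

import torch

# CLIP ViT models expect 224x224 images; pool the 1024x1024 output down first.
face_pool = torch.nn.AdaptiveAvgPool2d((224, 224))

def encode_for_clip(clip_model, img_tensor):
    # img_tensor: (N, 3, 1024, 1024), already renormalized to CLIP statistics
    pooled = face_pool(img_tensor)          # -> (N, 3, 224, 224)
    return clip_model.encode_image(pooled)  # -> (N, 512) for ViT-B/32

That said, a pure shape mismatch would normally raise a size error rather than a cublas error, which points more toward the environment.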
hello-lx commented 2 years ago

It turned out to be an environment issue.
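
A generic sanity check for this kind of failure (not specific to this repo) is to run a tiny matmul on the GPU; if a cublas error shows up here as well, it confirms a broken CUDA/PyTorch installation rather than anything in HairCLIP:

import torch

# If this minimal matmul also raises a cublas error, the CUDA / PyTorch
# installation itself (driver or version mismatch) is at fault.
a = torch.randn(8, 8, device="cuda")
b = torch.randn(8, 8, device="cuda")
print(torch.mm(a, b).shape)
print(torch.__version__, torch.version.cuda)  # versions actually in use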

xuzhi0413 commented 1 year ago

Hi, could you tell me which directory the two test images should be placed in, and how the command should be modified to use them? Thanks!
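
For what it's worth, judging only from the command at the top of this issue (check the repository's README to confirm), reference images are read from the directory passed via --color_ref_img_test_path, so the two images can go in any folder that the flag points to. With placeholder paths:

python scripts/inference.py --exp_dir=/path/to/exp --checkpoint_path=/path/to/hairclip.pt --latents_test_path=/path/to/test_faces.pt --editing_type=color --input_type=image --hairstyle_description="hairstyle_list.txt" --color_ref_img_test_path=/path/to/folder_with_the_two_ref_images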