Open littlerookie opened 4 months ago
When I use one image, it works fine. But with two images, it raises an error:
```
File "/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py", line 2515, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Given normalized_shape=[12288], expected input with shape [*, 12288], but got input of size [1, 256, 24576]
```

(The traceback's variable dump shows `normalized_shape` is `(12288,)`, with `weight` and `bias` Parameters of matching size on `cuda:0` and `eps=1e-05`.)
How to fix it? Thanks.
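The error message suggests a likely cause: each image appears to be encoded into features with a last dimension of 12288 (which `LayerNorm(12288)` expects), but with two images the last dimension becomes 24576, i.e. exactly doubled. That pattern is consistent with the two images' features being concatenated along the feature axis instead of the batch axis. A minimal shape sketch (hypothetical shapes inferred from the error, not the repo's actual code):

```python
import numpy as np

# Assumed: each image encodes to a [1, 256, 12288] feature tensor,
# and the model's LayerNorm has normalized_shape=(12288,).
feat_a = np.zeros((1, 256, 12288))
feat_b = np.zeros((1, 256, 12288))

# Joining along the last (feature) axis doubles it to 24576,
# reproducing the reported size [1, 256, 24576] that LayerNorm rejects.
wrong = np.concatenate([feat_a, feat_b], axis=-1)
print(wrong.shape)  # (1, 256, 24576)

# Joining along the batch axis keeps the feature size at 12288,
# so a LayerNorm over the last dimension accepts it.
right = np.concatenate([feat_a, feat_b], axis=0)
print(right.shape)  # (2, 256, 12288)
```

If this is the cause, check where the multi-image features are combined before the LayerNorm call and batch them along dim 0 (or whatever sequence dimension the model expects) rather than concatenating along the hidden dimension.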