Raven19888 opened this issue 3 months ago:
Hi. I'm running into the same problem when converting resnet18_baseline_att_224x224_A_epoch_249.pth to a TRT model. The error is "[TRT] [E] Error Code: 3: 1.cmap_up.0:0:DECONVOLUTION:GPU: kernel weights has count 2097152 but 4194304 was expected". Have you fixed it?
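For reference, the numbers in the message line up with a channel-width mismatch: TensorRT expects in_channels × kH × kW × out_channels / groups = 512 × 4 × 4 × 512 = 4,194,304 weights for that deconvolution, while the checkpoint supplies 2,097,152 = 512 × 4 × 4 × 256, which would correspond to a 256-output-channel layer. A quick way to see what the checkpoint actually contains is to dump the upsample weights. A minimal sketch, assuming the .pth file is a plain state_dict as saved by trt_pose (the filename is the one from this thread):

```python
import torch

# Minimal sketch: load the checkpoint on CPU and print the shape of every
# deconvolution weight in the cmap/paf upsample heads, so it can be compared
# against the model definition that torch2trt is tracing.
state_dict = torch.load('resnet18_baseline_att_224x224_A_epoch_249.pth', map_location='cpu')

for name, tensor in state_dict.items():
    if 'cmap_up' in name or 'paf_up' in name:
        # ConvTranspose2d stores weights as (in_channels, out_channels // groups, kH, kW)
        print(name, tuple(tensor.shape), tensor.numel())
```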
I ran into the following problem when converting densenet121_baseline_att_256x256_B_epoch_160.pth and also resnet18_baseline_att_224x224_A_epoch_249.pth into a TRT model. It appears to be caused by a mismatch between the model weights and the model definition. Does anyone know how to fix it?

```
>>> model_trt = torch2trt.torch2trt(model, [data], fp16_mode=True, max_workspace_size=1<<25)
[08/01/2024-17:37:36] [TRT] [E] Error Code: 3: 1.cmap_up.0:0:DECONVOLUTION:GPU:kernel weights has count 2097152 but 4194304 was expected
[08/01/2024-17:37:36] [TRT] [E] ITensor::getDimensions: Error Code 4: API Usage Error (1.cmap_up.0:0:DECONVOLUTION:GPU: count of 2097152 weights in kernel, but kernel dimensions (4,4) with 512 input channels, 512 output channels and 1 groups were specified. Expected Weights count is 512 * 4*4 * 512 / 1 = 4194304)
[08/01/2024-17:37:36] [TRT] [E] Error Code: 3: 1.paf_up.0:0:DECONVOLUTION:GPU:kernel weights has count 2097152 but 4194304 was expected
[08/01/2024-17:37:36] [TRT] [E] ITensor::getDimensions: Error Code 4: API Usage Error (1.paf_up.0:0:DECONVOLUTION:GPU: count of 2097152 weights in kernel, but kernel dimensions (4,4) with 512 input channels, 512 output channels and 1 groups were specified. Expected Weights count is 512 * 4*4 * 512 / 1 = 4194304)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/root/miniconda3/envs/trtpose/lib/python3.8/site-packages/torch2trt-0.5.0-py3.8-linux-x86_64.egg/torch2trt/torch2trt.py", line 643, in torch2trt
    outputs = module(*inputs)
  File "/root/miniconda3/envs/trtpose/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/trtpose/lib/python3.8/site-packages/torch/nn/modules/container.py", line 217, in forward
    input = module(input)
  File "/root/miniconda3/envs/trtpose/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/trtpose/lib/python3.8/site-packages/trt_pose-0.0.1-py3.8-linux-x86_64.egg/trt_pose/models/common.py", line 76, in forward
    return self.cmap_conv(xc * ac), self.paf_conv(xp * ap)
  File "/root/miniconda3/envs/trtpose/lib/python3.8/site-packages/torch2trt-0.5.0-py3.8-linux-x86_64.egg/torch2trt/torch2trt.py", line 262, in wrapper
    converter["converter"](ctx)
  File "/root/miniconda3/envs/trtpose/lib/python3.8/site-packages/torch2trt-0.5.0-py3.8-linux-x86_64.egg/torch2trt/converters/native_converters.py", line 1496, in convert_mul
    input_a_trt, input_b_trt = broadcast_trt_tensors(ctx.network, [input_a_trt, input_b_trt], len(output.shape))
  File "/root/miniconda3/envs/trtpose/lib/python3.8/site-packages/torch2trt-0.5.0-py3.8-linux-x86_64.egg/torch2trt/torch2trt.py", line 146, in broadcast_trt_tensors
    if len(t.shape) < broadcast_ndim:
ValueError: __len__() should return >= 0
```