❓ Question

I was preparing to export a TRT model for Mask2Former using the command `optimized_model = torch_tensorrt.compile(model, inputs=imgs, enabled_precisions={torch.half})`, where `model` is a Mask2Former loaded through mmseg. However, I encountered an error at the line `value_l_ = value_list[0].flatten(2).transpose(1, 2).reshape(4 * 8, 32, 16, 16)`.

The error message was:

```
Failed running call_method reshape((FakeTensor(..., device='cuda:0', size=(1, 256, 256),
grad_fn=), 32, 32, 16, 16), {}):
shape '[32, 32, 16, 16]' is invalid for input of size 65536
```

The original code was `value_l_ = value_list[level].flatten(2).transpose(1, 2).reshape(bs * num_heads, embed_dims, H_, W_)`. Even after replacing all of the variables with constants, the error persists. During training this line reshapes normally, but the above error occurs when using `torch_tensorrt.compile`.
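For reference, here is a minimal sketch of the shape arithmetic involved, assuming the mmcv-style layout `(bs, H_*W_, num_heads, embed_dims)` for `value_list[level]` (the concrete sizes are illustrative, matching the hard-coded constants above). It also reproduces the element-count mismatch that the FakeTensor error reports: the traced input has batch size 1, so `1 * 256 * 256 = 65536` elements cannot fill the hard-coded `[32, 32, 16, 16]` shape, which needs `262144`:

```python
import torch

bs, num_heads, embed_dims, H_, W_ = 4, 8, 32, 16, 16

# value_list[level] in multi-scale deformable attention:
# shape (bs, H_*W_, num_heads, embed_dims)
value = torch.randn(bs, H_ * W_, num_heads, embed_dims)

# flatten(2):      (bs, H_*W_, num_heads*embed_dims)
# transpose(1, 2): (bs, num_heads*embed_dims, H_*W_)
# reshape:         (bs*num_heads, embed_dims, H_, W_)
value_l_ = value.flatten(2).transpose(1, 2).reshape(bs * num_heads, embed_dims, H_, W_)
print(value_l_.shape)  # torch.Size([32, 32, 16, 16])

# The failing case: a tensor of size (1, 256, 256), i.e. bs == 1, holds only
# 65536 elements, so reshaping it to [32, 32, 16, 16] (262144 elements) fails.
small = torch.randn(1, 256, 256)
try:
    small.reshape(32, 32, 16, 16)
except RuntimeError as e:
    print(e)  # shape '[32, 32, 16, 16]' is invalid for input of size 65536
```

This suggests the hard-coded constants assume a batch size of 4, while compilation traces the model with a batch size of 1.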
Environment
Build information about Torch-TensorRT can be found by turning on debug messages
@edition3234 what value are you passing? The error seems to be a mismatch between the input dimensions and the reshape dimensions you want. Could you please provide a simple repro example?
Additional context
The complete code is as follows: