SmallMunich / nutonomy_pointpillars

Convert pointpillars Pytorch Model To ONNX for TensorRT Inference
MIT License

PointPillars NuScenes Model Conversion #6

Closed hz3014 closed 4 years ago

hz3014 commented 4 years ago

Thank you for your code!

I am currently working on converting a trained PointPillars NuScenes model (trained with the latest SECOND code) to ONNX. However, I get an error when running onnx_model_generate:

size mismatch for rpn.conv_cls.weight: copying a param with shape torch.Size([200, 384, 1, 1]) from checkpoint, the shape in current model is torch.Size([2, 384, 1, 1]).
size mismatch for rpn.conv_cls.bias: copying a param with shape torch.Size([200]) from checkpoint, the shape in current model is torch.Size([2]).
size mismatch for rpn.conv_box.weight: copying a param with shape torch.Size([140, 384, 1, 1]) from checkpoint, the shape in current model is torch.Size([14, 384, 1, 1]).
size mismatch for rpn.conv_box.bias: copying a param with shape torch.Size([140]) from checkpoint, the shape in current model is torch.Size([14]).
size mismatch for rpn.conv_dir_cls.weight: copying a param with shape torch.Size([40, 384, 1, 1]) from checkpoint, the shape in current model is torch.Size([4, 384, 1, 1]).
size mismatch for rpn.conv_dir_cls.bias: copying a param with shape torch.Size([40]) from checkpoint, the shape in current model is torch.Size([4]).

Any suggestions or an overall guideline would be appreciated!
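For context, the mismatched numbers in the error are consistent with the usual SECOND/PointPillars RPN head convention, where each 1x1 conv head's output channels are a product of anchors per location, classes, and box code size. The sketch below is illustrative only (the parameter names are my assumptions, not code from this repo), but it reproduces both sets of shapes from the error: the checkpoint looks like a multi-class NuScenes head, while the model being built is the KITTI car-only head.

```python
# Illustrative sketch (not code from this repo): how SECOND/PointPillars-style
# RPN head channel counts are typically derived. Parameter names are assumptions.
def rpn_head_channels(num_anchors_per_location, num_classes, box_code_size=7):
    """Return (cls, box, dir) output-channel counts for the RPN 1x1 conv heads."""
    conv_cls = num_anchors_per_location * num_classes      # class scores
    conv_box = num_anchors_per_location * box_code_size    # box regression
    conv_dir = num_anchors_per_location * 2                # direction bins
    return conv_cls, conv_box, conv_dir

# Checkpoint in the error: 200 / 140 / 40 -> 20 anchors x 10 classes, 7 box params
print(rpn_head_channels(20, 10))   # (200, 140, 40)
# Current model in the error: 2 / 14 / 4 -> 2 anchors x 1 class (KITTI car-only)
print(rpn_head_channels(2, 1))     # (2, 14, 4)
```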

SmallMunich commented 4 years ago

This error means the RPN tensor shapes do not match your trained model. If you change the voxel or other parameters, you need to check what shape the rpn_input tensor has in the network flow, and modify the rpn_input accordingly inside the onnx_model_generate() function.
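To make that advice concrete: the network used for export must be built from the same config that produced the checkpoint (so the head shapes match), and the dummy rpn_input passed to the export call must match the pseudo-image your voxel settings produce. The sketch below is a minimal, hedged illustration; the builder call, config path, attribute name `net.rpn`, and the example input shape are assumptions you must adapt to your own setup, not verbatim code from this repo.

```python
# Hedged sketch: build the network from the *NuScenes* config that produced the
# checkpoint, then export the RPN with a dummy input of the matching shape.
import torch

def export_rpn(net, rpn_input_shape, onnx_path="rpn.onnx"):
    """Export only the RPN sub-network with a dummy input of the right shape.

    rpn_input_shape must match the pseudo-image the middle encoder produces
    for your voxel/grid settings, e.g. (1, 64, H, W) where H and W depend on
    point_cloud_range / voxel_size in the config.
    """
    dummy_rpn_input = torch.randn(rpn_input_shape)
    torch.onnx.export(net.rpn, dummy_rpn_input, onnx_path, verbose=True)

# Usage outline (placeholders, adapt to your config and checkpoint):
# net = build_network_from_config("configs/nuscenes/pointpillars.config")  # hypothetical builder
# net.load_state_dict(torch.load("checkpoint.tckpt"))
# export_rpn(net, rpn_input_shape=(1, 64, 400, 400))  # shape is a placeholder
```

The key point is that both sides have to agree: the config fixes the head channel counts that must match the checkpoint, and the voxel/grid settings fix the spatial size of rpn_input used during export.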