CarkusL / CenterPoint

Export CenterPoint PointPillars ONNX Model For TensorRT

Help with use of onnx model and TensorRT #1

Closed: xavidzo closed this issue 3 years ago

xavidzo commented 3 years ago

Hello @CarkusL, I haven't tested your code for exporting to onnx yet, but congratulations. I tried to implement the same export to onnx over the last few days, until I realized that exporting PointPillars as a whole model is difficult because of the PillarsScatter backbone.

Have you tried using your onnx model in TensorRT, or what is the purpose of converting the model to onnx in your case? In my attempts the "ScatterND" operation was not supported in TensorRT, which is why I gave up. Do you maybe have an idea how to do the same operation without ScatterND, or some other alternative?

I noticed that in order to get the final results for training or inference, the functions here https://github.com/CarkusL/CenterPoint/blob/4f2fa6d0159841a8a09c3731ce5eb849f2fe58b2/det3d/models/detectors/point_pillars.py#L56, self.bbox_head.loss and self.bbox_head.predict, have to be adapted to the onnx output, because in the original PyTorch code the output of the head is a list of dictionaries, whereas the onnx outputs are quite different... Are you also working on this adaptation of the bbox_head functions to the onnx output for further post-processing?
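For illustration, I imagine the adaptation would regroup the flat onnx outputs into the per-task dictionaries that self.bbox_head.predict expects, something like this rough sketch (the key names and output ordering here are my assumptions, not taken from the repo):

```python
import torch

# Assumed per-task head output names and their order in the onnx export.
HEAD_KEYS = ["reg", "height", "dim", "rot", "vel", "hm"]

def regroup_onnx_outputs(flat_outputs, num_tasks):
    """Turn the flat list of onnx output arrays back into the
    list-of-dicts structure that bbox_head.predict() expects."""
    preds = []
    for t in range(num_tasks):
        chunk = flat_outputs[t * len(HEAD_KEYS):(t + 1) * len(HEAD_KEYS)]
        preds.append({k: torch.from_numpy(v) for k, v in zip(HEAD_KEYS, chunk)})
    return preds
```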

CarkusL commented 3 years ago

Thank you for your question! Yes, TensorRT doesn't support ScatterND. I think you may need to implement ScatterND in TensorRT as a custom CUDA plugin, see https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#extending

For now, I haven't added the bbox_head to the onnx export because it contains other ops that are not supported in TensorRT. I will try to add a CUDA implementation of ScatterND in the next few weeks.
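For reference, the semantics the plugin has to reproduce are easy to state in Python; here is a sketch of the basic ONNX ScatterND case (not the plugin itself):

```python
import torch

def scatter_nd_reference(data, indices, updates):
    """Reference semantics of the basic ONNX ScatterND op: copy `data`
    and write each row of `updates` at the location given by the
    matching row of `indices` (an (N, k) tensor of integer coordinates)."""
    out = data.clone()
    for idx, upd in zip(indices.long(), updates):
        out[tuple(idx)] = upd  # a CUDA plugin performs these writes in parallel
    return out

# PillarsScatter uses this pattern to write each pillar's feature vector
# into the BEV canvas at its grid coordinate.
```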

xavidzo commented 3 years ago

Thanks for your answer. So I guess you have not yet used your onnx model for inference / prediction? What I meant before: I saw you exported the bbox_head to onnx, i.e. the forward() method of bbox_head, and it works in TensorRT. I tested it and it works fine, but the output of the bbox_head in onnx is not a list of dictionaries as in PyTorch. For the final prediction results, the function self.bbox_head.predict() must be adapted in the PyTorch code to accept the new inputs in onnx format... Have you tried this, or will you try it soon? I was thinking of trying onnxruntime-gpu (https://www.onnxruntime.ai/docs/get-started/install.html) to accelerate the model with cuda directly in onnx format instead of TensorRT, maybe you could also take a look at this option...
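As a rough sketch of what I mean (the model file name and input shape are just placeholders for this example):

```python
import numpy as np
import onnxruntime as ort

# "pfe.onnx" and the (1, 10, 30000, 20) shape are placeholders for this sketch.
sess = ort.InferenceSession(
    "pfe.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
voxels = np.zeros((1, 10, 30000, 20), dtype=np.float32)  # padded voxel features
outputs = sess.run(None, {sess.get_inputs()[0].name: voxels})
```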

xavidzo commented 3 years ago

Have you actually used your model with onnxruntime? Can you answer please, @CarkusL? I think the input dimensions for example["voxels"] are not always the same for every batch of data after voxelization, so having fixed dimensions for the onnx model input does not work with a variable input shape... am I wrong?

CarkusL commented 3 years ago

I uploaded the scatterNDPlugin code for TensorRT; you can try to run inference with the onnx model in TensorRT.
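Roughly, loading the plugin before parsing the onnx model with the TensorRT Python API looks like this (the library and model paths are placeholders; build the plugin from the scatterNDPlugin sources first):

```python
import ctypes
import tensorrt as trt

# Placeholder path: build libscatternd.so from the scatterNDPlugin sources.
ctypes.CDLL("./libscatternd.so", mode=ctypes.RTLD_GLOBAL)

logger = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(logger, "")  # registers the plugin creator

builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)
with open("pointpillars.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
```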

CarkusL commented 3 years ago

> Have you actually used your model with onnxruntime? Can you answer please, @CarkusL? I think the input dimensions for example["voxels"] are not always the same for every batch of data after voxelization, so having fixed dimensions for the onnx model input does not work with a variable input shape... am I wrong?

You can fix the input dimensions to [1, 10, 60000, 20] or [1, 10, 30000, 20] for example["voxels"], and pad example["voxels"] and the coordinates with 0 if you don't have 60000 or 30000 pillars.
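A minimal padding sketch (function and variable names are just for illustration):

```python
import numpy as np

MAX_PILLARS = 30000  # or 60000, matching the fixed onnx input shape

def pad_to_fixed(voxels, coords):
    """Zero-pad per-frame voxel features and coordinates up to
    MAX_PILLARS so they fit the static onnx input shape."""
    n = voxels.shape[0]
    padded_voxels = np.zeros((MAX_PILLARS,) + voxels.shape[1:], dtype=voxels.dtype)
    padded_voxels[:n] = voxels
    padded_coords = np.zeros((MAX_PILLARS, coords.shape[1]), dtype=coords.dtype)
    padded_coords[:n] = coords
    return padded_voxels, padded_coords
```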