hailanyi / TED

Transformation-Equivariant 3D Object Detection for Autonomous Driving
https://arxiv.org/abs/2211.11962
Apache License 2.0

Why should the training and testing logic in BackBone3D be different? #23

Closed: minju-Kang closed this issue 1 year ago

minju-Kang commented 1 year ago

Thank you for sharing your work!

In class TeVoxelBackBone8x, you use different functions for training and testing:
https://github.com/hailanyi/TED/blob/8c455aca982149f5630f48d2bc6db21b26e63dfe/pcdet/models/backbones_3d/spconv_backbone.py#L736
https://github.com/hailanyi/TED/blob/8c455aca982149f5630f48d2bc6db21b26e63dfe/pcdet/models/backbones_3d/spconv_backbone.py#L593
I wonder why the training and testing logic have to be different.

hailanyi commented 1 year ago

Sequentially processing the multiple transformed point clouds is time-consuming. So at test time I combine the tensors into a single tensor for parallel processing, which slightly speeds up inference.
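
To illustrate the idea, here is a minimal sketch (not the repository's actual code; `backbone`, `forward_train`, and `forward_test` are hypothetical stand-ins for the real spconv modules). The training path loops over each transformed copy of the scene, while the test path concatenates the copies into one batch so the backbone runs a single forward pass.

```python
import torch
import torch.nn as nn
from typing import List


def forward_train(backbone: nn.Module, transformed_views: List[torch.Tensor]) -> List[torch.Tensor]:
    """Sequential path: one backbone call per transformed point-cloud view."""
    return [backbone(view) for view in transformed_views]


def forward_test(backbone: nn.Module, transformed_views: List[torch.Tensor]) -> List[torch.Tensor]:
    """Parallel path: merge the views into a single tensor and call the backbone once."""
    batch = torch.cat(transformed_views, dim=0)            # (T * N, C)
    features = backbone(batch)                             # single forward pass
    # Split the output back into per-view chunks of the original sizes.
    return list(features.split([v.shape[0] for v in transformed_views], dim=0))


if __name__ == "__main__":
    # Toy per-point MLP standing in for the sparse 3D backbone.
    backbone = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 16))
    views = [torch.randn(100, 4) for _ in range(3)]        # 3 transformed copies
    out_train = forward_train(backbone, views)
    out_test = forward_test(backbone, views)
    assert all(torch.allclose(a, b, atol=1e-6) for a, b in zip(out_train, out_test))
```

The two paths produce the same per-view features; the combined version just avoids the Python-level loop, which is where the small inference speedup comes from.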