xinge008 / Cylinder3D

Rank 1st in the leaderboard of SemanticKITTI semantic segmentation (both single-scan and multi-scan) (Nov. 2020) (CVPR2021 Oral)
Apache License 2.0

Model inference time? #3

Closed zeng-hello-world closed 4 years ago

zeng-hello-world commented 4 years ago

Hi @xinge008

Thanks for your brilliant work. One thing I'm curious about is the memory occupancy and inference time of this Cylinder3D model with ~100k (10w) points as input.

Best Regards!

xinge008 commented 4 years ago

Hi, zeyu,

Thanks for your attention.

For memory occupancy, it costs about 6 GB of GPU memory for a point cloud in SemanticKITTI. For inference time, we have not measured it yet. We will provide a detailed comparison in the future.

Best, Xinge

zeng-hello-world commented 4 years ago

I just tested the torch model and got this: average torch model time: 166.30 ms.
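For anyone reproducing a number like this, a minimal timing harness along these lines can be used (a sketch, not the repository's code; `run_model`, the warm-up count, and the iteration count are placeholders — for a CUDA model, `torch.cuda.synchronize()` must be called before each timestamp so the measurement includes the actual GPU work):

```python
import time

def time_inference(run_model, n_warmup=10, n_iters=100):
    """Return average wall-clock time per call, in milliseconds.

    run_model: zero-argument callable standing in for one forward pass.
    For a CUDA model, have run_model call torch.cuda.synchronize()
    after the forward pass, so the GPU has finished before the clock
    is read (CUDA kernel launches are asynchronous).
    """
    # Warm-up iterations: cuDNN autotuning, memory allocator, caches.
    for _ in range(n_warmup):
        run_model()
    start = time.perf_counter()
    for _ in range(n_iters):
        run_model()
    elapsed = time.perf_counter() - start
    return elapsed * 1000.0 / n_iters

# Usage with a placeholder CPU workload standing in for inference:
avg_ms = time_inference(lambda: sum(range(10000)), n_warmup=2, n_iters=20)
print(f"average model time: {avg_ms:.2f} ms")
```

Without the synchronization step, the loop only times kernel launches and the reported latency can be far lower than the true per-scan inference time.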

xinge008 commented 4 years ago

Thanks for trying it out. We will provide a comparison in the future.