Closed zeng-hello-world closed 4 years ago
Hi, zeyu,
Thanks for your attention.
For memory occupancy, it takes about 6 GB of memory per point cloud in SemanticKITTI. For inference time, we have not collected statistics yet; we will provide a detailed comparison in the future.
Best, Xinge
I just tested the torch model and got this: average torch model time: 166.30 ms
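For anyone reproducing a number like this: latency is usually averaged over many runs after a few warm-up iterations, so one-time costs (allocation, kernel autotuning) don't skew the result. A minimal sketch in plain Python, where `model_fn` stands in for any callable; the function name and parameters are illustrative, and for a CUDA model you would additionally call `torch.cuda.synchronize()` before each clock read:

```python
import time

def average_inference_ms(model_fn, inputs, warmup=10, iters=100):
    """Average wall-clock latency of model_fn(inputs) in milliseconds.

    Warm-up runs are excluded from the average. For GPU models,
    insert torch.cuda.synchronize() before each perf_counter()
    call so pending kernels are counted (assumption: not shown here).
    """
    for _ in range(warmup):
        model_fn(inputs)
    start = time.perf_counter()
    for _ in range(iters):
        model_fn(inputs)
    return (time.perf_counter() - start) / iters * 1000.0

# Usage with a stand-in "model" (an arbitrary callable):
ms = average_inference_ms(lambda x: sum(v * v for v in x), list(range(1000)))
print(f"average model time: {ms:.2f} ms")
```

Note the average hides variance; reporting a median or percentiles as well gives a fuller picture.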
Thanks for your try. We will give a comparison in the future.
Hi @xinge008
Thanks for your brilliant work. One thing I'm curious about: what are the memory occupancy and inference time of this Cylinder3D model with a 100k-point input?
Best Regards!