Closed: abhigoku10 closed this issue 3 years ago.
@abhigoku10 thanks for your interest in our work.
The inference time of the 3-layer model on a GTX 1070 is 643 ms per frame. For faster inference, you can modify the config file and replace the content of "runtime_graph_gen_kwargs" with the same content as "graph_gen_kwargs". This boosts the speed significantly and greatly reduces the memory requirement, with little accuracy loss.
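The config edit described above can also be scripted. A minimal sketch, assuming only the two key names mentioned in the comment ("graph_gen_kwargs" and "runtime_graph_gen_kwargs"); the helper name is made up for illustration:

```python
def use_offline_graph_settings(config):
    """Return a copy of a Point-GNN config dict in which the
    inference-time graph settings ("runtime_graph_gen_kwargs") are
    replaced by the offline ones ("graph_gen_kwargs"), as the
    comment above suggests for faster inference.
    """
    faster = dict(config)  # shallow copy; the original dict stays untouched
    faster["runtime_graph_gen_kwargs"] = faster["graph_gen_kwargs"]
    return faster
```

Loading the JSON config file with `json.load`, passing the resulting dict through this helper, and writing it back with `json.dump` has the same effect as editing the file by hand.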
No, I did not re-train the model. It is the same model, tested with different numbers of scanning lines.
The method does well in point classification, and other segmentation works using GNN/GCN also show promising results. However, the computational cost is a challenging problem. In the current implementation, we have to downsample the point cloud before constructing the graph; otherwise, the graph is too large to fit in GPU memory and the running time is too slow. To get a classification for each input point, a "pyramid" structure may be necessary to reduce the computational cost.
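To illustrate the downsampling step mentioned above, here is a minimal NumPy sketch of voxel-grid downsampling (one centroid per occupied voxel). This is only an illustration of the idea; the repository's actual downsampling code may differ:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Reduce an (N, 3) point cloud to one centroid per occupied voxel.

    Fewer points means far fewer graph vertices and edges, which is
    what keeps the graph small enough for GPU memory.
    """
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Map each point to its voxel; inv[i] is the voxel index of point i.
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.reshape(-1)
    n_voxels = int(inv.max()) + 1
    sums = np.zeros((n_voxels, points.shape[1]))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inv, points)   # accumulate coordinates per voxel
    np.add.at(counts, inv, 1)      # count points per voxel
    return sums / counts[:, None]  # centroid of each occupied voxel
```

With a typical LiDAR scan of ~100k points, even a modest voxel size cuts the vertex count by an order of magnitude, and the number of radius-graph edges shrinks roughly quadratically with local point density.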
Thanks,
@WeijingShi thanks for the response
Q1. Thanks for the pointers, will check this out. Any idea how much the time will be reduced?
Q2. Will try to re-train on the reduced scanning lines and check its accuracy on the increased scanning lines.
Q3. Yup, absolutely right. So any thoughts on how to make Point-GNN run with real-time inference?
@abhigoku10 Thanks.
May I know whether this 643 ms is on 64 scanning lines (without downsampling) or not? Can it run at best performance within 700 ms? Thanks.
Hi @EnzeChen1996, yes, the 643 ms is on 64-scanning-line data using the complete T3 model.
@WeijingShi thanks for open-sourcing the code base. I have a few queries:
Thanks in advance for your response.