abhigoku10 opened 3 years ago
Hi @abhigoku10, Good questions.
You can set `label_method` in the config files to "yaw", which should do the trick. Note that you need to choose a downsample density and graph radius that work for all classes. As for bus, truck and the other classes, KITTI does not contain enough samples; nuScenes and Waymo Open Dataset are more suitable.
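For what it's worth, here is a minimal sketch of that config change, assuming the training config is a JSON file like the ones under configs/ in the repo; the path below is a placeholder, and the keys that control downsample density and graph radius should be looked up in your own config rather than taken from this snippet:

```python
import json

# Placeholder path; point this at the train config you are actually using.
config_path = 'configs/custom_multiclass_train_config'

with open(config_path, 'r') as f:
    config = json.load(f)

# "label_method" is the key named in this thread; "yaw" is the value suggested above.
config['label_method'] = 'yaw'

# Inspect the remaining keys to find the ones that control downsample density and
# graph radius for your classes (key names vary, so check your own file).
print(sorted(config.keys()))

with open(config_path, 'w') as f:
    json.dump(config, f, indent=2)
```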
The code does not put limits on the range, but this algorithm is based on LiDAR only. So if it cannot get any object geometry due to distance/sparsity, I don't think it can recognize the object well. Temporal aggregation or fusion can be good options.
People have had some success on custom data: https://github.com/WeijingShi/Point-GNN/issues/64#issuecomment-762714959
If you cannot convert your format to KITTI, you need to create your own custom_dataset.py. As long as it has `get_points` and `get_label` methods, the rest of the pipeline should be the same; see the sketch below.
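A rough sketch of what such a wrapper could look like (this is not code from the repo; the constructor arguments, file layout, and label fields are all assumptions to adapt to your own data):

```python
import numpy as np

class CustomDataset:
    """Hypothetical stand-in for custom_dataset.py; adapt paths/formats to your data."""

    def __init__(self, point_dir, label_dir, frame_ids):
        self._point_dir = point_dir
        self._label_dir = label_dir
        self._frame_ids = frame_ids

    def get_points(self, frame_idx):
        # Load an (N, 3) xyz array (add reflectance/RGB columns if your sensor has them).
        path = f"{self._point_dir}/{self._frame_ids[frame_idx]}.npy"
        return np.load(path).astype(np.float32)

    def get_label(self, frame_idx):
        # Return one dict per object; the field names here mirror KITTI-style 3D labels
        # (class name, box center, size, yaw) but are only an assumption for this sketch.
        labels = []
        label_path = f"{self._label_dir}/{self._frame_ids[frame_idx]}.txt"
        with open(label_path) as f:
            for line in f:
                name, x, y, z, l, h, w, yaw = line.split()
                labels.append({
                    'name': name,
                    'xyz': np.array([x, y, z], dtype=np.float32),
                    'lhw': np.array([l, h, w], dtype=np.float32),
                    'yaw': float(yaw),
                })
        return labels
```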
Sure, it would be very interesting to see the results.
The graph creation and NMS are both on CPU in Python. Moving them to the GPU and C++ should help. For the GNN inference part, we need to find a way to prune edges and vertices without hurting the accuracy.
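To make the CPU-side cost concrete, graph creation is essentially a fixed-radius neighbor search over the downsampled points. A sketch of that step using SciPy's k-d tree (illustrative only, not necessarily the routine this repo uses; the point count and radius below are placeholders):

```python
import numpy as np
from scipy.spatial import cKDTree

def build_radius_graph(points, radius):
    """Connect every pair of points within `radius` of each other (the CPU-bound step)."""
    tree = cKDTree(points[:, :3])
    pairs = np.array(list(tree.query_pairs(r=radius)), dtype=np.int64)  # (E, 2), i < j
    # Duplicate each edge in both directions so every vertex sees all of its neighbors.
    return np.concatenate([pairs, pairs[:, ::-1]], axis=0)

# Example with made-up numbers: ~20k downsampled points, 4 m connection radius.
pts = np.random.uniform(0, 70, size=(20000, 3)).astype(np.float32)
edges = build_radius_graph(pts, radius=4.0)
```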
We had this discussion before: https://github.com/WeijingShi/Point-GNN/issues/24#issuecomment-650635419. To run segmentation, we need to remove the localization heads and only use the classification head. To keep the computational cost low, we need an FPN/U-Net structure.
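A rough sketch of that idea (not code from the repo), assuming the GNN layers already produce a per-vertex feature tensor; this uses the TF 1.x `tf.layers` API:

```python
import tensorflow as tf  # TF 1.x-style API for this sketch

def per_vertex_segmentation_head(vertex_features, num_classes):
    """vertex_features: [num_vertices, feature_dim] output of the final GNN iteration."""
    # Keep only a classification branch: one class logit vector per vertex.
    logits = tf.layers.dense(vertex_features, num_classes, name='seg_cls')
    return logits  # the box-regression (localization) branch is simply dropped
```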
Unfortunately, I don't have plans to work on this in the short term. I am happy to support you if you want to work on the tests. I think the challenge is the computational cost of their dense point clouds.
Thanks,
- "Note that you need to choose a downsample density and graph radius that work for all classes." Could you please explain how to choose these parameters for the KITTI dataset, and also for other custom datasets, in the multiclass setting? Thanks
@WeijingShi Thanks for sharing the code base; it's really nice work, and we can use it as a baseline. I have a few queries:
Training
As mentioned in the paper, the inference time for a point cloud breaks down as follows: reading the dataset and running the calibration takes 11.0% of the time (70 ms), creating the graph representation consumes 18.9% (121 ms), the GNN inference takes 56.4% (363 ms), and box merging and scoring take 13.1% (84 ms). What steps have to be taken to optimize the inference for real-time latency?
Future scope: 1. Can we convert this Point-GNN to perform semantic segmentation on point cloud data?
Thanks in advance