Open · Zakieh opened this issue 6 years ago
Thank you for providing your great work for everyone. I have a question: I trained the semantic segmentation network on the Stanford dataset without using color data, then tried it on a VLP16 point cloud, which is sparser than RGB-D point clouds, but the results didn't turn out well. Do you have any experience running your code on a sparse point cloud? Has anyone tried this?
Hi Zakieh!
Were you able to get reasonable results from VLP16 data? I also want to work on VLP16 point clouds. The first step I have a question about is how to convert the las files to h5 files so they are suitable for PointNet.
Hi Meghdad, actually no, I ended up using point clouds from a Kinect. They provide code for converting point cloud txt or ply files to h5, but not for las, I assume. I haven't worked with las myself either, sorry about that.
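For what it's worth, a rough sketch of a las-to-h5 conversion could look like the following. This is not code from this repo; it assumes laspy 2.x and h5py, and the block splitting and channel layout are simplified guesses, so check them against the repo's h5 generation script for the sem_seg data before relying on it:

```python
# Rough sketch (not from this repo): read a .las scan with laspy and write an
# h5 file with 'data'/'label' datasets similar to what the sem_seg code reads.
# Assumptions: laspy 2.x, h5py, xyz-only channels, naive block splitting.
import laspy
import h5py
import numpy as np

NUM_POINTS = 4096  # points per block; keep it the same for training and testing

las = laspy.read("scan.las")  # hypothetical input file
xyz = np.vstack([las.x, las.y, las.z]).T.astype(np.float32)

# Shift to the origin so coordinates stay small.
xyz -= xyz.min(axis=0)

# Naive resampling: shuffle and cut into fixed-size blocks
# (the real S3DIS preprocessing splits rooms into 1m x 1m blocks instead).
np.random.shuffle(xyz)
n_blocks = len(xyz) // NUM_POINTS
data = xyz[: n_blocks * NUM_POINTS].reshape(n_blocks, NUM_POINTS, 3)
label = np.zeros((n_blocks, NUM_POINTS), dtype=np.uint8)  # dummy labels for inference

with h5py.File("scan.h5", "w") as f:
    f.create_dataset("data", data=data)
    f.create_dataset("label", data=label)
```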
Zakieh,
Thanks for the prompt response. Just one more question: using your own Kinect data, did you get good results? I just want to make sure this algorithm can handle different datasets. Thank you!
Yes, I got reasonably good results, at least for walls and floors, and to some extent tables. Just pay attention to the orientation of your point cloud, otherwise the floor will wrongly be segmented as a wall and vice versa. Hope this helps.
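If it helps, here is a tiny example of the kind of orientation fix meant here (my own sketch; the y-up assumption is just an example, check your own sensor's convention):

```python
# Hedged example: if your sensor gives a y-up cloud but the training data
# (e.g. S3DIS) is z-up, a simple rotation before inference avoids the
# floor/wall mix-up described above. The axis convention is an assumption.
import numpy as np

def y_up_to_z_up(xyz: np.ndarray) -> np.ndarray:
    """Rotate points 90 degrees about the x-axis so the 'up' direction becomes +z."""
    rot = np.array([[1.0, 0.0, 0.0],
                    [0.0, 0.0, -1.0],
                    [0.0, 1.0, 0.0]])
    return xyz @ rot.T

# xyz = y_up_to_z_up(xyz)  # apply before writing the h5 blocks
```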
There is another library, pointwise; it also worked reasonably well on my dataset.
Zakieh,
Cool, thanks a lot. How do you resample your dataset? I have huge point clouds. For the training data I can reduce the number of points, no problem, but I don't want to change the test data. I'm not sure if I have to?
Sorry for so many questions.
Hi Meghdad,
I don't know how I missed this, very sorry about that. As far as I remember, the sampling is done voxel-wise: in each voxel only a certain number of points is used (around 1000 or so). I think you can change that number, but whatever number you use for training should also be used for testing, I assume. Hope this helps.
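To make that concrete, here is a minimal re-implementation of that kind of voxel-wise subsampling (my own sketch, not the repo's code; the voxel size and point count are just placeholders):

```python
# Minimal sketch of voxel-wise resampling: group points into voxels of a given
# size and keep at most `max_points` per voxel. Use the same `max_points`
# for both training and test data, as discussed above.
import numpy as np

def voxel_subsample(xyz: np.ndarray, voxel_size: float = 1.0, max_points: int = 1024) -> np.ndarray:
    """Keep at most `max_points` points in each voxel of side `voxel_size`."""
    voxel_idx = np.floor(xyz / voxel_size).astype(np.int64)
    # Sort point indices so points in the same voxel are contiguous.
    order = np.lexsort((voxel_idx[:, 2], voxel_idx[:, 1], voxel_idx[:, 0]))
    sorted_vox = voxel_idx[order]
    # Find where a new voxel starts and split the sorted indices into groups.
    boundaries = np.any(np.diff(sorted_vox, axis=0) != 0, axis=1).nonzero()[0] + 1
    kept = []
    for group in np.split(order, boundaries):
        if len(group) > max_points:
            group = np.random.choice(group, max_points, replace=False)
        kept.append(group)
    return xyz[np.concatenate(kept)]
```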