charlesq34 / pointnet

PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation

Downsampling the object point cloud uniformly. Code not found + Introducing new classes. #149

Open kavnat opened 6 years ago

kavnat commented 6 years ago

Thanks for the work on this, authors! I have been trying to create my own dataset using point clouds to train and evaluate its performance.

  1. I want to downsample object point clouds down to 256 points. However, I could not find any code in the repo that does that. The authors mentioned sampling like this:

"We uniformly sample 1024 points on mesh faces according to face area and normalize them into a unit sphere. During training we augment the point cloud on-the-fly by randomly rotating the object along the up-axis and jitter the position of each points by a Gaussian noise with zero mean and 0.02 standard deviation."

Can anyone please help me understand what exactly this means, and whether there is any code available that can sample point clouds like this? Currently I am using numpy for downsampling and the results are not good at all.

  2. I want to introduce a "none of the above" class to the network. If a point cloud does not belong to any of the known classes, the network should automatically classify it as "none of the above". How should I go about this? Should I train the network on random point clouds for this class?
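Regarding the quoted passage, the on-the-fly augmentation (random rotation about the up axis plus Gaussian jitter with 0.02 std) can be sketched in plain numpy as below. This is my own sketch, not the repo's code; if I recall correctly, the repo's `provider.py` has similar helpers (`rotate_point_cloud`, `jitter_point_cloud`), so check there first. The clipping value of 0.05 is a common convention, not stated in the quoted text:

```python
import numpy as np

def rotate_about_up_axis(points, rng=None):
    """Rotate an (N, 3) cloud by a random angle around the Y (up) axis."""
    rng = rng or np.random.default_rng()
    angle = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(angle), np.sin(angle)
    # Rotation matrix about Y: X and Z mix, Y is unchanged.
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    return points @ R.T

def jitter(points, sigma=0.02, clip=0.05, rng=None):
    """Add zero-mean Gaussian noise (std 0.02, per the paper), clipped."""
    rng = rng or np.random.default_rng()
    noise = np.clip(sigma * rng.standard_normal(points.shape), -clip, clip)
    return points + noise
```

Applied fresh on every training batch, this matches the "augment the point cloud on-the-fly" step described in the paper.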
seanhuang5104 commented 5 years ago

  1. https://github.com/seanhuang5104/OFFtoH5.git — here is how I do it, for your reference. What you need is PCL; it provides a tool called "pcl_mesh_sampling" that can create random points on a model's surface from .ply files. The tool has a parameter to set how many points you get. If you are using the ModelNet10 dataset, I hope the link helps.
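If installing PCL is not an option, the same idea, uniform sampling on mesh faces weighted by face area (as in the quoted paper passage), can be sketched in plain numpy. The function name and array layout below are my own, not from the repo or PCL:

```python
import numpy as np

def sample_points_on_mesh(vertices, faces, n_points, rng=None):
    """Uniformly sample n_points on a triangle mesh, weighted by face area.

    vertices: (V, 3) float array of vertex positions.
    faces:    (F, 3) int array of vertex indices per triangle.
    """
    rng = rng or np.random.default_rng()
    tris = vertices[faces]                                   # (F, 3, 3)
    # Face areas from the cross product of two edge vectors.
    cross = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    # Pick triangles with probability proportional to their area.
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates inside each chosen triangle;
    # folding (u, v) back when u + v > 1 keeps the point inside.
    u, v = rng.random(n_points), rng.random(n_points)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    t = tris[idx]
    return t[:, 0] + u[:, None] * (t[:, 1] - t[:, 0]) \
                   + v[:, None] * (t[:, 2] - t[:, 0])
```

To downsample an already-sampled cloud to a fixed size (e.g. the 256 points asked about above), a simple `np.random.choice` over the point indices is usually enough.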

  2. You need to scale the models to a common value range (a unit sphere), meaning you should not have models of different scales in the same training set.

pournami123 commented 5 years ago

@seanhuang5104 As explained above, I understand the scales should be in the same range. But why is such a sampling needed for the classification procedure? Why can't the same process that is used in semantic segmentation to generate the HDF5 datasets be used here?

seanhuang5104 commented 5 years ago

@rejivipin I assume you mean why pcl_mesh_sampling is needed. If you visualize the raw data from ModelNet, the meshes only contain points at the corners (for example, a simple box has only 8 vertices), which is not enough data for the model to learn the overall shape. Points away from the corners are important too, since they provide information about the edges and surfaces that define the shape. pcl_mesh_sampling extracts point data from the model's surfaces, not only its corners.

pournami123 commented 5 years ago

@seanhuang5104 Exactly, and a big thanks, you cleared my doubt. If the data we are considering is vegetation, then mesh sampling might not be a solution, right? In such a case, how would we generate the HDF5 files?

seanhuang5104 commented 5 years ago

Your data is vegetation? I'm not sure I follow; would you mind explaining a bit more, since it may be domain specific?

pournami123 commented 5 years ago

Yes... I am working on crops. Crops don't have a regular geometry. My question is: for such a dataset, how would the input data file be generated?