charlesq34 / pointnet

PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation

[Documentation needed] Data sampling and restriction #168

Closed yiakwy closed 5 years ago

yiakwy commented 5 years ago

Topic

This issue is related to #42. I am curious about these lines used in train.py:

# train_one_epoch, line 186
current_data, current_label = provider.loadDataFile(fn_name)
current_data = current_data[:, 0:NUM_POINT,:]
...

It seems that each sample in a batch corresponds to a single object with one label.
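
For context, a minimal sketch of the shapes involved, assuming the standard ModelNet40 HDF5 files the repo downloads (the file path and exact shapes below are illustrative):

import provider  # from the pointnet repo root

NUM_POINT = 1024
# Each HDF5 file holds many clouds of 2048 points each.
current_data, current_label = provider.loadDataFile(
    'data/modelnet40_ply_hdf5_2048/ply_data_train0.h5')
print(current_data.shape)   # e.g. (2048, 2048, 3): samples x points x XYZ
print(current_label.shape)  # e.g. (2048, 1): one class id per object
# The slice keeps only the first NUM_POINT points of every cloud.
current_data = current_data[:, 0:NUM_POINT, :]  # -> (2048, 1024, 3)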

Question

I mainly have four questions:

  1. Why do we always use a fixed number of points? I don't think it is necessary. I plan to develop a middleware layer that resamples each input to a fixed size before the feature extractor (a sketch of such a layer follows this list).

  2. Always the top 1024: taking the first 1024 points of each cloud is not really a sampling process. Why?

  3. Why only 1024 points per sample? It is too small. With a 16-beam Velodyne lidar, each PCD file (a lidar X, Y, Z point file, like PLY) will typically contain around 1M points; 1024 is too small for an object.

  4. I drew the transform layer and the feature layer and found that they are just 2D convolutions and transformations. How does that work? (See the second sketch after this list.)
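
Not from the repo: a minimal NumPy sketch of such a resampling middleware, replacing the first-NUM_POINT slice with uniform random sampling (the function name sample_points is hypothetical):

import numpy as np

def sample_points(cloud, num_points=1024, rng=None):
    # cloud: (N, 3) array of XYZ points; N may differ per scan.
    # Returns a fixed-size (num_points, 3) array.
    rng = rng or np.random.default_rng()
    n = cloud.shape[0]
    if n >= num_points:
        idx = rng.choice(n, num_points, replace=False)  # subsample without replacement
    else:
        idx = rng.choice(n, num_points, replace=True)   # pad small clouds by repeating points
    return cloud[idx]

# Usage: turn variable-size scans into a fixed-size batch.
# batch = np.stack([sample_points(c) for c in clouds])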
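On question 4: in the repo these layers are built with tf_util.conv2d, using a [1, 3] kernel (then [1, 1] kernels) over input reshaped to (batch, num_points, 3, 1). Because the kernel never spans two points, every point is transformed independently by the same weights, i.e. a shared per-point MLP. A minimal NumPy sketch of that equivalence (all shapes here are illustrative):

import numpy as np

batch, num_points, out_channels = 2, 1024, 64
points = np.random.randn(batch, num_points, 3)  # XYZ per point

# Shared weights: equivalent to one (1, 3) conv kernel with 64 output
# channels sliding over (batch, num_points, 3, 1) with VALID padding.
W = np.random.randn(3, out_channels)
b = np.zeros(out_channels)

# Apply the same affine map + ReLU to every point independently.
features = np.maximum(points @ W + b, 0.0)  # (batch, num_points, 64)

Only the final max pooling across the point dimension mixes information between points; everything before it operates per point.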

yiakwy commented 5 years ago

I resolved this by developing various visualization toolkits; see https://github.com/charlesq34/pointnet/issues/116.

This issue can be closed now.