Topic
This issue is related to #42. I am also curious about the line used in train.py.
It seems that each batch corresponds to a single object with one label.
Question
I mainly have 4 questions:

1. Why do we always use a fixed number of points? I don't think it is necessary. I am going to develop a middleware layer that resizes each input before the feature extractor.
2. Why always take the top 1024 points? That is not really a sampling process.
3. Why only 1024 points per batch? It seems too small. Suppose we use a 16-beam Velodyne lidar: I would say each pcd file (a lidar point X, Y, Z file, like PLY) will typically contain around 1M points, so 1024 is too small for an object.
4. I drew out the transform layer and the feature layer and found that they are just 2D convolutions and transformations. How do they work?
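To make questions 2 and 4 concrete, here is a minimal sketch (my own code, not from this repo; it assumes NumPy, and the function name `sample_points` is hypothetical). It shows (a) drawing a fixed number of points by random sampling instead of taking the first 1024, and (b) why a "2D convolution" with a 1x1 kernel is just the same small weight matrix applied to every point independently, i.e. a shared per-point MLP:

```python
import numpy as np

def sample_points(cloud, n=1024):
    """Randomly sample n points from an (N, 3) cloud; sample with
    replacement only if the cloud has fewer than n points."""
    rng = np.random.default_rng(0)
    replace = cloud.shape[0] < n
    idx = rng.choice(cloud.shape[0], size=n, replace=replace)
    return cloud[idx]

cloud = np.random.rand(100_000, 3)   # stand-in for one large lidar sweep
pts = sample_points(cloud, 1024)     # -> (1024, 3)

# A 1x1 "2D convolution" over a (3, N, 1) point tensor never mixes
# neighbouring points: each point is mapped through the same 3 -> 64
# weights. That is exactly a per-point matrix multiply:
W = np.random.randn(64, 3)           # the 1x1 conv kernel, flattened
b = np.random.randn(64)
out = pts @ W.T + b                  # (1024, 64): per-point features
```

This is only an illustration of the idea being asked about, not the repository's actual implementation.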