Hello!
I have a question about how the demo reads point clouds: why is the maximum number of points set to 50,000? When I change this parameter, the program becomes very time consuming, so I would like to understand the purpose of this limit and the reasoning behind it.
from glob import glob

import numpy as np

def _load_pcs(self, pc_dir):
    max_pts_per_pc = 50000
    # Collect point cloud files: try extensions starting with ".p" (e.g. .ply, .pcd),
    # then fall back to .npy arrays.
    pc_fns = glob(f'{pc_dir}/*.p*')
    if len(pc_fns) < 1:
        pc_fns = glob(f'{pc_dir}/*.npy')
    pc_fns.sort()
    pcds, frames = [], []
    for pc_fn in pc_fns:
        # Frame name = file name without directory and extension.
        frame = pc_fn.split('/')[-1]
        frame = frame.split('.')[-2]
        frames.append(frame)
        pcd = self._load_pc(pc_fn)
        # Shuffle every point, then keep the first max_pts_per_pc: a uniform random subsample.
        pcd = np.random.permutation(pcd)[:max_pts_per_pc]
        pcds.append(pcd)
    return pcds, frames
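From reading this, my understanding is that np.random.permutation(pcd) shuffles and copies the entire cloud before truncating, so the load step costs time and memory proportional to the full cloud size rather than to max_pts_per_pc. Would drawing random indices instead, as in the sketch below, be an equivalent subsample? (subsample_pc and its seed argument are my own illustrative names, not from the repo.)

import numpy as np

def subsample_pc(pcd, max_pts=50000, seed=None):
    # Keep at most max_pts points without shuffling/copying the whole array:
    # sample row indices without replacement and gather only those rows.
    n = len(pcd)
    if n <= max_pts:
        return pcd
    rng = np.random.default_rng(seed)
    idx = rng.choice(n, size=max_pts, replace=False)
    return pcd[idx]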
In addition, each point cloud in my own dataset typically contains around 20 million points, which is a very large amount of data. Are there any settings that could improve the efficiency of running the program?
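For example, would voxel downsampling the raw scans once, as a preprocessing step, be a reasonable way to handle clouds of this size? A rough sketch of what I have in mind, assuming Open3D is available (this repo may not use it, and voxel_size would need tuning for my data):

import numpy as np
import open3d as o3d  # assumption: Open3D installed; not necessarily used by this repo

def voxel_downsample(points, voxel_size=0.05):
    # Keep one averaged point per voxel of side length voxel_size
    # (same units as the coordinates). Unlike random subsampling,
    # this preserves a roughly uniform spatial density.
    pc = o3d.geometry.PointCloud()
    pc.points = o3d.utility.Vector3dVector(np.asarray(points)[:, :3])
    down = pc.voxel_down_sample(voxel_size=voxel_size)
    return np.asarray(down.points)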