Hello everyone,
I'm working on the Hessigheim dataset, whose training set alone contains 128,887,190 points. Will subdividing the training/validation sets into smaller patches affect the quality of the learned model? The paper says: "To train our RandLA-Net in parallel, we sample a fixed number of points (∼10^5) from each point cloud as the input." Does that mean the gigantic training point cloud is represented by just 10^5 sampled points? That would be nowhere near enough.
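For context, my current reading of that sentence is that the ~10^5 points are drawn *per training iteration*, not once for the whole dataset, so over many epochs the network still sees a large fraction of the cloud. A minimal sketch of that interpretation (the array shapes, label count, and helper name are my own assumptions, not from the paper or its code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a large labeled point cloud: N points with XYZ coordinates.
# (The real Hessigheim training set has ~1.29e8 points; N is kept small here.)
N = 1_000_000
points = rng.random((N, 3)).astype(np.float32)
labels = rng.integers(0, 11, size=N)  # hypothetical class count

def sample_iteration_input(points, labels, n_samples=100_000):
    """Draw a fresh fixed-size random subset for one training iteration.

    Each call returns a different ~10^5-point subset, so repeated calls
    across epochs cover far more of the cloud than a single draw would.
    """
    idx = rng.choice(len(points), size=n_samples, replace=False)
    return points[idx], labels[idx]

batch_pts, batch_lbls = sample_iteration_input(points, labels)
print(batch_pts.shape)  # (100000, 3)
print(batch_lbls.shape)  # (100000,)
```

If that reading is right, the question reduces to whether patch-based subdivision changes the sampling distribution enough to matter, compared with drawing random subsets from the full cloud.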