Shershebnev opened this issue 4 years ago
@Shershebnev Yup, I faced the same issue. When I looked at the results visually, there was a lot of misclassification happening. Which sensor was the custom dataset taken from, and how many points are available in the full point cloud?
The data was initially collected using a UAV with some Velodyne lidar (not sure about the exact model), but I'm using only parts of it (trees) to generate my own synthetic data. The number of points per sample is somewhere between 150,000 and 1,000,000.
Interestingly, I ran the test data set similarly to how it is done in the tester* files: for each sample I cropped the point cloud like during training, then repeated this until all points of each sample had been used at least once, and now it gives me pretty much the same performance as in validation. To give a few:
```
Visited all points
eval accuracy: 0.9447332512954453
mean IOU:0.8779103366413493
Mean IoU = 87.8%
--------------------------------
87.79 | 97.08 83.99 77.95 92.14
--------------------------------
eval accuracy: 0.9542383218268254
mean IOU:0.8979474962906226
Mean IoU = 89.8%
--------------------------------
89.79 | 98.10 85.63 83.03 92.41
--------------------------------
eval accuracy: 0.9420229702689588
mean IOU:0.8837565706539428
Mean IoU = 88.4%
--------------------------------
88.38 | 97.96 85.66 80.24 89.65
--------------------------------
eval accuracy: 0.9279492206026496
mean IOU:0.8547450967532508
Mean IoU = 85.5%
--------------------------------
85.47 | 98.23 79.75 74.89 89.03
--------------------------------
```
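For reference, the reported mean IoU in each block is just the unweighted average of the per-class IoUs printed after the `|`; e.g. for the first run:

```python
import numpy as np

# Per-class IoUs from the first run above, one value per class.
class_ious = np.array([97.08, 83.99, 77.95, 92.14])

# Mean IoU is the unweighted average over classes.
mean_iou = class_ious.mean()
print(f"Mean IoU = {mean_iou:.2f}%")  # → Mean IoU = 87.79%
```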
However, I don't think running X crops over and over is the greatest approach :(
@Shershebnev Could you tell me how to run inference using the weights provided by the author, but on my own custom point clouds?
It's been quite some time, but I believe I ended up with the approach I described above, i.e. similar to what https://github.com/QingyongHu/RandLA-Net/blob/master/tester_SemanticKITTI.py does: run crops until all points have been covered, then assign each point's class based on the accumulated predictions.
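That idea can be sketched roughly as follows. This is not the repo's actual tester code: `predict_logits` is a hypothetical stand-in for a forward pass on a crop, and the crop here is a plain random subset rather than the spatial crop the real tester uses.

```python
import numpy as np

def predict_full_cloud(points, predict_logits, crop_size=65536, rng=None):
    """Cover every point with repeated crops and accumulate class votes.

    `predict_logits(idx)` stands in for running the network on the points
    selected by `idx`; it returns a (len(idx), num_classes) score array.
    """
    rng = np.random.default_rng(rng)
    n = len(points)
    votes = None
    covered = np.zeros(n, dtype=bool)
    while not covered.all():
        # Pick a crop; here just a random subset of indices.
        # The real tester crops spatially around a center point instead.
        idx = rng.choice(n, size=min(crop_size, n), replace=False)
        logits = predict_logits(idx)
        if votes is None:
            votes = np.zeros((n, logits.shape[1]))
        votes[idx] += logits        # accumulate scores for covered points
        covered[idx] = True
    return votes.argmax(axis=1)     # final class per point
```

Accumulating scores (rather than hard labels) means points that fall into several crops get a smoothed prediction, which is why this matches validation performance better than a single crop.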
Hi,
I'm training the model on my custom dataset, which has a format similar to SemanticKITTI, and I get good results during training on the validation set. If I run the model on the test set with grid subsampling and `crop_pc`, I also get pretty much the same performance. However, if I try to run the same model on the full point cloud without grid subsampling, I get significantly worse results.
I was under the impression that the model is agnostic to the number of points in the point cloud. Any suggestions on what could be going wrong here?
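For context, grid subsampling thins the cloud to roughly one point per voxel, so the point density the network sees is fixed regardless of how dense the raw scan is. A minimal illustrative sketch of the idea (the repo uses a compiled subsampling op and averages points per voxel; here we just keep the first point of each occupied voxel):

```python
import numpy as np

def grid_subsample(points, grid_size=0.06):
    """Keep one representative point per voxel of side `grid_size`.

    Illustrative only: assigns each point to an integer voxel index and
    keeps the first point that lands in each occupied voxel.
    """
    voxel_idx = np.floor(points / grid_size).astype(np.int64)
    # `return_index` gives the position of the first point in each voxel.
    _, first = np.unique(voxel_idx, axis=0, return_index=True)
    return points[np.sort(first)]
```

A density mismatch between training input (subsampled) and inference input (raw full cloud) is one plausible reason the same weights behave differently on the full point cloud.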