@WangYueFt @AnTao97
I am using DGCNN for a semantic segmentation task on lidar point clouds. Based on my results, I found that if I take only XYZ as input, the results are significantly better than with XYZ + intensity (intensity is a scalar measuring the reflectance of the lidar ray). I want to discuss why 3D input beats 4D input for DGCNN. Here are my thoughts, and I hope you can give me some advice.
1. For kNN, I pass only the normalized XYZ to compute the distances. Should I pass XYZ + intensity to compute the kNN instead? (See the sketch after this list.)
2. For the K value I use the code's default, which is 20. Should I adjust K?
Any other suggestions would be greatly appreciated.
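To make question 1 concrete, here is a minimal sketch of how the neighbor search could be restricted to XYZ while the edge features still use all four channels. It follows the pairwise-distance / `get_graph_feature` pattern common to the PyTorch DGCNN implementations; the `coord_dims` parameter is my own addition for illustration, not part of the original code.

```python
import torch

def knn(xyz, k=20):
    # xyz: (B, 3, N) normalized coordinates.
    # Negative squared pairwise distance, so topk picks nearest neighbors.
    inner = -2 * torch.matmul(xyz.transpose(2, 1), xyz)     # (B, N, N)
    sq = torch.sum(xyz ** 2, dim=1, keepdim=True)           # (B, 1, N)
    dist = -sq - inner - sq.transpose(2, 1)                 # -||xi - xj||^2
    return dist.topk(k=k, dim=-1)[1]                        # (B, N, k)

def get_graph_feature(x, k=20, coord_dims=3):
    # x: (B, C, N) full features, e.g. C = 4 for XYZ + intensity.
    # Neighbors are found on the first coord_dims channels (XYZ) only,
    # but edge features are built from all C channels.
    B, C, N = x.size()
    idx = knn(x[:, :coord_dims, :], k=k)                    # kNN on XYZ only
    idx_base = torch.arange(0, B, device=x.device).view(-1, 1, 1) * N
    idx = (idx + idx_base).view(-1)
    x = x.transpose(2, 1).contiguous()                      # (B, N, C)
    feature = x.view(B * N, -1)[idx, :].view(B, N, k, C)    # gathered neighbors
    x = x.view(B, N, 1, C).repeat(1, 1, k, 1)
    # Standard DGCNN edge feature: concat(neighbor - center, center).
    feature = torch.cat((feature - x, x), dim=3)
    return feature.permute(0, 3, 1, 2).contiguous()         # (B, 2C, N, k)
```

With `coord_dims=3` the graph topology is unaffected by the intensity scale, so if 4D input still degrades results, the problem would come from the edge features themselves rather than from intensity distorting the neighborhood metric.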
Note that I believe the lidar intensity is not a dummy variable: I also tried other models such as PointNet and PointNet++, and all of them perform better when I pass the 4D input.