I am using DGCNN to classify LiDAR point clouds. I have trained the model on the ModelNet40 train data (2048 XYZ points, 250 epochs), and the results are good when I classify objects from the ModelNet40 test data.
But when I try to classify real data collected by a Velodyne sensor, the prediction is mostly wrong. Please find the attached example. Most of the time I get the output Plant, Guitar, or Stairs. I have shifted my objects to the center of the coordinate frame and have normalized the values to [-1, 1]. I have even tried to clean the boundaries.
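For reference, this is the centering and normalization step I am applying, as a minimal sketch (the function name is mine; scaling by the farthest point from the centroid puts all coordinates in [-1, 1], matching the unit-sphere normalization commonly used for ModelNet40):

```python
import numpy as np

def center_and_normalize(points):
    """Shift an (N, 3) point cloud to the origin and scale it into the unit sphere."""
    points = points - points.mean(axis=0)           # move centroid to the origin
    scale = np.max(np.linalg.norm(points, axis=1))  # distance of the farthest point
    return points / scale                           # all coordinates now in [-1, 1]
```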
Can somebody suggest what I could be doing wrong?
I found that I have to feed the point cloud in the right orientation, or else the result can be bad. The network is invariant to geometric transforms like rotation only up to an extent.
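If you cannot recover the canonical orientation of your Velodyne objects, one workaround is test-time rotation voting: classify several rotated copies of the cloud about the vertical axis and average the scores. A minimal sketch, assuming `model` is a hypothetical callable that maps an (N, 3) cloud to a 1-D array of class logits:

```python
import numpy as np

def rotate_z(points, angle_rad):
    """Rotate an (N, 3) point cloud about the vertical (Z) axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return points @ R.T

def predict_with_rotation_voting(model, points, n_views=12):
    """Average class logits over several headings and return the winning class.

    `model` is assumed to take an (N, 3) array and return class logits
    (hypothetical API; adapt to however your DGCNN wrapper is invoked).
    """
    logits = [model(rotate_z(points, 2.0 * np.pi * k / n_views))
              for k in range(n_views)]
    return int(np.argmax(np.mean(logits, axis=0)))
```

Training with random rotation augmentation would address the same mismatch from the other side, at the cost of retraining.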