We provide only the labels for the points and use the original point clouds from the KITTI odometry dataset. For our baselines and RangeNet++ (see https://github.com/PRBonn/lidar-bonnetal), we determined the mean and standard deviation over all points of the training set and compute (x - mean) / std per channel. You can also take our values from the config files of RangeNet (a short sketch of applying them follows the list below):
img_means: #range,x,y,z,signal
- 12.12
- 10.88
- 0.23
- -1.04
- 0.21
img_stds: #range,x,y,z,signal
- 12.32
- 11.47
- 6.91
- 0.86
- 0.16
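For concreteness, here is a minimal sketch of applying these values, assuming a [5, H, W] range image whose channels are ordered range, x, y, z, signal as in the comment above; the function and argument names are only illustrative, not part of lidar-bonnetal's API:

import numpy as np

# Channel-wise means/stds from the RangeNet++ config (order: range, x, y, z, signal).
IMG_MEANS = np.array([12.12, 10.88, 0.23, -1.04, 0.21], dtype=np.float32)
IMG_STDS  = np.array([12.32, 11.47, 6.91, 0.86, 0.16], dtype=np.float32)

def normalize_range_image(img, valid_mask=None):
    """Standardize a [5, H, W] range image with (x - mean) / std per channel.

    `img` and `valid_mask` are hypothetical names; lidar-bonnetal additionally
    zeroes out pixels without a valid laser return, which `valid_mask` mimics.
    """
    img = (img - IMG_MEANS[:, None, None]) / IMG_STDS[:, None, None]
    if valid_mask is not None:          # boolean/float mask of shape [H, W]
        img = img * valid_mask[None, :, :]
    return img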
The actual scans used to determine these values have unfortunately been lost, so we simply kept these values. If you want all points in [-1, 1], you can also just normalize by 1/50, since we only labeled points up to 50 meters (see the sketch below). What matters is only that you use our values to normalize the range images if you want to use our pre-trained models from lidar-bonnetal.
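If you only need the raw point coordinates in roughly [-1, 1] rather than the range-image standardization above, the 1/50 scaling is a one-liner; here `points` is assumed to be an [N, 4] array of x, y, z, remission, and the clipping of points beyond 50 m is my own optional addition:

import numpy as np

def scale_points(points):
    """Scale x, y, z into roughly [-1, 1]; labels only cover points up to 50 m."""
    scaled = points.copy()
    scaled[:, :3] = np.clip(points[:, :3] / 50.0, -1.0, 1.0)
    return scaled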
There is not much activity on this issue and it seems to be resolved, so I will close it now. If you still have doubts, please comment here or re-open the issue.
Have the values of the three-dimensional coordinates x, y, z in the SemanticKITTI dataset been normalized to [-1, 1]? If not, how should one normalize the tensor?