subversive009 opened 1 month ago
For the KITTI Odometry dataset, we add random distortion while generating the GT pose (link).
The bug has been fixed in the Nuscenes dataset (link) by adding a random seed.
You could use the `generate_random_transform` function in `nuscenes.py` directly, and we will fix the bug in the near future.
Although the distortion is random, the evaluation results should be very close to the results reported in the paper. You can give it a try.
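For anyone who wants to reproduce this before the fix lands, here is a minimal sketch of what a seeded random SE(3) perturbation could look like. It only mirrors the idea of `generate_random_transform` in `nuscenes.py`; the argument names, perturbation ranges, and Euler-angle convention are assumptions, not the repo's exact implementation:

```python
import numpy as np

def generate_random_transform(max_rot_deg=10.0, max_trans=0.2, seed=None):
    """Sketch of a seeded random SE(3) perturbation (hypothetical signature;
    a fixed seed makes the generated GT_P reproducible across runs)."""
    rng = np.random.default_rng(seed)
    # Random rotation: uniform Euler angles within +/- max_rot_deg.
    angles = np.deg2rad(rng.uniform(-max_rot_deg, max_rot_deg, size=3))
    cx, cy, cz = np.cos(angles)
    sx, sy, sz = np.sin(angles)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx
    # Random translation within +/- max_trans (meters).
    t = rng.uniform(-max_trans, max_trans, size=3)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T  # 4x4 homogeneous GT_P
```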
Thank you for your reply. I still have some visualization questions. I tried using these randomly generated GT_P to process both the point clouds and the images for visualization, but some of them failed to display properly due to the large random rotation angles. Could you tell me how the visualization is done in your paper, or provide the visualization code for KITTI?
[images: normal display vs. failed display]
Here is the link to the visualization code for the Nuscenes dataset. We will upload the visualization code for the KITTI Odometry dataset later.
It seems the image was rotated and scaled, which should not happen: any distortion in the extrinsic parameters should not affect the intrinsic parameters of the camera.
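For reference, a typical way to visualize this is to project the (perturbed) point cloud onto the unmodified image: the random GT_P is applied to the points only, and the intrinsics K are applied afterwards, so the image itself is never warped. A minimal sketch along these lines (numpy only; `project_points`, the argument names, and the (N, 3) point layout are my assumptions, not the repo's actual visualization code):

```python
import numpy as np

def project_points(points_lidar, GT_P, K):
    """Project LiDAR points into the image plane. Only the extrinsics (GT_P)
    carry the random distortion; the image and intrinsics K stay untouched.
    points_lidar: (N, 3) array, GT_P: 4x4, K: 3x3."""
    N = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((N, 1))])  # (N, 4) homogeneous
    pts_cam = (GT_P @ pts_h.T).T[:, :3]                 # LiDAR -> (perturbed) camera frame
    in_front = pts_cam[:, 2] > 0                        # keep points in front of the camera
    pts_cam = pts_cam[in_front]
    uv = (K @ pts_cam.T).T                              # apply intrinsics last
    uv = uv[:, :2] / uv[:, 2:3]                         # perspective division
    return uv, pts_cam[:, 2]                            # pixel coordinates and depths
```

Scattering the returned uv coordinates (e.g. colored by depth) over the original image should display correctly even for large random rotations, since only the projected points move, never the image.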
Hello, I noticed that the final output P_pred is actually the inverse of the randomly generated GT_P. Does this mean that the model actually recovers a point cloud that has been transformed into a random coordinate system back to the camera coordinate system? Are randomly generated extrinsic parameters used as GT_P because the variation between the extrinsic parameters in the dataset is too small? If I want to input an image and a point cloud and obtain extrinsic parameters similar to those in the label file, will that be impossible because the dataset is not rich enough?
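If it helps, the composition the question describes can be written down directly. A minimal sketch, under an assumed convention (not confirmed in this thread) that the decalibrated input is obtained by applying GT_P on top of the labeled extrinsics T_cam_lidar:

```python
import numpy as np

def recovered_extrinsics(P_pred, T_init):
    """If the decalibrated frame is T_init = GT_P @ T_cam_lidar and the model
    outputs P_pred ~= inv(GT_P), composing them recovers the labeled extrinsics."""
    return P_pred @ T_init  # ~= T_cam_lidar

# Sanity check with arbitrary transforms (translations only, for brevity).
rng = np.random.default_rng(0)
T_cam_lidar = np.eye(4); T_cam_lidar[:3, 3] = rng.normal(size=3)
GT_P = np.eye(4); GT_P[:3, 3] = rng.normal(size=3)
P_pred = np.linalg.inv(GT_P)  # ideal prediction
assert np.allclose(recovered_extrinsics(P_pred, GT_P @ T_cam_lidar), T_cam_lidar)
```

Under this reading, the model undoes the synthetic decalibration, so recovering the labeled extrinsics from a raw image/point-cloud pair depends on composing P_pred with the initial (decalibrated) transform rather than reading P_pred off directly.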
Why do the ground truth values of P for each frame in the evaluation results differ? Shouldn't the values for each sequence in the annotation file be the same? Also, could you explain the meaning of each value in the output? Thank you.