ToABetterDay opened this issue 1 year ago
These are calibration parameters (gravity and camera intrinsics) associated with the images of the KITTI dataset. To fine-tune with your own images, you need the ground-truth 3-DoF pose, gravity direction, and camera intrinsics of each image. You may infer the gravity and intrinsics with a deep network, such as PerspectiveFields as we do in the demo, but you still need the GT 3-DoF poses.
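For illustration, a custom example could bundle these quantities like this. This is only a minimal sketch; the field names are placeholders, not OrienterNet's actual schema:

```python
import numpy as np

def make_example(image_path: str,
                 xy: np.ndarray,          # ground-truth 2D position (meters)
                 yaw_deg: float,          # ground-truth heading -> 3-DoF pose
                 gravity_dir: np.ndarray, # gravity vector in the camera frame
                 fx: float, fy: float, cx: float, cy: float) -> dict:
    """Bundle the ground truth and calibration for one training image.
    All names here are illustrative placeholders."""
    return {
        "image_path": image_path,
        "pose_3dof": np.array([xy[0], xy[1], yaw_deg]),
        "gravity": gravity_dir / np.linalg.norm(gravity_dir),
        "intrinsics": np.array([[fx, 0.0, cx],
                                [0.0, fy, cy],
                                [0.0, 0.0, 1.0]]),
    }
```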
If there is interest, I could provide a simplified dataloader to make it easier to fine-tune with custom data - but this won't happen before September.
Thank you for your explanation!
I also have a question about the KITTI dataset. In train_files.txt (and the test files), the entries look like: 2011_09_26/2011_09_26_drive_0009_sync/0000000195.png -0.5595767 0.3615212 -0.92674136. What are the three numbers? Are they the priors mentioned in the paper? It seems they are not used in training or evaluation.
These are the initial errors in (x, y, angle) defined by Shi et al. We use them only at test and validation time to ensure consistent results. At training time, they are randomized for each example.
(x, y) error: https://github.com/facebookresearch/OrienterNet/blob/213aff45ce49a6aea11d273d198d9c2969457e10/maploc/data/dataset.py#L105-L108
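As a rough illustration (not the repository's actual code), each entry can be parsed into an image path plus an (x, y, angle) shift; at training time you would ignore the stored shift and draw a fresh one per example. The sampling bounds below are placeholders, not the values used by OrienterNet:

```python
import random

def parse_line(line: str):
    """Split a train_files.txt entry into (image path, dx, dy, dangle)."""
    path, dx, dy, dangle = line.split()
    return path, float(dx), float(dy), float(dangle)

def sample_training_shift(max_xy: float = 32.0, max_angle: float = 45.0):
    """Draw a random initial (x, y, angle) error for one training example.
    The bounds here are illustrative placeholders."""
    return (random.uniform(-max_xy, max_xy),
            random.uniform(-max_xy, max_xy),
            random.uniform(-max_angle, max_angle))
```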
Thank you so much! Yes, they are shifts. May I ask where the code for the coarse location prior ξ_prior from the paper is? I didn't see it in the data preparation and setup parts.
At training time we randomly sample an offset from the ground truth. For evaluation, depending on the dataset, we proceed similarly or, if available, initialize from the GPS.
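A hedged sketch of how such a prior could be assembled; the helper name, the bounds, and the GPS fallback logic are assumptions for illustration, not OrienterNet's implementation:

```python
from typing import Optional
import numpy as np

def make_prior(pose_gt: np.ndarray, training: bool,
               stored_shift: Optional[np.ndarray] = None,
               gps_xy: Optional[np.ndarray] = None,
               max_xy: float = 32.0, max_angle: float = 45.0) -> np.ndarray:
    """Return a coarse (x, y, angle) prior. All specifics are placeholders."""
    if training:
        # Fresh random offset around the ground truth for every example.
        shift = np.random.uniform([-max_xy, -max_xy, -max_angle],
                                  [max_xy, max_xy, max_angle])
        return pose_gt + shift
    if gps_xy is not None:
        # Position initialized from GPS; the heading prior here is an assumption.
        return np.array([gps_xy[0], gps_xy[1], pose_gt[2] + stored_shift[2]])
    # Otherwise apply the fixed shift from the train/test files for reproducibility.
    return pose_gt + stored_shift
```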
Hi, thanks for sharing! I'm trying to fine-tune the model. May I ask how to produce the parameters in calib_cam_to_cam.txt, calib_imu_to_velo.txt, and calib_velo_to_cam.txt? Will fine-tuning be ineffective if I directly use the parameters provided with the KITTI dataset?
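For reference, the pinhole intrinsics of a rectified KITTI camera can be read from the P_rect_* entries of calib_cam_to_cam.txt. This sketch assumes the standard KITTI raw-data format and would need adapting if your own calibration files differ:

```python
import numpy as np

def load_kitti_intrinsics(calib_path: str, cam: int = 2) -> np.ndarray:
    """Read the 3x3 intrinsics of a rectified camera from calib_cam_to_cam.txt.
    Assumes the standard KITTI raw format: P_rect_0X followed by 12 values
    forming a row-major 3x4 projection matrix."""
    key = f"P_rect_{cam:02d}:"
    with open(calib_path) as f:
        for line in f:
            if line.startswith(key):
                P = np.array(line.split()[1:], dtype=float).reshape(3, 4)
                return P[:, :3]  # fx, fy, cx, cy sit in the left 3x3 block
    raise KeyError(f"{key} not found in {calib_path}")
```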