rpng / ar_table_dataset

Small-scale indoor table AR visual-inertial datasets with 6DoF groundtruth.

Ground truth files seem to mismatch with .bag /d455/rigidbody #1

Closed · ArmandB closed this issue 4 months ago

ArmandB commented 4 months ago

Hello, I think your dataset is really interesting, and I like how it showcases an AR use case. I'd like to run my own SLAM algorithm on it.

Was the /d455/rigidbody data post-processed in any way to create the ground truth files (potentially using the contents of groundtruth_info)? When I print it out using "ros_readbagfile" for table1, I get this: [screenshot of the /d455/rigidbody output]

Finding the closest matching timestamp in table_01.txt, I get this: [screenshot of the matching groundtruth row]

The position is very similar, but the orientation seems quite different. I just want to double-check which data and which timestamps are best to use here.

If the groundtruth data has been post-processed, do the image and IMU data also need to be post-processed? For example, does the IMU data need to be bias-corrected, and do the IMU/camera timestamps need to be corrected as well?

Thanks for your time in advance!

goldbattle commented 4 months ago

Yes, it was post-processed with the vicon2gt utility so that the trajectory is time-aligned and expressed in the IMU sensor frame rather than the mocap marker frame. You can refer to that project's tech report for details. If you are running a VIO system, the groundtruth should be directly usable. The rigid-body topic should not be used.

ArmandB commented 4 months ago

Sorry for re-opening this issue (I got a little too excited). I had hidden a second question in there: have the IMU and camera data been post-processed as well?

More concretely, to get the data into the same form as EuRoC:

  1. Do we need to subtract timeshift_cam_imu from the IMU timestamps to align them with the camera times?
  2. Do we need to correct the IMU values by subtracting accelerometer_random_walk and gyroscope_random_walk?

Sorry again for re-opening and thanks again for your time!

goldbattle commented 4 months ago

EuRoC has the camera and IMU time-synced, so there is nothing to do there.

But if you want to find the pose for a specific frame, then yes, you would need to use timeshift_cam_imu to get the time in the IMU clock frame, and then you can look in the groundtruth file to get its pose. Nothing in the groundtruth, nor in its generation, ever uses the camera, so this is something that needs to be done after the fact if you want to use the images.
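As a minimal sketch (my own, not from the dataset tooling), the lookup could be done as below, assuming the Kalibr convention t_imu = t_cam + timeshift_cam_imu and the comma-separated groundtruth layout quoted further down; the query time and shift value here are placeholders:

```python
import numpy as np

# Sketch: convert a camera timestamp into the IMU clock, then grab the
# nearest groundtruth row. Assumes t_imu = t_cam + timeshift_cam_imu
# (Kalibr convention) and a comma-separated file with time in ns first.

def load_groundtruth(path):
    # '#'-prefixed header lines are skipped by default
    return np.loadtxt(path, delimiter=",")

def pose_at_camera_time(gt, t_cam_s, timeshift_cam_imu_s):
    t_imu_ns = (t_cam_s + timeshift_cam_imu_s) * 1e9  # camera -> IMU clock
    idx = np.argmin(np.abs(gt[:, 0] - t_imu_ns))      # nearest timestamp
    return gt[idx, 1:4], gt[idx, 4:8]                 # position, q_wxyz

gt = load_groundtruth("table_01.txt")
pos, quat_wxyz = pose_at_camera_time(gt, t_cam_s=1234.567, timeshift_cam_imu_s=0.002)
```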

To get bias-corrected IMU measurements, yes, you need to subtract the biases. These are actually produced in the CSV (the last few columns). You would just linearly interpolate between two times in the CSV to get the bias at the IMU timestamp. There is no time offset here, since the groundtruth should already be in the IMU clock frame.

```
#time(ns),px,py,pz,qw,qx,qy,qz,vx,vy,vz,bwx,bwy,bwz,bax,bay,baz
```
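For example, a bias correction along these lines (only a sketch under the column layout above; the file name is from this thread, everything else is assumed):

```python
import numpy as np

# Sketch: subtract linearly-interpolated groundtruth biases from a raw IMU
# sample. Columns per the header above: 11-13 gyro bias, 14-16 accel bias,
# time in ns in column 0. Input times must already be in the IMU clock frame.

gt = np.loadtxt("table_01.txt", delimiter=",")
t_gt = gt[:, 0]

def correct_imu_sample(t_imu_ns, gyro, accel):
    bw = np.array([np.interp(t_imu_ns, t_gt, gt[:, c]) for c in (11, 12, 13)])
    ba = np.array([np.interp(t_imu_ns, t_gt, gt[:, c]) for c in (14, 15, 16)])
    return gyro - bw, accel - ba
```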
ArmandB commented 4 months ago

Thank you so much for your speedy response!

I think we're on the same page: the camera and IMU in the .bag are not currently time-synced, and we need to use timeshift_cam_imu to sync them offline if the VIO algorithm doesn't already take this as an input. The mocap is already in the IMU clock frame, so the best approach is to shift the camera timestamps so that all three (IMU, mocap, camera) are in the IMU clock frame.
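For reference, one way to do that offline would be to rewrite the bag with shifted camera headers, roughly like the sketch below. This is only an illustration: the image topic name and the shift value are placeholders, not taken from the dataset.

```python
import rosbag
import rospy

TIMESHIFT_CAM_IMU = 0.002  # seconds; placeholder, read it from the calibration
CAM_TOPICS = {"/d455/color/image_raw"}  # placeholder topic name

# t_imu = t_cam + timeshift_cam_imu, so shift camera headers into the IMU clock
with rosbag.Bag("table_01.bag") as inbag, \
     rosbag.Bag("table_01_synced.bag", "w") as outbag:
    for topic, msg, t in inbag.read_messages():
        if topic in CAM_TOPICS and msg._has_header:
            msg.header.stamp += rospy.Duration.from_sec(TIMESHIFT_CAM_IMU)
        outbag.write(topic, msg, t)
```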

The groundtruth IMU biases are in the groundtruth file. We could probably use these to de-noise the IMU, but it would likely be "cheating", because the SLAM algorithm would not have this data (even the initial value at t=0) available online.

Thank you again for your help, I appreciate you!