utiasASRL / pyboreas

Devkit for the Boreas autonomous driving dataset.
BSD 3-Clause "New" or "Revised" License

Wrong poses for boreas-objects-v1 #28

Closed: cattaneod closed this issue 1 year ago

cattaneod commented 1 year ago

Some sequences of the boreas-objects-v1 dataset have wrong poses: some are merely noisy, while others are completely wrong. For example, in sequence 16 the car is standing at a traffic light (as seen in the first and last images of the sequence, attached to this message), while the plot of the poses (also attached) shows the car moving 50 meters.

Attachments: first_image, last_image, path

keenan-burnett commented 1 year ago

I'll take a look.

keenan-burnett commented 1 year ago

Can you tell me which sequences have pose issues, and the timestamps at which they occur?

cattaneod commented 1 year ago

The images are from sequence 16, from timestamp 1598990189106857 to 1598990208603736.

As I mentioned, there are other sequences where the poses are noisy; I don't have a full list, though. To give some context, I'm trying to generate a point cloud map for every sequence by removing dynamic objects and combining the lidar scans based on their poses. I noticed that some maps look OK, many look noisy, and a few look completely wrong (like the one from sequence 16). Some images are attached. I believe there is drift in the Kalman filter, especially when the car stops, but that's just my guess. In any case, the poses are not reliable enough to be used as ground truth.

Sequence 1, ok map: ok_map

Sequence 8, noisy map: noisy_map

Sequence 16, wrong map: wrong_map
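The map-building step described above (transform each lidar scan by its pose into a common frame, then stack the points) can be sketched roughly as follows. This is a minimal illustration, not pyboreas code; `aggregate_scans` is a hypothetical helper, and it assumes each pose is a 4x4 homogeneous `T_world_sensor` matrix. Dynamic-object removal is omitted.

```python
import numpy as np

def aggregate_scans(scans, poses):
    """Stack lidar scans into one world-frame point cloud.

    scans: list of (N_i, 3) arrays of points in the sensor frame.
    poses: list of (4, 4) homogeneous T_world_sensor matrices, one per scan.
    Returns a single (sum N_i, 3) array of world-frame points.
    (Hypothetical helper for illustration; not part of pyboreas.)
    """
    world_points = []
    for pts, T in zip(scans, poses):
        # Homogenize: append a column of ones so the 4x4 transform applies.
        homog = np.hstack([pts, np.ones((pts.shape[0], 1))])  # (N, 4)
        # Apply T to each row: (T @ p) for every point p, expressed as homog @ T.T
        world_points.append((homog @ T.T)[:, :3])
    return np.vstack(world_points)
```

With accurate poses, overlapping static structure from consecutive scans lines up in the merged cloud; noisy or wrong poses smear it, which is what the attached maps show.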

keenan-burnett commented 1 year ago

Okay, thanks. I'm working on a fix that I hope to have done by the end of today. Basically, boreas-objects-v1 is our oldest log and we collected it before we knew that we needed to collect the Applanix raw logs to do post-processing. So, this is the only sequence with online GPS/INS poses instead of post-processed poses. Locally, this looks okay in some places where there is sufficient GPS coverage. However, this is also one of the only sequences that was collected in downtown Toronto. So, in some segments of the run, we're driving through urban canyons with multipath reflections. I suspect that the GPS/INS is simply lost in these regions.

I don't have the ability to go back and post-process the GPS/INS data for boreas-objects-v1 for the reason I mentioned above. However, what I can do is replace the GPS poses with lidar odometry poses for the sequences that appear to be "noisy" or "wrong" as you have noted. You can find a link to our lidar odometry and mapping pipeline here: https://github.com/utiasASRL/vtr3

There shouldn't be any changes required on your end. My plan was to use the first pose of each sequence from boreas-objects-v1, and then replace the subsequent poses with our lidar odometry. The drift rate for our lidar odometry is about 0.5% in translation error. Since the sequences are usually quite short (10-30s), the absolute drift should be minimal. I only plan on replacing the GPS poses that are noisy / completely bad. Let me know if you think this is an acceptable fix.
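The proposed fix (keep the first trusted global pose, then chain relative lidar-odometry transforms onto it) can be sketched with a minimal numpy example. The function name `replace_with_odometry` and the exact transform conventions are assumptions for illustration, not the actual pyboreas or vtr3 code:

```python
import numpy as np

def replace_with_odometry(T_world_first, odometry_rel):
    """Rebuild a pose sequence from one global anchor plus relative odometry.

    T_world_first: (4, 4) first pose of the sequence in the world frame
                   (the one trusted GPS/INS pose).
    odometry_rel:  list of (4, 4) relative transforms T_{k-1,k} from lidar
                   odometry, one per subsequent frame.
    Returns the list of reconstructed world-frame poses.
    (Hypothetical sketch; conventions assumed, not from pyboreas/vtr3.)
    """
    poses = [T_world_first]
    for T_rel in odometry_rel:
        # Chain each relative motion onto the previous reconstructed pose.
        poses.append(poses[-1] @ T_rel)
    return poses
```

As a rough sanity check on the drift claim: at 0.5% translation error, a 20 s urban sequence covering on the order of 200 m would accumulate roughly 1 m of error, which is why the absolute drift over these short sequences should stay small.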

I can provide these updated poses to you for testing. If you're good with them, I can update the poses stored in the S3 bucket for others to use.

cattaneod commented 1 year ago

Sounds good to me, I'm available to test the poses and provide feedback. Will the poses of other sensors (I'm interested in camera specifically) also be updated?

keenan-burnett commented 1 year ago

Yup, I'll update them all.

keenan-burnett commented 1 year ago

Apologies for the delay. Try these poses: https://drive.google.com/file/d/1cz1iA4o07z_KGy1sjAGUWoBXesRP8SSc/view?usp=sharing

keenan-burnett commented 1 year ago

@cattaneod Any feedback?

keenan-burnett commented 1 year ago

@cattaneod In the absence of any feedback, I'm going to update the poses in the public S3 bucket with my proposed fix. Feel free to open another git issue if you encounter any other problems.