Closed: cattaneod closed this issue 1 year ago
I'll take a look.
Can you tell me which sequences have issues with the pose, and the timestamps of those?
The images are from sequence 16, from timestamp 1598990189106857 to 1598990208603736.
As I mentioned, there are other sequences where the poses are noisy; I don't have a full list, though. To give some context, I'm trying to generate a point cloud map for every sequence by removing dynamic objects and combining lidar scans based on their poses. I noticed that some maps look okay, many look noisy, and a few look completely wrong (like the one from sequence 16). Some images are attached. I believe there is drift in the Kalman filter, especially when the car stops, but that's just my guess. In any case, the poses are not reliable enough to be used as ground truth.
Sequence 1, ok map:
Sequence 8, noisy map:
Sequence 16, wrong map:
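For context, the map-building step I'm using boils down to transforming each scan into the world frame with its pose and concatenating the results; if the poses drift, the accumulated map smears exactly as in the attached images. A minimal sketch of that accumulation (the function name and the 4x4 sensor-to-world pose convention are my own assumptions, not part of the dataset devkit):

```python
import numpy as np

def aggregate_scans(scans, poses):
    """Accumulate lidar scans into a single world-frame point cloud.

    scans: list of (N_i, 3) arrays of points in the sensor frame
    poses: list of (4, 4) homogeneous sensor-to-world transforms
    """
    world_points = []
    for points, T in zip(scans, poses):
        # Homogenize, apply the pose, then drop the homogeneous coordinate.
        homo = np.hstack([points, np.ones((points.shape[0], 1))])
        world_points.append((homo @ T.T)[:, :3])
    return np.vstack(world_points)
```

Any error in `poses` shows up directly as misaligned duplicates of static structure, which is why the noisy sequences are easy to spot visually.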
Okay, thanks. I'm working on a fix that I hope to have done by the end of today.
Basically, boreas-objects-v1 is our oldest log, and we collected it before we knew that we needed to collect the Applanix raw logs to do post-processing. So, this is the only sequence with online GPS/INS poses instead of post-processed poses. Locally, this looks okay in some places where there is sufficient GPS coverage. However, this is also one of the only sequences that was collected in downtown Toronto. So, in some segments of the run, we're driving through urban canyons with multipath reflections. I suspect that the GPS/INS is simply lost in these regions.
I don't have the ability to go back and post-process the GPS/INS data for boreas-objects-v1 for the reason I mentioned above. However, what I can do is replace the GPS poses with lidar odometry poses for the sequences that appear to be "noisy" or "wrong" as you have noted. You can find a link to our lidar odometry and mapping pipeline here: https://github.com/utiasASRL/vtr3
There shouldn't be any changes required on your end. My plan is to use the first pose of each sequence from boreas-objects-v1, and then replace the subsequent poses with our lidar odometry. The drift rate for our lidar odometry is about 0.5% in translation. Since the sequences are usually quite short (10-30 s), the absolute drift should be minimal. I only plan on replacing the GPS poses that are noisy or completely bad. Let me know if you think this is an acceptable fix.
I can provide these updated poses to you for testing. If you're good with them, I can update the poses stored in the S3 bucket for others to use.
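The plan above (anchor the first GPS/INS pose, then chain odometry increments) can be sketched as follows. This is my own illustrative sketch, not code from the vtr3 pipeline; it assumes world-from-sensor 4x4 poses and relative odometry transforms T_{k-1 -> k}:

```python
import numpy as np

def replace_with_odometry(first_gps_pose, rel_odometry):
    """Rebuild world-frame poses from one GPS anchor plus odometry.

    first_gps_pose: (4, 4) world-from-sensor transform at the first frame
    rel_odometry: list of (4, 4) relative transforms T_{k-1 -> k}
    """
    poses = [first_gps_pose]
    for T_rel in rel_odometry:
        # Compose the previous world pose with the next odometry increment.
        poses.append(poses[-1] @ T_rel)
    return poses
```

With ~0.5% translational drift and 10-30 s sequences, the error accumulated by this chaining stays small in absolute terms, which is the rationale for the fix.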
Sounds good to me; I'm available to test the poses and provide feedback. Will the poses of the other sensors (I'm specifically interested in the camera) also be updated?
Yup, I'll update them all.
Apologies for the delay. Try these poses: https://drive.google.com/file/d/1cz1iA4o07z_KGy1sjAGUWoBXesRP8SSc/view?usp=sharing
@cattaneod Any feedback?
@cattaneod In the absence of any feedback, I'm going to update the poses in the public S3 bucket with my proposed fix. Feel free to open another git issue if you encounter any other problems.
Some sequences of the boreas-objects-v1 dataset have unreliable poses: some are noisy, and others are completely wrong. For example, in sequence 16 the car is stopped at a traffic light (as seen in the first and last images of the sequence, attached to this message), while the plot of the poses (also attached) shows the car moving 50 meters.
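One simple way to quantify this kind of spurious motion is to compute the cumulative path length implied by the pose translations; for a car stopped at a light it should be near zero. A minimal sketch, assuming the pose translations are available as an (N, 3) array (this helper is hypothetical, not part of the dataset devkit):

```python
import numpy as np

def path_length(positions):
    """Total distance traveled along a sequence of (N, 3) positions,
    summing the Euclidean length of each consecutive step."""
    diffs = np.diff(positions, axis=0)
    return float(np.linalg.norm(diffs, axis=1).sum())
```

Sequences whose path length is far larger than the motion visible in the camera images (e.g. ~50 m while stationary, as in sequence 16) could be flagged automatically this way.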