Thanks for the questions.
`pSLAM->TrackStereo(imLeft, imRight, tImLeft, vImuMeas)` returns the camera pose from the Tracking thread of ORB-SLAM3. However, this camera pose can be further refined by other threads within ORB-SLAM3 (Local Mapping, Loop Closing, etc.). As such, `pSLAM->GetCamTwc()` gives you the latest optimized result, whereas `pSLAM->TrackStereo()` gives the first result in the SLAM pipeline.
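For instance, the difference shows up if you query both in the stereo callback. This is only a minimal sketch, assuming the `Sophus::SE3f` return types of recent ORB-SLAM3 releases and the wrapper's `pSLAM` pointer; the variable names match the snippet above:

```cpp
// Pose straight out of the Tracking thread (first estimate in the pipeline):
Sophus::SE3f Tcw_tracking = pSLAM->TrackStereo(imLeft, imRight, tImLeft, vImuMeas);

// Latest pose, possibly already refined by Local Mapping / Loop Closing:
Sophus::SE3f Twc_latest = pSLAM->GetCamTwc();
```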
As for the inversion: `mCurrentFrame.GetPose()` returns Tcw (the world frame expressed in the camera frame), whereas for ROS we want the camera frame in the world (i.e., we need Twc, the inverse of Tcw). This distinction is due to the way visual odometry works: the first camera frame is the origin of the map, and everything is modeled from the camera's perspective.
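As a concrete illustration (a sketch assuming the Sophus types used by recent ORB-SLAM3 releases, not the wrapper's exact code), the frame convention works out like this:

```cpp
// Tcw maps world points into the camera frame: p_c = Tcw * p_w.
// A ROS pose message wants the camera expressed in the world/map frame,
// i.e. the inverse transform Twc = Tcw^-1, so that p_w = Twc * p_c.
Sophus::SE3f Tcw = mCurrentFrame.GetPose();
Sophus::SE3f Twc = Tcw.inverse();  // camera pose in the world frame, ready to publish
```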
The BAD LOOP error is something I also don't know much about. You can find more info about it here: https://github.com/UZ-SLAMLab/ORB_SLAM3/issues/394. Basically, there are some "magic numbers" used in ORB-SLAM3 with regard to loop detection, and I have no idea how to interpret them either. Hope this helps.
It's a great honor to get your answers to my questions again. I am calibrating my camera and IMU while waiting for your reply (I find that every time I'm waiting for your reply, I end up trying things ahead of the suggestions you haven't given yet, lol). In addition, I noticed another issue: I read somewhere that "when actually using the BoW model, it is best generated in a similar environment". I think this may also be a reason why the algorithm performs worse in real-world use than in tests on the public datasets. Of course, this is just my guess, but I think the BoW vocabulary is best trained on a large amount of data from your actual application scenario, so as to get the best results.
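If one wanted to experiment with that idea, a custom vocabulary could be trained with the DBoW2 library that ORB-SLAM3 builds on. The sketch below follows the public DBoW2 demo; `OrbVocabulary` and the branching-factor/depth values come from that demo, and `TrainVocabulary` plus the image list are hypothetical names for illustration:

```cpp
#include <vector>
#include <opencv2/features2d.hpp>
#include "DBoW2/DBoW2.h"  // provides OrbVocabulary (ORB/FORB descriptors)

// Sketch: train a BoW vocabulary from images of the target environment.
OrbVocabulary TrainVocabulary(const std::vector<cv::Mat>& images)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<std::vector<cv::Mat>> features;  // one descriptor set per image

    for (const cv::Mat& im : images) {
        std::vector<cv::KeyPoint> keypoints;
        cv::Mat descriptors;
        orb->detectAndCompute(im, cv::Mat(), keypoints, descriptors);

        // DBoW2 expects each image's descriptors as a vector of single-row Mats
        features.emplace_back();
        for (int r = 0; r < descriptors.rows; ++r)
            features.back().push_back(descriptors.row(r));
    }

    // 10 branches, 5 levels, TF-IDF weighting, L1 scoring (DBoW2 demo defaults)
    OrbVocabulary voc(10, 5, DBoW2::TF_IDF, DBoW2::L1_NORM);
    voc.create(features);
    return voc;
}
```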
I have the following questions from reading your source code: the return value of `pSLAM->TrackStereo(imLeft, imRight, tImLeft, vImuMeas)` is published as the localization information, yet in `pSLAM->GetCamTwc()` I noticed that the returned value is the inverse of the current frame's pose, i.e. `return (mCurrentFrame.GetPose()).inverse()`. I want to know why this inversion is there. Is there any special need for it in the follow-up processing?