antithing opened this issue 7 years ago
Hi Antithing,
I found these links:
https://www.youtube.com/watch?v=bokmduLlBqs
References pre-init for imu here:
https://github.com/tdnet12434/ORB_SLAM2/tree/imu_preint_init_only?files=1
The whole project, based on the Visual-Inertial ORB-SLAM paper (https://arxiv.org/abs/1610.05949), is here:
https://github.com/jingpang/LearnVIORB
On 2 February 2017 at 21:02, antithing notifications@github.com wrote:
I am looking at integrating an IMU into the system. I don't need it tightly integrated; I just want to pass a starting rotation, so that the initial rotation matches the IMU when the map/tracking is initialized.
If you have time could you please point me in the right direction to start this? Thank you again!
@antithing , at any time you can compare your IMU reading with the current frame pose, determine a correction rotation, and apply it to every mappoint and keyframe. The sooner, the faster.
Initial triangulation is made from two frames; the second one is the current frame. When initial triangulation succeeds, the first of those frames is taken as the origin: the center of the map, with zero rotation. But at that time your actual pose corresponds to the second frame (the current frame), not the map center, and not zero rotation.
Two places to do it:
- Tracking::CreateInitialMap
- Tracking::CreateNewKeyFrame
The first is easier: you have access to both initial keyframes and all mappoints before they go into the map.
The latter lets you apply your rotation correction whenever you want, but you'll need access to the protected sets Map::mspKeyFrames and Map::mspMapPoints, and you'll want to stop the local mapper before applying the correction.
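That whole-map correction can be sketched like this, using plain 4x4 arrays in place of cv::Mat (the helper names are mine, not ORB_SLAM2 API): rotate every map point by the correction Rc and right-multiply every keyframe pose by Rc's inverse, so camera-space coordinates, and hence all reprojection errors, stay unchanged.

```cpp
#include <array>

// Illustrative 4x4 homogeneous-transform helpers (not ORB_SLAM2 API).
using Mat4 = std::array<std::array<double, 4>, 4>;
using Vec4 = std::array<double, 4>;  // homogeneous point, w == 1

Mat4 matmul(const Mat4& A, const Mat4& B) {
    Mat4 C{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

Vec4 apply(const Mat4& T, const Vec4& p) {
    Vec4 q{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            q[i] += T[i][j] * p[j];
    return q;
}

// Inverse of a pure-rotation 4x4: transpose of the 3x3 block.
Mat4 invertRotation(const Mat4& R) {
    Mat4 Ri{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            Ri[i][j] = R[j][i];
    Ri[3][3] = 1.0;
    return Ri;
}

// Apply a correction rotation Rc to the whole map:
// every map point p becomes Rc * p, and every keyframe pose Tcw
// becomes Tcw * Rc^{-1}, so (Tcw * Rc^{-1}) * (Rc * p) == Tcw * p.
Vec4 correctedPoint(const Mat4& Rc, const Vec4& p) { return apply(Rc, p); }
Mat4 correctedPose(const Mat4& Tcw, const Mat4& Rc) {
    return matmul(Tcw, invertRotation(Rc));
}
```

Because each observation's camera-space coordinates are preserved, a correction applied this way only rotates the world frame; nothing else in the map changes.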
@Kirosoft Thank you! I am on Windows, with no ROS, but will take a look at that project to get my head around it.
@AlejandroSilvestri, that sounds fairly simple, have you done this already?
So I just take:
KeyFrame* pKFcur
and replace the rotation value with my IMU matrix, correct?
Does this:
MapPoint* pMP = new MapPoint(worldPos,pKFcur,mpMap);
use that keyframe's rotation to create the mappoints?
Will I need to do this for every new keyframe, or will setting the initial rotation mean that every keyframe created afterwards is based on this initial one?
Thanks again, I very much appreciate your help.
@antithing , that's right. Almost.
I believe this is the perfect point to change the code: https://github.com/raulmur/ORB_SLAM2/blob/master/src/Tracking.cc#L702
You should correct the pKFcur and pKFini poses with KeyFrame::SetPose, and all map point positions with MapPoint::SetWorldPos, or earlier, right at their construction.
pKFini should have the IMU rotation, and pKFcur should preserve the rotation between both keyframes. You must do some matrix products here.
In KeyFrame::SetPose(T), T is a 4x4 Mat: an isometric transformation in homogeneous coordinates. It combines a 3x3 rotation matrix R, a vertical 3x1 translation vector t, and a last row of [0, 0, 0, 1]:

```
T = [ R       | t ]
    [ 0  0  0 | 1 ]
```
T is the "displacement", "isometry", or "rototranslation" that expresses the world in the camera reference system. If I'm not mistaken, X points right, Y points down, and Z points forward.
About map points: CreateInitialMap receives the map point coordinates and assigns them in a for loop when creating each new MapPoint(...). You must rotate these coordinates accordingly to match the IMU before creation, or correct them afterwards with MapPoint::SetWorldPos.
Very sorry if it sounds confusing.
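As a concrete illustration of that layout (plain std::array instead of cv::Mat; helper names are mine, not ORB_SLAM2 API): packing R and t into T, and swapping in a new rotation block while keeping the translation column, which is what the rowRange/colRange copyTo calls in this thread do.

```cpp
#include <array>

using Mat3 = std::array<std::array<double, 3>, 3>;
using Vec3 = std::array<double, 3>;
using Mat4 = std::array<std::array<double, 4>, 4>;

// Pack R (3x3) and t (3x1) into the homogeneous T = [R | t; 0 0 0 1].
Mat4 makeT(const Mat3& R, const Vec3& t) {
    Mat4 T{};
    for (int i = 0; i < 3; ++i) {
        for (int j = 0; j < 3; ++j) T[i][j] = R[i][j];
        T[i][3] = t[i];
    }
    T[3][3] = 1.0;
    return T;
}

// Keep T's translation column, overwrite its rotation block with Rimu:
// the std::array equivalent of cv::Mat::rowRange/colRange + copyTo.
Mat4 replaceRotation(const Mat4& T, const Mat3& Rimu) {
    Mat4 out = T;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            out[i][j] = Rimu[i][j];
    return out;
}
```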
@AlejandroSilvestri , I would like to know more about your naming conventions regarding these matrices. (couldn't find any hints about that in the code).
I understand what T, R and t are. But what do the "wc" or "cw" suffixes in "Twc" or "Rwc" stand for? Some kind of direction of the transformation?
Thanks in advance. Best regards
@diesbot : I believe the two suffixes indicate the direction of the transformation. In the ORB_SLAM2 convention, Tcw transforms points from world coordinates to camera coordinates, and Twc from camera to world. Using the classic pinhole camera model and representing the camera as a point, the absolute camera pose is given by the camera-to-world pair: the camera position is twc = -Rcw' tcw and the camera orientation is Rwc = Rcw'.
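A tiny numeric check of those identities (plain arrays, illustrative helper names, not ORB_SLAM2 code): inverting a world-to-camera pose (Rcw, tcw) yields the camera centre twc = -Rcw' tcw and orientation Rwc = Rcw'.

```cpp
#include <array>
#include <cmath>

// Minimal 3x3 rotation + 3-vector helpers (illustration only).
using Mat3 = std::array<std::array<double, 3>, 3>;
using Vec3 = std::array<double, 3>;

Mat3 transpose(const Mat3& R) {
    Mat3 T{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            T[i][j] = R[j][i];
    return T;
}

Vec3 mul(const Mat3& R, const Vec3& v) {
    Vec3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r[i] += R[i][j] * v[j];
    return r;
}

// Invert the world-to-camera pose (Rcw, tcw) into camera-to-world (Rwc, twc):
// Rwc = Rcw^T, twc = -Rcw^T * tcw  (twc is the camera centre in world coords).
void invertPose(const Mat3& Rcw, const Vec3& tcw, Mat3& Rwc, Vec3& twc) {
    Rwc = transpose(Rcw);
    Vec3 r = mul(Rwc, tcw);
    twc = { -r[0], -r[1], -r[2] };
}
```

Sanity check: mapping the camera centre twc back through (Rcw, tcw) must give the camera-frame origin, i.e. Rcw * twc + tcw = 0.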
@AlejandroSilvestri , sorry to bother you again. I have an IMU running with stereo SLAM, to give an initial rotation value and add stability. But sadly, it has lessened stability instead! The returned camera position now jumps around a lot, suddenly popping a metre to the side and other such things. The initial points and position look good, but as soon as I move the camera I see jumps, rotation flips, and other weirdness. If you have a moment, could you have a look at what I have done and tell me where I am going wrong? Thank you!
In Tracking.cc, I changed void Tracking::StereoInitialization() to:

```cpp
void Tracking::StereoInitialization()
{
    if(mCurrentFrame.N>500)
    {
        // Set Frame pose to the origin
        mCurrentFrame.SetPose(cv::Mat::eye(4,4,CV_32F)); //BEN commented to stop the reset back to zero

        // Create KeyFrame
        KeyFrame* pKFini = new KeyFrame(mCurrentFrame,mpMap,mpKeyFrameDB);

        //IMU
        //get current pose
        cv::Mat tCurrent = pKFini->GetPose();
        //strip translation component
        cv::Mat tcw = tCurrent.rowRange(0, 3).col(3);
        //create output Mat
        cv::Mat imuPoseOut = cv::Mat::eye(4, 4, CV_32F);
        //Copy IMU matrix in
        imuMatrix = trackImu->quatsImu.toRotationMatrix();
        cvImuMatrix = Converter::toCvMat(imuMatrix);
        cvImuMatrix.copyTo(imuPoseOut.rowRange(0, 3).colRange(0, 3));
        //Copy initial translation back.
        tcw.copyTo(imuPoseOut.rowRange(0, 3).col(3));
        //set keyframe matrix
        pKFini->SetPose(imuPoseOut);
        //IMU

        // Insert KeyFrame in the map
        mpMap->AddKeyFrame(pKFini);

        // Create MapPoints and associate to KeyFrame
        for(int i=0; i<mCurrentFrame.N;i++)
        {
            float z = mCurrentFrame.mvDepth[i];
            if(z>0)
            {
                cv::Mat x3D = mCurrentFrame.UnprojectStereo(i);

                //IMU
                //rotate point position by imu quaternion
                Eigen::Vector3f vPos;
                vPos.x() = x3D.at<float>(0, 0);
                vPos.y() = x3D.at<float>(1, 0);
                vPos.z() = x3D.at<float>(2, 0);
                Eigen::Vector3f vPosFix = Converter::rotateVectByQuat(vPos, trackImu->quatsImu);
                float tmpPos[3] = { vPosFix.x(),vPosFix.y(),vPosFix.z() };
                cv::Mat x3Dfixed = cv::Mat(3,1, CV_32F, tmpPos);
                //IMU

                //add point
                MapPoint* pNewMP = new MapPoint(x3Dfixed, pKFini, mpMap);
                // MapPoint* pNewMP = new MapPoint(x3D, pKFini, mpMap);
                pNewMP->AddObservation(pKFini,i);
                pKFini->AddMapPoint(pNewMP,i);
                pNewMP->ComputeDistinctiveDescriptors();
                pNewMP->UpdateNormalAndDepth();
                mpMap->AddMapPoint(pNewMP);

                mCurrentFrame.mvpMapPoints[i]=pNewMP;
            }
        }

        cout << "New map created with " << mpMap->MapPointsInMap() << " points" << endl;

        mpLocalMapper->InsertKeyFrame(pKFini);

        // ... (rest of the function unchanged)
    }
}
```
And in void Tracking::CreateNewKeyFrame() I added:

```cpp
if(bCreateNew)
{
    cv::Mat x3D = mCurrentFrame.UnprojectStereo(i);

    //IMU
    //rotate point position by imu quaternion
    Eigen::Vector3f vPos;
    vPos.x() = x3D.at<float>(0, 0);
    vPos.y() = x3D.at<float>(1, 0);
    vPos.z() = x3D.at<float>(2, 0);
    Eigen::Vector3f vPosFix = Converter::rotateVectByQuat(vPos, trackImu->quatsImu);
    float tmpPos[3] = { vPosFix.x(),vPosFix.y(),vPosFix.z() };
    cv::Mat x3Dfixed = cv::Mat(3, 1, CV_32F, tmpPos);
    //IMU

    // MapPoint* pNewMP = new MapPoint(x3D, pKF, mpMap);
    MapPoint* pNewMP = new MapPoint(x3Dfixed,pKF,mpMap);
    // ...
}
```
Please enlighten me.
Are you trying to set an initial rotation, or are you trying to apply many rotations while mapping?
To set an initial rotation (the main topic of this issue), you only need to set it once, during map initialization, or, with more difficulty, at some arbitrary later keyframe. Essentially you rotate the whole map (keyframes and mappoints). Because monocular visual SLAM has no reference (no origin, no scale, no rotation), you can arbitrarily assign one.
This is very different from applying the IMU rotation to many (or every) keyframes, because then you need a way to fuse ORB-SLAM2's visual measurement of rotation with the IMU rotation, and you'll need some filter like a Kalman filter; I believe that would be a huge amount of work, like a new paper.
The visual measurement of rotation is far more accurate than the IMU's. In the long run, the IMU can compensate for drift (with a filter).
Hi, thank you for your reply. I just want to set an initial rotation, so that the rotation of the point cloud, and the translation vector of the camera pose, are aligned to the IMU rotation. (This is with stereo ORB_SLAM2.)
If I set it at initialization, will all the points created afterwards, and the keyframes, keep this rotation? Is the way I have done it above correct? Thanks again for your time.
I'm not familiar with the stereo code, but I believe it's the same idea.
You must update mCurrentFrame's pose too. And I think it would be better to ensure the initial frame's and last frame's poses are consistent.
If you do this in initialization, there is no need to do it again in CreateNewKeyFrame; doing both would probably double your rotation.
Hi everyone, I am facing exactly the same issue as @antithing. I have made the modifications, especially the StereoInitialization() part, and in the debug window (point cloud map) I can clearly see that the first frame is drawn pitched and rolled correctly with the initial values from my IMU, BUT the following frames do not follow this pKFini frame.
Any idea?
Hi. I got this working fine. You need to make the adjustments in my code above ONLY at void Tracking::StereoInitialization(). That should rotate the initial keyframe and the initial map.
Thank you very much antithing 👍.
@AlejandroSilvestri I am still unclear about how this works.
@antithing , that's right. Almost.
I believe this is the perfect point to change the code: https://github.com/raulmur/ORB_SLAM2/blob/master/src/Tracking.cc#L702
You should correct pKFcur and pKFini poses with KeyFrame::SetPose, and all map points positions with MapPoint::SetWorldPos or before in its very construction.
pKFcur should have the IMU rotation, and pKFini should preserve the rotation between both keyframes. You must do some matrix products here.
In KeyFrame::SetPose(T), T is a 4x4 Mat: an isometric transformation in homogeneous coordinates. It combines a 3x3 rotation matrix R, a vertical 3x1 translation vector t, and a last row of [0, 0, 0, 1]:

```
T = [ R       | t ]
    [ 0  0  0 | 1 ]
```

T is the "displacement", "isometry", or "rototranslation" that expresses the world in the camera reference system. If I'm not mistaken, X points right, Y points down, and Z points forward.
About map points: CreateInitialMap receives the map point coordinates and assigns them in a for loop when creating each new MapPoint(...). You must rotate these coordinates accordingly to match the IMU before creation, or correct them afterwards with MapPoint::SetWorldPos.
Very sorry if it sounds confusing.
Referring to your remark, "pKFcur should have the IMU rotation, and pKFini should preserve the rotation between both keyframes. You must do some matrix products here.", should it not be the opposite? That pKFini should have the IMU rotation, and pKFcur should preserve the rotation between both keyframes?
In general, from what I understand, if Twc (i.e., camera-to-world) for the initial pose is known, Tcw_initial can be calculated and assigned to pKFini. Then in https://github.com/raulmur/ORB_SLAM2/blob/f2e6f51cdc8d067655d90a78c06261378e07e8f3/src/Tracking.cc#L630 it should be mCurrentFrame.SetPose(Tcw_new);
where Tcw_new is computed using some matrix multiplications.
I may be missing something fundamental here. Looking forward to your remarks.
@hemangchawla
You are right, my mistake, already edited my former answer. Thank you.
@AlejandroSilvestri @hemangchawla Hello, excuse me, I'm new to ORB-SLAM and also need to give an IMU initial rotation to ORB-SLAM (mono). Could you explain more precisely the "mCurrentFrame.SetPose(Tcw_new)" step, and how Tcw_new is computed using matrix multiplications?
My approach: in void Tracking::MonocularInitialization(), for the mCurrentFrame.SetPose(Tcw_new) mentioned in @hemangchawla's answer, is Tcw_new the original Tcw multiplied by the IMU rotation matrix? I'm not sure whether that is right.
Also, after doing this in Tracking::MonocularInitialization(), is the point cloud handling in CreateInitialMap still needed?
Thank you!
@Che-Cheng-Chang
1) About SetPose(Tcw_new): you are right, if you are careful. Let's say you want to SetPose(Tcg), where

Tcg = Tcw * Twg

So your IMU rotation must be in the form Twg. If you have Tgw, you must invert it. Also watch out for the axes orientation.
2) Point cloud: I'm not sure I understood your question. The points created during MonocularInitialization() are the initial map, so they will be needed forever.
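That composition and inversion can be sketched with plain 4x4 arrays (helper names are illustrative, not ORB_SLAM2 code; g denotes the IMU/gravity frame as above): if the IMU reports Tgw, invert it to Twg before multiplying.

```cpp
#include <array>

using Mat4 = std::array<std::array<double, 4>, 4>;

Mat4 matmul(const Mat4& A, const Mat4& B) {
    Mat4 C{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

// Invert a rigid transform T = [R | t; 0 0 0 1]:
// the inverse is [R' | -R' t; 0 0 0 1].
Mat4 invertRigid(const Mat4& T) {
    Mat4 I{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            I[i][j] = T[j][i];                 // R'
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            I[i][3] -= T[j][i] * T[j][3];      // -R' t
    I[3][3] = 1.0;
    return I;
}

// Tcg = Tcw * Twg, where Twg = inverse(Tgw) if the IMU reports Tgw.
Mat4 composeTcg(const Mat4& Tcw, const Mat4& Tgw) {
    return matmul(Tcw, invertRigid(Tgw));
}
```

A quick sanity check for the inversion is that T * inverse(T) must be the identity.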
@AlejandroSilvestri Thank you for you reply!
```cpp
void Tracking::CreateInitialMapMonocular()
{
    // Create KeyFrames
    KeyFrame* pKFini = new KeyFrame(mInitialFrame,mpMap,mpKeyFrameDB);

    // CORE of IMU initialization (set IMU rotation on pKFini)
    cv::Mat Rcw_imu = eulerAnglesToRotationMatrix(theta); // theta is the euler angle from the IMU sensor
    cv::Mat iniPose = pKFini->GetPose();
    cv::Mat tcw_imu = iniPose.rowRange(0, 3).col(3);
    cv::Mat imuPoseOut = cv::Mat::eye(4, 4, CV_32F);
    Rcw_imu.copyTo(imuPoseOut.rowRange(0, 3).colRange(0, 3));
    tcw_imu.copyTo(imuPoseOut.rowRange(0, 3).col(3));
    pKFini->SetPose(imuPoseOut);
    // imu quaternion
    Quaternionf quatsImu(imu_rot); // imu_rot is also the rotation matrix from the IMU sensor
    // END of IMU initialization

    KeyFrame* pKFcur = new KeyFrame(mCurrentFrame,mpMap,mpKeyFrameDB);

    pKFini->ComputeBoW();
    pKFcur->ComputeBoW();

    // Insert KFs in the map
    mpMap->AddKeyFrame(pKFini);
    mpMap->AddKeyFrame(pKFcur);

    // Create MapPoints and associate to keyframes
    for(size_t i=0; i<mvIniMatches.size();i++)
    {
        if(mvIniMatches[i]<0)
            continue;

        //Create MapPoint.
        cv::Mat worldPos(mvIniP3D[i]);

        // IMU (My second question)
        Eigen::Vector3f vPos = Eigen::Vector3f(worldPos.at<float>(0), worldPos.at<float>(1), worldPos.at<float>(2));
        Eigen::Vector3f vPosFix = quatsImu*vPos; // transform each point cloud coordinate
        float tmpPos[3] = { vPosFix.x(),vPosFix.y(),vPosFix.z() };
        cv::Mat x3Dfixed = cv::Mat(3,1, CV_32F, tmpPos);
        // MapPoint* pMP = new MapPoint(worldPos,pKFcur,mpMap);
        MapPoint* pMP = new MapPoint(x3Dfixed,pKFcur,mpMap);
        // END of IMU (My second question)

        pKFini->AddMapPoint(pMP,i);
        pKFcur->AddMapPoint(pMP,mvIniMatches[i]);

        pMP->AddObservation(pKFini,i);
        pMP->AddObservation(pKFcur,mvIniMatches[i]);

        pMP->ComputeDistinctiveDescriptors();
        pMP->UpdateNormalAndDepth();

        //Fill Current Frame structure
        mCurrentFrame.mvpMapPoints[mvIniMatches[i]] = pMP;
        mCurrentFrame.mvbOutlier[mvIniMatches[i]] = false;

        //Add to Map
        mpMap->AddMapPoint(pMP);
    }
    // ...
}
```
Very sorry if it sounds confusing. My second question is about the "IMU (My second question)" part: assuming the revision from my first question is done, is the point cloud coordinate transform by the IMU rotation correct when running the CreateInitialMapMonocular() function?
Thank you again, and sorry for bothering you.
Hi, I have many questions about ORB-SLAM2. One of them: in "pNewMP->AddObservation(pKFini, i);", "i" changes with the loop, but "pKFini" does not, so "mObservations[pKF] = idx" only ever has pKFini as a key. Is this understanding right? Also, in the stereo case, on first entering TrackReferenceKeyFrame/SearchByBoW, where does the program calculate "pKF->mFeatVec"?
@antithing
I'm so sorry I missed your last question a year ago.
I can see you are rotating the initial map points. That's fine if imu_rot is coherent with the ORB-SLAM2 space (same axis order). You know, there are 48 ways to define a 3D rotation, mixing axes and directions of rotation, and you must use the same one everywhere.
It appears to me you are missing the keyframes' rotation.
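That convention trap is easy to demonstrate numerically: the same two elementary rotations composed in a different order already give a different result. A minimal sketch (illustrative helpers, not ORB_SLAM2 code):

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

Mat3 matmul(const Mat3& A, const Mat3& B) {
    Mat3 C{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

// Elementary rotation about X by angle a (radians).
Mat3 rotX(double a) {
    return { 1, 0, 0,
             0, std::cos(a), -std::sin(a),
             0, std::sin(a),  std::cos(a) };
}

// Elementary rotation about Y by angle a (radians).
Mat3 rotY(double a) {
    return {  std::cos(a), 0, std::sin(a),
              0, 1, 0,
             -std::sin(a), 0, std::cos(a) };
}
```

With the same angles, rotX(a) * rotY(b) and rotY(b) * rotX(a) are different matrices, which is why the IMU's Euler convention (axis order, direction, intrinsic vs. extrinsic) must match the one you assume when converting to a rotation matrix.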
@AlejandroSilvestri maybe you can look at my questions. Please help me. My English is very poor; I hope you can understand me.
@AlejandroSilvestri This question was meant for you; I tagged the wrong person before, because I am a beginner. The questions are the same as in my comment above, about "pNewMP->AddObservation(pKFini, i);" and about where "pKF->mFeatVec" is calculated.
Hi @mathuse ,
Each MapPoint has observations, which are the keyframes observing it. The initial map consists of two keyframes and some mappoints, each of them with two observations. The loop you mentioned adds pKFini as an observation to each mappoint.
During mapping, new keyframes are created and mappoints receive new observations.
mFeatVec is calculated when a keyframe is created, and sometimes on the current frame, when a relocalization takes place.
@AlejandroSilvestri Thank you very much. I think I need some time to understand it. Maybe I will have more questions sooner or later. Thanks again.
@AlejandroSilvestri Here are four questions I need to consult you about.
@mathuse
It would be better to refer to the code with permalinks to code lines.
Guessing:
1- ComputeBoW is not cheap; it is a heavy process, often delayed until needed. Frames usually don't need BoW, which is why it is not computed before initialization.
3- Why would you delete new mappoints? You need them; they are part of the map you are building. The map will grow as you explore new areas. Redundant mappoints and keyframes are eliminated in the culling step.
4- Visual SLAM, like homography, epipolar methods, and any geometric calculation, benefits from more features only if they are evenly distributed over the image. There is no benefit in many features packed together in one place. FAST doesn't guarantee distribution; its parameter is a threshold. The grid limits the number of features in each cell, and repeats FAST with a more tolerant threshold when a cell has too few features. The other benefit is that you get the features already distributed into cells.
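The bucketing part of that grid idea can be sketched as follows (a hypothetical simplification, not the actual ORBextractor code; the FAST retry with a lower threshold for sparse cells is omitted): split the image into cells and keep at most a fixed number of the strongest-response keypoints per cell.

```cpp
#include <vector>
#include <algorithm>
#include <cstddef>

// Hypothetical keypoint type; in ORB_SLAM2 this role is played by cv::KeyPoint.
struct KeyPt { float x, y, response; };

// Distribute keypoints into a cellsX x cellsY grid over a width x height
// image (points assumed inside [0,width) x [0,height)), keeping at most
// maxPerCell strongest responses per cell.
std::vector<KeyPt> distributeToGrid(const std::vector<KeyPt>& pts,
                                    float width, float height,
                                    int cellsX, int cellsY,
                                    std::size_t maxPerCell) {
    std::vector<std::vector<KeyPt>> cells(cellsX * cellsY);
    for (const KeyPt& p : pts) {
        int cx = std::min(cellsX - 1, static_cast<int>(p.x / width  * cellsX));
        int cy = std::min(cellsY - 1, static_cast<int>(p.y / height * cellsY));
        cells[cy * cellsX + cx].push_back(p);
    }
    std::vector<KeyPt> out;
    for (auto& c : cells) {
        // Strongest responses first; truncate to the per-cell cap.
        std::sort(c.begin(), c.end(),
                  [](const KeyPt& a, const KeyPt& b) { return a.response > b.response; });
        if (c.size() > maxPerCell) c.resize(maxPerCell);
        out.insert(out.end(), c.begin(), c.end());
    }
    return out;
}
```

A cluster of keypoints packed into one corner thus contributes at most maxPerCell features, while the per-cell structure is exactly the "features already distributed into cells" benefit mentioned above.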
@AlejandroSilvestri thanks for your reply. 1. Without ComputeBoW, SearchByBoW cannot run in TrackReferenceKeyFrame the first time it is entered after initialization completes. I'm learning the source code now: finishing TrackReferenceKeyFrame and then entering TrackLocalMap. 3. My C++ is not very good; I thought "new" and "delete" need to be paired in the same scope, and after all "pNewMP" is a local temporary variable. 4. I can't understand "the other benefit is that you can get the features already distributed into cells".
Led by the IMU rotation matrix: can you show me how you get your initial pose from the IMU data (a_t, w_t, timestamp)?