liqile opened this issue 9 years ago
I know the reason now. ORB-SLAM uses the velocity estimated from the last two frames, and its influence can persist after a change of direction, so I simply removed the use of the velocity, and it works well.
Hi @liqile, I noticed the same problem. Can you tell me what changes you made?
hi @Gingol
In the source file `Tracking.cc` you can find the function

```cpp
void Tracking::GrabImage(const sensor_msgs::ImageConstPtr& msg)
```

On lines 229~236 you can find the following code:

```cpp
229: if(!mbMotionModel || mpMap->KeyFramesInMap()<4 || mVelocity.empty() || mCurrentFrame.mnId<mnLastRelocFrameId+2)
230:     bOK = TrackPreviousFrame();
231: else
232: {
233:     bOK = TrackWithMotionModel();
234:     if(!bOK)
235:         bOK = TrackPreviousFrame();
236: }
```

What I did was change the code above to the following, so the motion model is never used:

```cpp
bOK = TrackPreviousFrame();
```
@Gingol you can also try combining ORB-SLAM with an IMU; that may deal with the problem as well.
@Gingol do you have QQ?
@liqile 1) I've tried changing the code as you said, but it doesn't work. After a turn, the distances between the camera poses in rviz are very small, or in some cases the camera doesn't seem to move at all. 2) In our project we want to use only the camera. 3) No, I don't have QQ.
@Gingol how many features are you using in your project? I set the feature count to 8000.
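For reference, the feature count isn't hard-coded; it is read from the settings .yaml file that ORB-SLAM loads at startup. Assuming a stock settings file, the relevant ORB extractor keys look roughly like this (shown with liqile's value of 8000):

```yaml
# ORB extractor parameters in the ORB-SLAM settings .yaml
ORBextractor.nFeatures: 8000   # features per image (the sample files ship with 1000)
ORBextractor.scaleFactor: 1.2  # scale factor between pyramid levels
ORBextractor.nLevels: 8        # number of pyramid levels
```

A larger feature budget can help tracking survive fast rotations, at the cost of extra extraction and matching time per frame.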
@liqile I kept the default number 1000. Can you tell me which camera and which lens you are using?
@Gingol 720 x 1280
@Gingol can you give me your ros bag with the camera calibration? li_qi_le@163.com
@Gingol I think you could use the camera on your iPhone.
Hello @liqile, You mentioned combining Orbslam with an IMU. How do you propose to implement that, perhaps with a Kalman filter?
@Oblynx the easiest approach would be to replace the constant-velocity motion model with an IMU motion model.
@versatran01 Can you elaborate on how we can do that? Would metric scale estimation be possible with such an approach?
I am also facing the same problem. Has anyone come up with a solution? I don't have an IMU in my case.
@shabhu18 hi, I think you could try a different camera. I used the camera on a Xiaomi phone, and it worked well.
@mhkabir To start with, you could just integrate the inter-frame gyro readings as the guess of inter-frame rotation. The accelerometer readings are probably too noisy to estimate translation.
@liqile hello, about your comment here:

> in the source file "Tracking.cc", you can find the function `void Tracking::GrabImage(const sensor_msgs::ImageConstPtr& msg)`; on lines 229~236, you can find the following code:
>
> ```cpp
> 229: if(!mbMotionModel || mpMap->KeyFramesInMap()<4 || mVelocity.empty() || mCurrentFrame.mnId<mnLastRelocFrameId+2)
> 230:     bOK = TrackPreviousFrame();
> 231: else
> 232: {
> 233:     bOK = TrackWithMotionModel();
> 234:     if(!bOK)
> 235:         bOK = TrackPreviousFrame();
> 236: }
> ```
>
> what I have done is change the code above to the following: `bOK = TrackPreviousFrame();`

How did you observe that this code was responsible for the model velocity, and how is that velocity determined? Could you please give me some hints? I still couldn't figure out how the scale of the map is calculated. Thanks.
Hi @mhkabir, did you end up looking into changing the velocity motion model? I've experimented with combining ORB_SLAM with MSF, but the results are still inconclusive.
@AlexandreBorowczyk No, I gave up on it. Rolled our own visual-inertial odometry system.
hi @raulmur, thanks for sharing your ORB-SLAM code. Sorry to trouble you, but I have a problem while tracking. I first walk straight towards the north and then turn to the east. ORB-SLAM estimates the angles very well, but the scale of distance changes noticeably after the turn to the east. The walking distances towards north and east are nearly the same, but in rviz the distance to the east is clearly shorter than the distance to the north. Could you give me some suggestions? Thanks and best.