raulmur / ORB_SLAM2

Real-Time SLAM for Monocular, Stereo and RGB-D Cameras, with Loop Detection and Relocalization Capabilities

the camera's trajectory #262

Open voladorfly opened 7 years ago

voladorfly commented 7 years ago

When moving a handheld camera, the camera's trajectory easily sticks together, and then the estimate just rotates in one spot. How can I solve this?

AlejandroSilvestri commented 7 years ago

You can't. ORB-SLAM2 can't map on pure rotations. You must translate while rotating.

voladorfly commented 7 years ago

I do translate while rotating, but the keyframes still tend to stick together.

AlejandroSilvestri commented 7 years ago

If you are monitoring orb-slam "live", you can see the number of tracked points; don't let it drop below 30, because that is where orb-slam loses track.

You can translate laterally in order to "make" more keyframes with parallax, which is what orb-slam needs to triangulate new points.

And if you move slower, you give local mapping enough time to finish processing new points and add them to the map: you can see them popping in all together in groups.

hg-wang2015 commented 7 years ago

I got the same problem, but when I use the Kinect 1 sensor, the problem never appears. So I think it is caused by the Xtion sensor, not the ORB_SLAM2 program; the probable reason is an incorrect launch method for the Xtion sensor, so it cannot send correct data to the ORB node. But so far I cannot solve this problem. If you did, please tell me. Thanks!

Morpheusai commented 7 years ago

Hi, I'm using a monocular camera and I want the trajectory of every frame. ORB_SLAM and ORB_SLAM2 don't output the pose of each frame, only of the keyframes. Please tell me why? Thanks a lot.

AlejandroSilvestri commented 7 years ago

@Morpheusai

You can get the actual frame pose from

Tracking.mCurrentFrame.mTcw

I believe in main you can get

SLAM.mpTracker->mCurrentFrame.mTcw

ichlubna commented 3 years ago

@AlejandroSilvestri thank you for the information. So do I understand correctly that pure camera tilt and pan movements are not detectable with ORB_SLAM2 at all?

ichlubna commented 3 years ago

@AlejandroSilvestri Also thanks for the idea with the current pose but it doesn't seem to be working. The mTcw is empty and basically contains the same thing as the result of TrackMonocular(). This code:

std::cerr << SLAM.TrackMonocular(im,tframe) << std::endl;
std::cerr << SLAM.getTracker()->mCurrentFrame.mTcw << std::endl; //the same what you suggested, just made a getter

Prints the same result, providing pose matrix only for keyframes. Related to #436

AlejandroSilvestri commented 3 years ago

@ichlubna,

TrackMonocular() returns mTcw; they are the same thing, the Tcw transform matrix. The system must be tracking or mapping, meaning it must have been initialized and not be lost; then Tcw will be a valid 4x4 Euclidean transform. Remember Tcw is the world pose in the camera reference. You need its inverse Twc, the camera pose in the world reference.

AlejandroSilvestri commented 3 years ago

> @AlejandroSilvestri thank you for the information. So do I understand correctly that pure camera tilt and pan movements are not detectable with ORB_SLAM2 at all?

All movements are detectable in tracking mode, provided you stay inside the map. While mapping new areas, ORB-SLAM2, like any monocular SLAM, can't map on pure pan and tilt rotations. These systems need a camera displacement in order to get enough parallax to triangulate new map points. So you should rotate while translating.

ichlubna commented 3 years ago

@AlejandroSilvestri I see, thanks for the answers. The problem with the matrix is that Tcw is empty for most of the frames. So what you're saying is that the matrix won't be available before initialization? My use case is an unknown video sequence where I need to get camera positions while streaming new frames. Is ORB_SLAM2 able to do that?

AlejandroSilvestri commented 3 years ago

@ichlubna ,

That's right. The main problem with an offline video sequence is the visual SLAM initialization: ORB-SLAM2 doesn't initialize on just any sequence, and some camera moves are known to be better for initialization.

ichlubna commented 3 years ago

@AlejandroSilvestri Many thanks for the clarification!

ichlubna commented 3 years ago

> @ichlubna ,
>
> That's right. The main problem with an offline video sequence is the visual SLAM initialization: ORB-SLAM2 doesn't initialize on just any sequence, and some camera moves are known to be better for initialization.

I'd like to ask one more thing if you don't mind. Is it possible to retrieve the earlier camera positions after initialization, or do I have to feed the images in again?

AlejandroSilvestri commented 3 years ago

@ichlubna

No, before initialization no camera pose was computed.

I usually reverse the video to process those images against the map and get their poses.

ichlubna commented 3 years ago

@AlejandroSilvestri Thank you very much!