Closed: ank700 closed this issue 7 years ago
Pose is estimated by Tracking in the main thread, the one that consumes the images. It processes an image, updates the pose, and then processes the next image. I don't know if you are dropping the images in between (ORB-SLAM doesn't).
How are you measuring pose update rate?
The Viewer runs in its own thread, so what you see on screen is not synchronized with the Tracking process. But the Viewer doesn't display dropped images: if you see a frame on screen, it was processed by Tracking and its pose was estimated.
The map viewer runs on the Viewer thread, so it is updated every time the frame viewer is updated.
Maybe the map viewer has difficulty accessing the map because it is locked by some other thread adding keyframes and map points.
Hello @AlejandroSilvestri, so the viewer and mapping run in different threads and have no effect on the pose update rate.
I am using the monocular ROS wrapper file in Examples/ORBSLAM2. I publish the pose of the camera from ros_mono.cc and then check the rate at which this pose is published. I thought this should be the correct way to see the update rate of the algorithm.
Ok, I'm not familiar with ros_mono.cc; maybe it's not publishing the pose quickly enough.
Help me on this. I know you send images to the ORB-SLAM2 system through the "/camera/image_raw" topic. How do you retrieve the pose?
Yes, ORB-SLAM2 subscribes to the image topic /camera/image_raw. It processes the image and outputs the pose. I then broadcast this pose message as a transform (tf) from world to camera and check the update rate of the tf broadcast, which is much lower than the camera fps: the camera is capturing images at 30 fps, while the output of ORB-SLAM is about 10 Hz.
I measured the time taken by ORB-SLAM2 (using ros::Time) to process one image (640x480), which is about 0.1 s on an i5 laptop. This means ORB-SLAM2 can process only about 10 images per second. Similarly, on an embedded PC with the same resolution, the average processing time is 0.2 s, and hence the pose update rate is about 5 Hz.
I'm not familiar with ROS. All I can tell you is that when GrabImage is called, it processes and tracks one image, and the function is not available again until all of that ends and the new pose is estimated.
Perhaps the best place to measure performance is in GrabImage, but I believe the measurement will confirm yours: 10 Hz. My guess is that frames are being dropped while GrabImage is busy; your camera frame rate doesn't matter as long as it keeps GrabImage occupied.
I can add that 10 Hz is good performance, and it goes down with higher resolution, more complex images (more keypoints to process), and poor camera calibration.
Yes, GrabImage is probably dropping frames. I checked again by moving a few metres: 10 keyframes were made and the image id of the last keyframe was 46, while the camera had already captured more than 600 frames.
10 Hz is good, but at 5 Hz onboard a copter, the copter will have to move very slowly through the environment.
Thanks a lot for your help @AlejandroSilvestri.
You are welcome.
I'd point out that your last keyframe id being 46 means there were 46 keyframes, but tracking performance is measured in frames per second. currentFrame.id should tell you how many frames were processed.
Mapping is slower than tracking. If it fits your use case, you could make a map first and then track against it with your copter; that can raise your performance. If the fps is still low, remember that tracking does relocalisation too: it can update your position within about 2 seconds.
Good luck with your project.
Oh yes, 105 is the number of frames processed (from currentframe.id), while more than 600 camera frames had been captured; I was running it at 100 camera fps. Also, at higher camera fps, the lag in the ORB-SLAM video stream is visible.
Thanks for the idea. I will definitely try localisation and tracking after a map has been made, check the update rate, and also reduce the camera fps to about 30 or less.
Hello, while running the algorithm in real time at 30 fps, I see that the pose update rate is around 8 Hz. When I increase the camera fps to 160, the pose update rate only increases to about 20 Hz. Can you please explain why this is so?
I understand that processing each frame takes some time, and during this time some images might have already passed. But the video stream in the image viewer looks smooth even when I translate the camera quickly. Also, the difference between the camera fps and the update rate seems too big.