tum-vision / lsd_slam

LSD-SLAM

Get the position of camera and pointcloud in the slam-viewer #51

Open EvanceChen opened 9 years ago

EvanceChen commented 9 years ago

Sorry for opening an issue similar to #24, but the question is a little different. My goal is to get the updated camera position and the whole point cloud in order to map them onto a 2-D map, and to publish them as a ROS topic (maybe via ROSOutput3DWrapper::publishTrajectory, but that is still not implemented). That way, other ROS nodes can easily get the result of LSD-SLAM, just like the slam-viewer does.
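For reference, here is a minimal sketch of how the camera position could be published as a standard ROS topic. This is not existing lsd_slam code: the topic name, the use of geometry_msgs::PointStamped, and the exact wrapper integration are assumptions for illustration.

```cpp
// Hedged sketch (not part of lsd_slam): publish the current camera position
// so that other ROS nodes can consume it, e.g. to build a 2-D map.
#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>
#include <sophus/sim3.hpp>   // include path/name may differ between Sophus versions

void publishCamPosition(ros::Publisher& pub, const Sophus::Sim3f& camToWorld)
{
    geometry_msgs::PointStamped msg;
    msg.header.stamp = ros::Time::now();
    msg.header.frame_id = "world";

    // The translation part of camToWorld is the camera position in the
    // (arbitrarily scaled) world frame.
    msg.point.x = camToWorld.translation()[0];
    msg.point.y = camToWorld.translation()[1];
    msg.point.z = camToWorld.translation()[2];

    pub.publish(msg);
}

// Possible usage, e.g. from an output wrapper (topic name is made up):
//   ros::Publisher pub =
//       nh.advertise<geometry_msgs::PointStamped>("lsd_slam/cam_position", 1);
//   publishCamPosition(pub, camToWorld);
```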

So I started working on the slam-viewer code. Here is what I understand so far, though I am not sure it is correct: I think I can get the (x, y, z, intensity) of the point cloud from KeyFrameGraphDisplay::draw() in KeyFrameGraphDisplay.cpp, and the function flushPC() in KeyFrameDisplay handles the positions of the point cloud.
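As a rough sketch of the back-projection involved (the standard pinhole model: one inverse-depth value per pixel turned into a 3D point in the keyframe's local frame), assuming intrinsics fx, fy, cx, cy; the names below are illustrative, not the actual buffer layout in KeyFrameDisplay.cpp:

```cpp
// Hedged sketch: convert one pixel's inverse depth into a keyframe-local
// 3D point with intensity, following the standard pinhole back-projection.
struct Point3D { float x, y, z; unsigned char intensity; };

Point3D backProject(int u, int v, float idepth, unsigned char intensity,
                    float fx, float fy, float cx, float cy)
{
    float depth = 1.0f / idepth;      // inverse depth -> depth
    Point3D p;
    p.x = (u - cx) / fx * depth;      // still in the keyframe's own frame
    p.y = (v - cy) / fy * depth;      // and in the keyframe's own scale
    p.z = depth;
    p.intensity = intensity;
    return p;
}
```

These keyframe-local points are then placed into the world frame by camToWorld, which is where the next paragraph comes in.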

However, I had difficulty understanding how the camera position is processed. I think "camToWorld" in KeyFrameDisplay.cpp may be what I need, but its data structure has stopped me, especially Sim3f, which is the type of "camToWorld".

My questions are:

  1. Is the variable "camToWorld" in KeyFrameDisplay.cpp storing the camera position?
  2. Do I need to understand how the Sophus group classes work? (They seem to involve a lot of linear algebra that I have no clue about.) Is there any recommended document I can read before going further with this code?
  3. Or am I just digging in the wrong direction?

Thanks a lot. Regards.

JakobEngel commented 9 years ago

Yes, you should probably have a look at how 3D pose representations with Lie algebras work. A Sim3 pose contains the translation and rotation of the camera, as well as its scale.
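As a concrete (hedged) sketch of unpacking those components from a Sophus::Sim3f, using the accessors translation(), scale(), and matrix(); the exact names and include path may differ slightly between Sophus versions:

```cpp
// Hedged sketch: read the individual components out of a Sophus::Sim3f,
// as stored in camToWorld in KeyFrameDisplay.cpp.
#include <sophus/sim3.hpp>   // include path may differ between Sophus versions
#include <Eigen/Core>
#include <iostream>

void printSim3(const Sophus::Sim3f& camToWorld)
{
    // Camera position in world coordinates (up to the global monocular scale).
    Eigen::Vector3f t = camToWorld.translation();

    // Per-keyframe scale factor.
    float s = camToWorld.scale();

    // The upper-left 3x3 block of the 4x4 transform is scale * rotation.
    Eigen::Matrix3f R = camToWorld.matrix().topLeftCorner<3, 3>() / s;

    std::cout << "position: " << t.transpose() << "\n"
              << "scale:    " << s << "\n"
              << "rotation:\n" << R << std::endl;
}
```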

QichaoXu commented 9 years ago

Hi,

I have got the (x, y, z, intensity) of the point cloud from KeyFrameGraphDisplay::draw() in KeyFrameGraphDisplay.cpp. The result looks like:

0.486514 -0.521726 1.2571 171
0.495575 -0.524695 1.26373 151
0.501138 -0.526741 1.26836 141
...

But I found that the point cloud coordinates (x, y, z) are proportionally scaled compared to their real-world values. For example, a 10 x 20 x 20 (meter) room comes out with coordinates on the order of only (0.1, 0.2, 0.2).

My questions are:

  1. What should I do if I want to measure the length or width of an object in the real world?
  2. If the point cloud coordinates (x, y, z) are proportionally scaled, where can I find the scale coefficient?
  3. Or are there other possibilities?

Thanks a lot.

ghost commented 8 years ago

I think the keyframe scale coefficient can be found as follows: in KeyFrameGraph::addKeyFrame, add std::cout << camToWorld_estimate.scale() << std::endl; :-)
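A rough sketch of that suggestion (frame->getScaledCamToWorld() and frame->id() are assumed from Frame.h; check the actual member names in your checkout):

```cpp
// Hedged sketch: print the per-keyframe scale whenever a keyframe is added.
// Drop this (plus #include <iostream>) into KeyFrameGraph::addKeyFrame(Frame* frame).
std::cout << "keyframe " << frame->id() << " scale: "
          << frame->getScaledCamToWorld().scale() << std::endl;
```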