alejocb / dpptam

DPPTAM: Dense Piecewise Planar Tracking and Mapping from a Monocular Sequence
GNU General Public License v3.0

Questions about DPPTAM #19

Open haopo2005 opened 8 years ago

haopo2005 commented 8 years ago

Hi, I'm reading the code and have some questions about technical details in the paper "DPPTAM: Dense Piecewise Planar Tracking and Mapping from a Monocular Sequence". Hope you don't mind. :)

A. How is a keyframe selected? The paper says "The keyframes are selected from the sequence frames using certain heuristics", but I can't find that detail in the code.

Which heuristics do you use? A frame-count measure, a residual-ratio measure, or a pose-based measure? Also, could you help me locate the code that initializes the global reference frame and the keyframes?

B. I cannot get good results with my own image sequences at the moment. Can DPPTAM cope with large textureless areas (e.g. a white wall or the ceiling of a room), whether they are far from or near the camera? How robust is it when the camera moves vertically rather than walking parallel to the floor? I think the correctness of the superpixel segmentation is critical for my case.

sunghoon031 commented 8 years ago

A. In SemiDenseMapping.cpp, when the variable "do_var_mapping" is activated (based on a combination of a frame-count measure and a pose-based measure, i.e. when "num_cameras_mapping" reaches "num_cameras_mapping_th"), it runs the following code:

semidense_mapper->do_initialization_tracking=1;
...
semidense_mapper->do_var_mapping =0;
semidense_mapper->num_cameras_mapping = 0;
semidense_mapper->num_keyframes++;
...

This is how a new keyframe is added. The keyframe poses are computed in SemiDenseTracking.cpp:

images.Im[cont_images]->R = R_cv32fc1.clone();
images.Im[cont_images]->t = t_cv32fc1.clone();
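To make the trigger logic concrete, here is a minimal sketch of a frame-count keyframe heuristic like the one described above. It is not the actual DPPTAM code: the struct layout and the threshold value (9) are assumptions for illustration; only the member names come from the source.

```cpp
#include <cassert>

// Hypothetical sketch of the keyframe trigger in SemiDenseMapping.cpp.
// The member names mirror the real ones; the threshold value is assumed.
struct SemiDenseMapper {
    int num_cameras_mapping = 0;     // frames mapped since last keyframe
    int num_cameras_mapping_th = 9;  // assumed threshold
    int num_keyframes = 0;
    bool do_var_mapping = false;
};

// Called once per mapped frame; when the frame count reaches the
// threshold, the counters reset and the keyframe count increments,
// as in the snippet quoted above.
void on_frame_mapped(SemiDenseMapper &m) {
    ++m.num_cameras_mapping;
    if (m.num_cameras_mapping >= m.num_cameras_mapping_th)
        m.do_var_mapping = true;
    if (m.do_var_mapping) {
        m.do_var_mapping = false;
        m.num_cameras_mapping = 0;
        ++m.num_keyframes;
    }
}
```

In the real code a pose-based measure also feeds into "do_var_mapping"; this sketch only shows the frame-count part.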

B. There is no algorithm that can cope with completely textureless areas. But in DPPTAM, as long as these areas are surrounded by high-gradient boundaries, it can map the planes fairly well. You can test it by creating your own rosbag file; check https://github.com/raulmur/BagFromImages

haopo2005 commented 8 years ago

Thanks for your reply. I have three more questions.

A. How do you deal with point cloud stitching while the camera is moving?

B. I got the segmentation result from your third-party superpixel code, and the superpixel extraction looks good. But after the dense mapping step there are collapsed regions in the final RGBD point cloud; the result isn't strictly dense. Can you explain the last paragraph of V.B (Dense Mapping) in detail? What is the semidense mapping filter, and how does it work?

C. As for creating the rosbag file: I feed dpptam an endless loop of the video image sequence (when the video is over, it restarts from the first frame). The video was recorded in advance as offline data. Maybe wrong timestamps cause dpptam to fail. Is there any switch to control the frequency of dpptam? Can a looped image sequence be supported by dpptam?

sunghoon031 commented 8 years ago

A. See the join_maps function in SemiDenseTracking.cpp.

B. Maybe someone else can answer this question better, since I haven't investigated the dense mapping fully yet.

C. With the link I shared before, you can manually set the fps for the rosbag. If you're worried that dpptam might skip too many frames because it's too slow, then you may wanna try recreating the rosbag file with 1 fps or something. For the real-time application, the timestamp is not doing anything critical in the code. I don't know what you mean by "when the video is over, it will start from the first frame". Do you wanna run the same video several times?
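Regarding answer A, the idea behind a join_maps-style stitch can be sketched as follows. This is not the actual implementation in SemiDenseTracking.cpp (which works on cv::Mat data); the types, the pose convention (x_cam = R * x_world + t, so x_world = R^T * (x_cam - t)), and the assumption that each cloud is stored in its keyframe's camera frame are all illustrative.

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical sketch of point cloud stitching: transform each
// keyframe's points into the world frame and concatenate them.
using Vec3 = std::array<double, 3>;
using Mat3 = std::array<Vec3, 3>;  // row-major 3x3 rotation

struct Keyframe {
    Mat3 R;                  // world-to-camera rotation (assumed convention)
    Vec3 t;                  // world-to-camera translation
    std::vector<Vec3> cloud; // points in the camera frame
};

// x_world = R^T * (x_cam - t): multiply the difference by each column of R.
static Vec3 cam_to_world(const Mat3 &R, const Vec3 &t, const Vec3 &p) {
    Vec3 d{p[0] - t[0], p[1] - t[1], p[2] - t[2]};
    return {R[0][0]*d[0] + R[1][0]*d[1] + R[2][0]*d[2],
            R[0][1]*d[0] + R[1][1]*d[1] + R[2][1]*d[2],
            R[0][2]*d[0] + R[1][2]*d[1] + R[2][2]*d[2]};
}

// Stitch all keyframe clouds into one global cloud.
std::vector<Vec3> join_maps(const std::vector<Keyframe> &keyframes) {
    std::vector<Vec3> global;
    for (const Keyframe &kf : keyframes)
        for (const Vec3 &p : kf.cloud)
            global.push_back(cam_to_world(kf.R, kf.t, p));
    return global;
}
```

Because the keyframe poses are estimated by the tracker, no extra alignment step is needed when the camera moves: each new keyframe's cloud lands in the common world frame by construction.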