HKUST-Aerial-Robotics / open_quadtree_mapping

This is a monocular dense mapping system corresponding to IROS 2018 "Quadtree-accelerated Real-time Monocular Dense Mapping"
GNU General Public License v3.0

I want to test my own package, but the system doesn't seem to work. #7

Closed TianQi-777 closed 6 years ago

TianQi-777 commented 6 years ago

Hello, I have an autonomous-driving dataset. I obtained a series of poses with stereo DSO, then put the poses and the raw images into the same data structure as yours. The rosbag info output is:

path:        DsoPose_grayImg.bag
version:     2.0
duration:    2:32s (152s)
start:       Jul 23 2018 11:31:59.97 (1532316719.97)
end:         Jul 23 2018 11:34:32.58 (1532316872.58)
size:        4.1 GB
messages:    5141
compression: none [4833/4833 chunks]
types:       geometry_msgs/PoseStamped [d3812c3cbc69362b77dc0b19b345f8f5]
             sensor_msgs/Image         [060021388200f6f0f447d0fcd9c64743]
topics:      /mv_25001498/image_raw        4832 msgs : sensor_msgs/Image
             /vins_estimator/camera_pose    309 msgs : geometry_msgs/PoseStamped
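(For reference, a minimal sketch to confirm the per-topic message counts and average rates of such a bag; it is not part of open_quadtree_mapping and assumes the standard rosbag Python API and the topic names listed above.)

```python
#!/usr/bin/env python
# Sketch: report message count and approximate average rate per topic.
import rosbag

BAG_PATH = "DsoPose_grayImg.bag"  # the bag described above
TOPICS = ["/mv_25001498/image_raw", "/vins_estimator/camera_pose"]

with rosbag.Bag(BAG_PATH) as bag:
    duration = bag.get_end_time() - bag.get_start_time()
    info = bag.get_type_and_topic_info().topics
    for topic in TOPICS:
        count = info[topic].message_count if topic in info else 0
        rate = count / duration if duration > 0 else float("nan")
        print("%s: %d msgs, ~%.2f Hz" % (topic, count, rate))
```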

I changed fx, fy, cx, cy, k1, k2, ... in the example.launch file accordingly.
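(A small sanity check, not from this repository: undistort one exported frame with the same intrinsics/distortion that go into example.launch, to confirm the numbers are plausible. The fx/fy/cx/cy values are the ones printed by the node below; the distortion coefficients and file names are placeholders. Assumes OpenCV's Python bindings.)

```python
#!/usr/bin/env python
# Sketch: undistort a single frame with the intrinsics used in example.launch.
import cv2
import numpy as np

fx, fy, cx, cy = 1286.568604, 1286.568604, 634.927002, 404.273163
k1, k2, p1, p2 = -0.1, 0.05, 0.0, 0.0   # placeholder distortion values

K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])
dist = np.array([k1, k2, p1, p2])

img = cv2.imread("sample_frame.png", cv2.IMREAD_GRAYSCALE)  # one exported frame
undistorted = cv2.undistort(img, K, dist)
cv2.imwrite("sample_frame_undistorted.png", undistorted)
```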

The results are as follows:

open_quadtree_mapping_node
Checking available CUDA-capable devices...
1 CUDA-capable GPU detected:
Device 0 - GeForce 940MX
Using GPU device 0: "GeForce 940MX" with compute capability 5.0
GPU device 0 has 3 Multi-Processors, SM 5.0 compute capabilities

read : width 1280 height 720
has success set cuda remap.
remap_2
initial the seed (1280 x 720) fx: 1286.568604, fy: 1286.568604, cx: 634.927002, cy: 404.273163.
initial the publisher !

cuda prepare the image cost 8.929000 ms
till add image cost 6.984000 ms
initialize keyframe cost 3.597000 ms
depthimage min max of the depth: 0 , 0
INFO: publishing depth map

cuda prepare the image cost 6.542000 ms
till add image cost 2.190000 ms
till all semidense cost 40.039000 ms
till all full dense cost 0.001000 ms
till all end cost 0.538000 ms

It seems that only two frames produced output. Is there a problem with my setup? Or is the algorithm currently unable to handle the road scenes of autonomous driving? Is the scene moving too fast to find matches?

WANG-KX commented 6 years ago

There are many factors that can affect the quality, and I cannot tell what happened in your dataset. Driving scenes are typically difficult for monocular depth estimation since the disparity is not large.

I tried the KITTI dataset. The result is not very dense, but it is better than "only two frames have output results". Please check it yourself, or send me a link to your bag.

TianQi-777 commented 6 years ago

This link is to my package; if you have time, you can test it. I have written all the related instructions in the files. Thank you very much.

WANG-KX commented 6 years ago

I have taken a look at your bag, and I am not surprised that the method does not work well.

First, the frame rate is too low: the image-pose pairs come at slightly above 1 Hz. Both the mapping and VINS rely on high-frame-rate input data; in my usage, I use 30 Hz images and a 200 Hz IMU to get 10 Hz pose estimation.

Second, the timestamps of the PoseStamped and Image messages are not synchronized. I don't know where the stamps get changed in your system, but please fix it (a quick check is sketched below).

Third, the driving scene is hard for traditional methods that use only multi-view constraints: the road is textureless and the disparity is small.

Overall, the first issue is the most important one to address. If you can provide more complete data, I would like to check it. I hope you get better results.
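(A rough diagnostic sketch for the second point; it is my own illustration, not code from this repository, and it assumes the standard rosbag Python API and the topic names from the bag above. For each pose it reports the offset to the nearest image header stamp.)

```python
#!/usr/bin/env python
# Sketch: measure how far each PoseStamped stamp is from the nearest Image stamp.
import rosbag

BAG_PATH = "DsoPose_grayImg.bag"
IMAGE_TOPIC = "/mv_25001498/image_raw"
POSE_TOPIC = "/vins_estimator/camera_pose"

with rosbag.Bag(BAG_PATH) as bag:
    image_stamps = [msg.header.stamp.to_sec()
                    for _, msg, _ in bag.read_messages(topics=[IMAGE_TOPIC])]
    pose_stamps = [msg.header.stamp.to_sec()
                   for _, msg, _ in bag.read_messages(topics=[POSE_TOPIC])]

offsets = [min(abs(p - i) for i in image_stamps) for p in pose_stamps]
print("pose msgs: %d, image msgs: %d" % (len(pose_stamps), len(image_stamps)))
print("nearest-image offset: mean %.3f s, max %.3f s"
      % (sum(offsets) / len(offsets), max(offsets)))
```

If the reported offsets are much larger than one frame interval, the stamps need to be fixed before the mapper can associate poses with images.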

TianQi-777 commented 6 years ago

This dataset does have a low frame rate. Anyway, thank you very much for your reply and verification :)