Hi,
Most of the preprocessing was done using internal tools at Google, but as a rule of thumb, it usually works well to use Mask R-CNN to segment the foreground object and then run COLMAP with the segmented masks.
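For reference, a minimal sketch of what that pipeline might look like, assuming COLMAP is on your PATH; `frames/` and `masks/` are placeholder directories, and the masks follow COLMAP's convention of one `<image_filename>.png` per frame where zero-valued pixels are excluded from feature extraction:

```python
# Minimal sketch: run COLMAP on segmented frames. "frames" and "masks"
# are placeholder paths; masks are "<image_filename>.png" files whose
# zero pixels COLMAP ignores during feature extraction.
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

run(["colmap", "feature_extractor",
     "--database_path", "colmap.db",
     "--image_path", "frames",
     "--ImageReader.mask_path", "masks"])
run(["colmap", "exhaustive_matcher",
     "--database_path", "colmap.db"])
run(["colmap", "mapper",
     "--database_path", "colmap.db",
     "--image_path", "frames",
     "--output_path", "sparse"])
```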
For the DAVIS example, most of the calibration was done using the camera calibration tool in Nuke, with some manual selection of the detected keypoints.
Thank you for your reply. I noticed that you mentioned using ORB-SLAM2 and COLMAP to produce camera pose estimates in your paper. I wonder whether that approach can succeed without masks or manual selection when there are only a small number of moving foreground objects.
ORB-SLAM typically works on reasonably dynamic scenes, but it does require camera intrinsics. I think you can assume a reasonable focal length and pass the keypoints and camera calibration to COLMAP for further optimization.
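In case it helps, a sketch of seeding COLMAP with an assumed focal length and letting its bundle adjustment refine it; the `1.2 * max(W, H)` prior, the frame size, and the paths are illustrative assumptions on my part, not values from the paper:

```python
# Sketch: seed COLMAP with an assumed focal length and let bundle
# adjustment refine it. All numbers and paths are illustrative.
import subprocess

W, H = 854, 480            # e.g. DAVIS 480p frames (assumed)
f = 1.2 * max(W, H)        # rule-of-thumb focal-length prior (assumed)
cx, cy = W / 2.0, H / 2.0  # assume principal point at the image center

subprocess.run([
    "colmap", "feature_extractor",
    "--database_path", "colmap.db",
    "--image_path", "frames",
    "--ImageReader.single_camera", "1",            # one shared camera
    "--ImageReader.camera_model", "SIMPLE_PINHOLE",
    "--ImageReader.camera_params", f"{f},{cx},{cy}",
], check=True)

subprocess.run(["colmap", "exhaustive_matcher",
                "--database_path", "colmap.db"], check=True)

# The mapper refines the focal length during bundle adjustment by
# default (--Mapper.ba_refine_focal_length 1), so the prior above
# only needs to be in the right ballpark.
subprocess.run([
    "colmap", "mapper",
    "--database_path", "colmap.db",
    "--image_path", "frames",
    "--output_path", "sparse",
], check=True)
```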
Thank you very much😁
I want to confirm the pose convention required by the preprocessing. Is the coordinate system of the poses consistent with COLMAP's, i.e., the world-to-camera transformation Tcw?
Hi, we assume an x-right, y-down image coordinate system with the origin at the top left, and the pose matrices in the npz files are camera-to-world transformations.
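If it helps anyone: COLMAP stores poses as world-to-camera (`x_cam = R @ x_world + t`), so they need to be inverted before writing the npz files. A small sketch of the conversion; only the camera-to-world convention is from this thread, the rest is illustrative:

```python
# Sketch: invert a COLMAP world->camera pose [R|t] into the 4x4
# camera->world matrix the npz files expect.
import numpy as np

def world2cam_to_cam2world(R, t):
    """R: 3x3 rotation, t: 3-vector, with x_cam = R @ x_world + t."""
    c2w = np.eye(4)
    c2w[:3, :3] = R.T          # inverse rotation
    c2w[:3, 3] = -R.T @ t      # camera center in world coordinates
    return c2w
```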
One additional observation: does the text in the output video mix up "refined" and "initial"? It looks like the first one is the initial result and the second one is the refined result.
I think the order is correct; the initial depth will flicker due to temporal inconsistency.
The refined depth might suffer some detail loss due to flow inaccuracies on fast-moving or thin structures.
Thank you for your reply. When I tested some road-scene videos, I found that the refinement blurs a lot of distant detail, even though the network's initial estimate still retains much of it. Do you have any suggestions for adjusting the network parameters? I suspect the optical flow is the main cause. Could you give me some advice?
Hi, since our method takes optical flow and camera poses as geometric cues, objects farther away in the scene need more accurate flow and larger baselines. Single-image depth maps are trained in a supervised way and are therefore agnostic to such issues, but they are not temporally consistent.
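As a back-of-the-envelope illustration of why distant points are harder (standard two-view triangulation geometry, not a formula from the paper): with focal length f, baseline B, and disparity d, depth is Z = f·B/d, so a flow error delta_d produces a depth error of roughly Z²·delta_d/(f·B), growing quadratically with distance:

```python
# Back-of-the-envelope sketch of triangulation sensitivity: for a
# rectified two-view setup, Z = f*B/d, so a disparity (flow) error
# delta_d maps to a depth error of about Z**2 * delta_d / (f * B).
# All numbers below are made-up illustrations.
def depth_error(z, f, baseline, flow_err_px):
    """Approximate depth error at depth z (same units as baseline)."""
    return z**2 * flow_err_px / (f * baseline)

f = 1000.0      # focal length in pixels (assumed)
flow_err = 0.5  # half-pixel flow error (assumed)
for z in (5.0, 20.0, 80.0):  # near, mid, far points in meters
    print(z, depth_error(z, f, baseline=0.1, flow_err_px=flow_err))
# Far points need either more accurate flow or a larger baseline.
```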
Thank you for your reply.
Can you provide the code for the preprocessing part? For dynamic videos, how can I obtain accurate camera poses and intrinsics K? I see that you use DAVIS as an example; I want to know how to handle the other videos in this dataset.