google / dynamic-video-depth

Code for the SIGGRAPH 2021 paper "Consistent Depth of Moving Objects in Video".
https://dynamic-video-depth.github.io
Apache License 2.0

question about the Pre-processing #1

Closed Robertwyq closed 3 years ago

Robertwyq commented 3 years ago

Can you provide the code for the preprocessing part? For dynamic videos, how do you obtain accurate camera poses and intrinsics K? I see you use DAVIS as an example; I would like to know how to handle the other videos in that dataset.

ztzhang commented 3 years ago

Hi,

Most of the preprocessing was done using internal tools at Google, but as a rule of thumb, it usually works well to use Mask R-CNN to segment the foreground objects and then run COLMAP with the segmented masks.
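
For other videos, a minimal sketch of that recipe might look like the following. This is not the authors' internal pipeline: the paths, the 0.5 score threshold, and the use of exhaustive matching are my own placeholder assumptions, and it assumes the `colmap` binary is on your PATH.

```python
# Rough sketch: segment moving objects with Mask R-CNN, then run COLMAP
# with masks so features are only extracted on the static background.
import subprocess
from pathlib import Path

import numpy as np
import torch
import torchvision
from PIL import Image

image_dir, mask_dir = Path("frames"), Path("masks")
mask_dir.mkdir(exist_ok=True)

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True).eval()

for image_path in sorted(image_dir.glob("*.jpg")):
    img = torchvision.transforms.functional.to_tensor(
        Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        pred = model([img])[0]
    # Union of all confident instance masks, treated as dynamic foreground.
    keep = pred["scores"] > 0.5
    if keep.any():
        fg = (pred["masks"][keep, 0] > 0.5).any(dim=0).numpy()
    else:
        fg = np.zeros(img.shape[1:], dtype=bool)
    # COLMAP mask convention: for frames/xxx.jpg the mask is masks/xxx.jpg.png,
    # and features are NOT extracted where the mask is black, so zero out the
    # moving objects and keep the static background white.
    Image.fromarray(np.where(fg, 0, 255).astype(np.uint8)).save(
        mask_dir / (image_path.name + ".png"))

db = "colmap.db"
Path("sparse").mkdir(exist_ok=True)
subprocess.run(["colmap", "feature_extractor", "--database_path", db,
                "--image_path", str(image_dir),
                "--ImageReader.mask_path", str(mask_dir)], check=True)
subprocess.run(["colmap", "exhaustive_matcher", "--database_path", db], check=True)
subprocess.run(["colmap", "mapper", "--database_path", db,
                "--image_path", str(image_dir), "--output_path", "sparse"], check=True)
```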

For the DAVIS example, most of the calibration was done using the camera calibration tool in Nuke, with some manual selection of detected keypoints.

Robertwyq commented 3 years ago

Thank you for your reply. I noticed that you mentioned using ORB-SLAM2 and COLMAP to produce camera pose estimates in your paper. I wonder whether the above methods can succeed without masks or manual keypoint selection when only a small number of foreground objects are moving.

ztzhang commented 3 years ago

ORB-SLAM typically works on reasonably dynamic scenes, but it does require camera intrinsics. I think you can assume a reasonable focal length and pass the keypoints and camera calibration to COLMAP for further optimization.
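
As a rough sketch of what "assume a reasonable focal length" could mean in practice (the 60-degree field-of-view default below is my own guess, not a value from the paper):

```python
# Build pinhole intrinsics from the image size and a guessed horizontal FOV.
# Consumer cameras often fall roughly in the 50-70 degree range; COLMAP can
# refine the focal length later during bundle adjustment.
import numpy as np

def guess_intrinsics(width, height, fov_deg=60.0):
    f = 0.5 * width / np.tan(0.5 * np.radians(fov_deg))
    return np.array([[f, 0.0, width / 2.0],
                     [0.0, f, height / 2.0],
                     [0.0, 0.0, 1.0]])

K = guess_intrinsics(1280, 720)  # e.g. a 720p video
```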

Robertwyq commented 3 years ago

Thank you very much😁

Robertwyq commented 3 years ago

I want to confirm the pose format required for preprocessing. Is the pose coordinate system consistent with COLMAP's, i.e., the world-to-camera transformation Tcw?

ztzhang commented 3 years ago

Hi, we assume an image coordinate system with x right and y down, where the origin is at the top left, and the pose matrices in the npz files are camera-to-world transformations.
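
Since COLMAP stores poses as world-to-camera (a `qvec` quaternion in w, x, y, z order plus a `tvec` translation), a conversion step is needed when exporting COLMAP output to such npz files. A minimal sketch, assuming standard COLMAP conventions and not verified against this repo's loaders:

```python
# Convert a COLMAP world-to-camera pose into a 4x4 camera-to-world matrix.
import numpy as np

def qvec_to_rotmat(q):
    w, x, y, z = q
    return np.array([
        [1 - 2*y*y - 2*z*z, 2*x*y - 2*z*w,     2*x*z + 2*y*w],
        [2*x*y + 2*z*w,     1 - 2*x*x - 2*z*z, 2*y*z - 2*x*w],
        [2*x*z - 2*y*w,     2*y*z + 2*x*w,     1 - 2*x*x - 2*y*y],
    ])

def colmap_to_cam2world(qvec, tvec):
    R_wc = qvec_to_rotmat(qvec)              # world -> camera rotation
    T = np.eye(4)
    T[:3, :3] = R_wc.T                       # inverse rotation
    T[:3, 3] = -R_wc.T @ np.asarray(tvec)    # camera center in world coords
    return T
```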

Robertwyq commented 3 years ago

One additional finding: are the "refined" and "initial" labels in the output video swapped? It looks like the first one is the initial result and the second one is the refined result.

ztzhang commented 3 years ago

I think the order is correct; the initial depth will flicker due to temporal inconsistency.

The refined depth might suffer some loss of detail due to flow inaccuracies on fast-moving or thin structures.

Robertwyq commented 3 years ago

Thank you for your reply. When I tested some road-scene videos, I found that the refinement blurs a lot of distant detail, even though the network's initial estimate still contains that detail. I suspect this is mainly due to the influence of the flow information. Do you have any suggestions for adjusting the network parameters?

ztzhang commented 3 years ago

Hi, since our method takes optical flow and camera poses as geometric cues, objects farther away in the scene need more accurate flow and larger baselines. Single-image depth maps are trained in a supervised way and are therefore agnostic to such issues, but they are not temporally consistent.
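
As a back-of-the-envelope illustration of this point (standard two-view triangulation error, not a formula from the paper): with depth z = f·b/d for focal length f, baseline b, and disparity d, a flow error of d_err pixels perturbs depth by roughly z²·d_err/(f·b), so the error grows quadratically with distance and shrinks with baseline.

```python
# Depth error from a small flow/disparity error, z = f * b / d:
# dz ~ z**2 * d_err / (f * b). All values below are illustrative guesses.
def depth_error(z, focal_px, baseline_m, flow_err_px):
    return z**2 * flow_err_px / (focal_px * baseline_m)

f_px, err_px = 1000.0, 0.5  # assumed focal length and half-pixel flow error
for z in (5.0, 20.0, 80.0):
    for b in (0.1, 0.5):
        print(f"z={z:5.1f} m  baseline={b:.1f} m  "
              f"depth error ~ {depth_error(z, f_px, b, err_px):6.2f} m")
```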

Robertwyq commented 3 years ago

Thank you for your reply.