barbararoessle / dense_depth_priors_nerf

Dense Depth Priors for Neural Radiance Fields from Sparse Input Views
MIT License

Models to reproduce the results on ScanNet and Matterport3D #23

Open csBob123 opened 1 year ago

csBob123 commented 1 year ago

Hi,

Thank you so much for your work. Will you provide your pre-trained models to reproduce the results in your paper (e.g., Tab. 2 and Tab. 3)?

Thank you for your attention.

barbararoessle commented 1 year ago

ScanNet Pretrained depth prior network. At the time we wrote the paper, we had 3 scenes. They are listed in the supplementary: scene0710_00, scene0758_00, scene0781_00, so the metrics in the paper are based on those. The scenes scene0708_00 and scene0738_00 were only added while preparing the code release. The preprocessed ScanNet scenes are here.

Matterport3D Pretrained depth prior network. I uploaded the 3 Matterport3D rooms. The sparse depth is generated as described in the paragraph "Matterport3D" of Section 4.1 in the paper. For Matterport3D the config differs slightly from the ScanNet setup explained in the README.md. These are the parameters that differ from ScanNet: `--depth_completion_input_h 256 --input_ch_cam 16 --depth_loss_weight 0.007`
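Putting those differences together, a Matterport3D training invocation might look like the sketch below. The `run_nerf.py` entry point and the placeholder arguments are assumptions carried over from the ScanNet instructions in the README; only the last three flags are the Matterport3D-specific values confirmed in this thread.

```shell
# Sketch of a Matterport3D training run. Entry point and placeholder
# arguments are assumed from the ScanNet README; only the last three
# flags are confirmed Matterport3D-specific values from this thread.
python run_nerf.py train \
    --scene_id <matterport3d_room> \
    --data_dir <path_to_preprocessed_data> \
    --depth_completion_input_h 256 \
    --input_ch_cam 16 \
    --depth_loss_weight 0.007
```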

csBob123 commented 1 year ago

> ScanNet Pretrained depth prior network. At the time we wrote the paper, we had 3 scenes. They are listed in the supplementary: scene0710_00, scene0758_00, scene0781_00, so the metrics in the paper are based on those. The scenes scene0708_00 and scene0738_00 were only added while preparing the code release. The preprocessed ScanNet scenes are here.
>
> Matterport3D Pretrained depth prior network. I uploaded the 3 Matterport3D rooms. The sparse depth is generated as described in the paragraph "Matterport3D" of Section 4.1 in the paper. For Matterport3D the config differs slightly from the ScanNet setup explained in the README.md. These are the parameters that differ from ScanNet: `--depth_completion_input_h 256 --input_ch_cam 16 --depth_loss_weight 0.007`

Thank you for your explanation.

I found there are three/five scenes (scene0708_00, scene0710_00, .., scene0781_00) on ScanNet, so after training we get three/five independent per-scene accuracy files. I am confused about the overall accuracy in Tab. 2: since we only have these independent per-scene accuracy files, do you use any extra code to compute the overall accuracy in Tab. 2?

Thank you so much for your response.

barbararoessle commented 1 year ago

The results in Tab. 2 are an average over the results from the scenes (scene0710_00, scene0758_00, scene0781_00).

csBob123 commented 1 year ago

> The results in Tab. 2 are an average over the results from the scenes (scene0710_00, scene0758_00, scene0781_00).

Thank you for the details. So if we have three PSNR values (20.32, 20.53, 21.10) for the three scenes, the overall result should be (20.32 + 20.53 + 21.10) / 3.0? And can we do the same for SSIM and RMSE?

Many thanks!
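The averaging described above can be sketched as follows. This is a minimal illustration, not code from the repository; the PSNR values are the hypothetical numbers from this thread.

```python
# Average per-scene metrics into one overall number, as done for Tab. 2.
# The PSNR values below are the hypothetical numbers from this thread.
per_scene_psnr = {
    "scene0710_00": 20.32,
    "scene0758_00": 20.53,
    "scene0781_00": 21.10,
}

# Plain arithmetic mean over the scenes; the same averaging applies
# to SSIM and RMSE per-scene values.
overall_psnr = sum(per_scene_psnr.values()) / len(per_scene_psnr)
print(f"overall PSNR: {overall_psnr:.2f}")  # prints "overall PSNR: 20.65"
```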

csBob123 commented 1 year ago

> ScanNet Pretrained depth prior network. At the time we wrote the paper, we had 3 scenes. They are listed in the supplementary: scene0710_00, scene0758_00, scene0781_00, so the metrics in the paper are based on those. The scenes scene0708_00 and scene0738_00 were only added while preparing the code release. The preprocessed ScanNet scenes are here.
>
> Matterport3D Pretrained depth prior network. I uploaded the 3 Matterport3D rooms. The sparse depth is generated as described in the paragraph "Matterport3D" of Section 4.1 in the paper. For Matterport3D the config differs slightly from the ScanNet setup explained in the README.md. These are the parameters that differ from ScanNet: `--depth_completion_input_h 256 --input_ch_cam 16 --depth_loss_weight 0.007`

Thank you so much for your detailed reply. I found that the above links to the depth completion network for ScanNet and Matterport3D are the same. Do you use the same depth completion network for both ScanNet and Matterport3D?