dunbar12138 / DSNeRF

Code release for DS-NeRF (Depth-supervised Neural Radiance Fields)
https://www.cs.cmu.edu/~dsnerf/
MIT License

how to generate sparse 3d points given known cameras #64

Closed Wanggcong closed 1 year ago

Wanggcong commented 1 year ago

Hi, thank you for the work.

I have a question about the sparse 3d points given known cameras. I have read this issue and this issue.

I also carefully read this link of the COLMAP documentation. It involves preparing several files, namely cameras.txt, images.txt, and points3D.txt. I would like to know how to prepare these so that I can reproduce your setup for a fair comparison.
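Based on my reading of the COLMAP docs, the known-pose model is a text model with cameras.txt, images.txt (one pose line plus an empty 2D-point line per image), and an empty points3D.txt. Below is a rough sketch of how I would write such a model; the single shared PINHOLE camera, the world-to-camera [R|t] convention, and all names are my assumptions, not code from this repo.

```python
import os
import numpy as np

def rotmat_to_qvec(R):
    """Naive rotation-matrix -> (qw, qx, qy, qz) conversion (assumes trace > -1)."""
    qw = np.sqrt(max(0.0, 1.0 + R[0, 0] + R[1, 1] + R[2, 2])) / 2.0
    qx = (R[2, 1] - R[1, 2]) / (4.0 * qw)
    qy = (R[0, 2] - R[2, 0]) / (4.0 * qw)
    qz = (R[1, 0] - R[0, 1]) / (4.0 * qw)
    return qw, qx, qy, qz

def write_known_pose_model(out_dir, width, height, fx, fy, cx, cy, image_names, Rs, ts):
    """Write a COLMAP text model with known poses and no 3D points yet.

    Rs / ts are assumed to be world-to-camera rotations (3x3) and translations (3,),
    which is the convention COLMAP uses in images.txt.
    """
    os.makedirs(out_dir, exist_ok=True)
    # cameras.txt: CAMERA_ID MODEL WIDTH HEIGHT PARAMS[] (PINHOLE: fx fy cx cy)
    with open(os.path.join(out_dir, "cameras.txt"), "w") as f:
        f.write(f"1 PINHOLE {width} {height} {fx} {fy} {cx} {cy}\n")
    # images.txt: IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME, then an empty
    # line where the 2D keypoints would go (filled in by COLMAP later).
    with open(os.path.join(out_dir, "images.txt"), "w") as f:
        for i, (name, R, t) in enumerate(zip(image_names, Rs, ts), start=1):
            qw, qx, qy, qz = rotmat_to_qvec(R)
            f.write(f"{i} {qw} {qx} {qy} {qz} {t[0]} {t[1]} {t[2]} 1 {name}\n\n")
    # points3D.txt must exist but stays empty -- triangulation will fill it.
    open(os.path.join(out_dir, "points3D.txt"), "w").close()
```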

I think this is the way (with known camera poses) you prepare the NeRF real (LLFF) and DTU data, since we need unified poses for the training and test images. The steps would be:

(1) Estimate poses from the training and test images together (because we need a unified coordinate frame for training and test). (2) Using only the camera poses of the training images, generate the sparse 3D points; note that we cannot use the test images to generate the sparse 3D points. (A rough sketch of this second step follows below.)
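This is roughly how I expect the triangulation with fixed, known poses to be run (paths are placeholders; the image names written into images.txt have to match the filenames COLMAP registers in its database):

```python
import subprocess

scene = "path/to/scene"  # placeholder scene directory
db = f"{scene}/database.db"

# Extract and match features using only the training images.
subprocess.run(["colmap", "feature_extractor",
                "--database_path", db,
                "--image_path", f"{scene}/train_images"], check=True)
subprocess.run(["colmap", "exhaustive_matcher",
                "--database_path", db], check=True)

# Triangulate against the known-pose model instead of running the mapper,
# so the given camera poses are kept exactly as provided.
subprocess.run(["colmap", "point_triangulator",
                "--database_path", db,
                "--image_path", f"{scene}/train_images",
                "--input_path", f"{scene}/known_pose_model",
                "--output_path", f"{scene}/triangulated"], check=True)
```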

At the moment, the repository only provides the command python imgs2poses.py <your_scenedir>

It would be better if a script like python imgs2poses.py <your_scenedir> <your camera poses> <your camera intrinsics> were provided, so that the preprocessing with known poses can be reproduced exactly for comparison.

Thank you.

dunbar12138 commented 1 year ago

Hi, thanks for your suggestion!

We provide the test poses aligned with training poses in the LLFF dataset here. Let me know if that's still not enough for reproduction or comparison.

For the DTU dataset, we're also considering releasing our pre-processed data with the depth information.

Thanks for your suggestion about updating the script. We'll consider adding it if possible.

woominsong commented 1 year ago

@dunbar12138 Hi, thanks for sharing your inspiring work! Is there any update on releasing the pre-processed DTU dataset with the depth information?