dunbar12138 / DSNeRF

Code release for DS-NeRF (Depth-supervised Neural Radiance Fields)
https://www.cs.cmu.edu/~dsnerf/
MIT License

Do we need to run COLMAP with the exact poses given in the datasets like DTU and Blender #100

Open Dharmendra04 opened 1 year ago

Dharmendra04 commented 1 year ago

In your project, you use img2poses.py to generate poses via COLMAP, and you use the resulting poses_bounds.npy to create both the depth rays and the regular RGB rays.
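(For context, a minimal sketch of reading that file, assuming it follows the standard LLFF layout that img2poses.py writes; the file path is a placeholder:)

```python
import numpy as np

# poses_bounds.npy: one row of 17 values per image (LLFF convention).
data = np.load('poses_bounds.npy')        # shape (N_images, 17)
poses = data[:, :15].reshape(-1, 3, 5)    # 3x4 camera-to-world pose + [H, W, focal] column
bounds = data[:, 15:]                     # per-image near/far depth bounds from COLMAP
print(poses.shape, bounds.shape)
```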

I have a question regarding other datasets such as DTU and Blender. Do we need to run COLMAP using the poses provided with those datasets?

The DTU and Blender datasets ship their own poses in JSON or npy files. Should I feed those poses to COLMAP when generating the sparse point clouds?

I'm wondering whether running COLMAP without providing any poses will yield comparable depth values, since the poses used to produce the ground-truth depths (estimated by COLMAP) and the rendered depths (the datasets' own poses) would then differ.
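(To make the mismatch concrete: a pose-free COLMAP reconstruction is only defined up to a similarity transform, so its depths live at an arbitrary scale relative to the dataset poses. A rough way to check, sketched below with dummy stand-in data; `mean_pairwise_dist` is a hypothetical helper, and a full comparison would also need rotation/translation alignment:)

```python
import numpy as np

def mean_pairwise_dist(centers):
    """Mean distance over all ordered pairs of camera centers, shape (N, 3)."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    n = len(centers)
    return d.sum() / (n * (n - 1))

# Dummy stand-ins; replace with camera centers recovered from COLMAP and
# from the dataset's own pose files, in matching image order.
colmap_centers = np.random.rand(10, 3)
dataset_centers = 2.5 * np.random.rand(10, 3)

# If the two pose sets differ only by a similarity transform, this ratio
# is the factor COLMAP depths must be multiplied by to match the dataset.
scale = mean_pairwise_dist(dataset_centers) / mean_pairwise_dist(colmap_centers)
print(f"COLMAP-to-dataset scale: {scale:.3f}")
```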

I am building a depth-supervised model similar to yours and can only get good results on the LLFF datasets, which is why I would like to know how you ran COLMAP for datasets like Blender and DTU.

dunbar12138 commented 12 months ago

Yes, we run COLMAP with the given poses on DTU.

https://colmap.github.io/faq.html#reconstruct-sparse-dense-model-from-known-camera-poses
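In case it helps others, here is a rough sketch of that FAQ workflow (the colmap subcommands are from the linked page; all paths, intrinsics, and image names below are placeholders): write the dataset's poses into a text model, then let COLMAP triangulate points against those fixed poses.

```python
# Sketch of the "known camera poses" workflow from the COLMAP FAQ.
# The poses go into a text model as world-to-camera quaternion +
# translation, two lines per image (the second line, normally the 2D
# points, stays empty).
import subprocess

# 1. Prepare a text model with the dataset's poses:
#    sparse_known/cameras.txt, e.g. (placeholder intrinsics):
#        1 PINHOLE 1600 1200 2892.33 2883.18 823.20 619.07
#    sparse_known/images.txt, e.g. (placeholder image name):
#        1 qw qx qy qz tx ty tz 1 scan_0001.png
#        <empty line>
#    sparse_known/points3D.txt: empty file.

# 2. Extract and match features as usual. (You may also need to pin the
#    intrinsics here, e.g. via --ImageReader options, so the database
#    cameras agree with cameras.txt.)
subprocess.run(["colmap", "feature_extractor",
                "--database_path", "db.db",
                "--image_path", "images"], check=True)
subprocess.run(["colmap", "exhaustive_matcher",
                "--database_path", "db.db"], check=True)

# 3. Triangulate a sparse point cloud with the poses held fixed.
subprocess.run(["colmap", "point_triangulator",
                "--database_path", "db.db",
                "--image_path", "images",
                "--input_path", "sparse_known",
                "--output_path", "sparse_triangulated"], check=True)
```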

Navaneeth-Sivakumar commented 11 months ago

What about the Blender dataset? Is DS-NeRF compatible with it? Does this NeRF only work with sparse input views, or can we also provide it with many images?