mjmjeong opened this issue 3 years ago
Hi, thanks for sharing your code for this work.

I am currently working on a NeRF-related project where I use the point cloud and camera poses generated by COLMAP in my pipeline. I wanted to understand, from your experience, how you aligned your scene (the point cloud obtained from COLMAP) with the volume that NeRF considers (the near and far bounds of the scene). I saw that the code computes the depth as follows:

`depth = (poses[id_im-1,:3,2].T @ (point3D - poses[id_im-1,:3,3])) * sc`

But did you verify that the `point3D` obtained from COLMAP also lies at the same location in NeRF's volume-rendering space? I am asking because NeRF's `load_llff_data` function applies a number of scaling and rotation operations to the poses loaded from COLMAP, and I was wondering whether that affects the depth values used during training.

I hope I made my point clear. Let me know if not.

Thanks, Aditya
Thanks for your questions and comments.
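For reference, here is a minimal sketch of what that line computes, assuming the pose conventions of the NeRF LLFF loader: column 2 of the 3x4 camera-to-world matrix is the camera's viewing (z) axis, column 3 is the camera center, and `sc` is the same global scale factor that `load_llff_data` applies to the pose translations. The function name is hypothetical.

```python
import numpy as np

def colmap_point_depth(pose: np.ndarray, point3D: np.ndarray, sc: float) -> float:
    """Z-depth of a COLMAP 3D point in one camera's frame, in NeRF's rescaled units.

    Assumes `pose` is a 3x4 camera-to-world matrix whose column 2 is the
    (unit-norm) viewing direction and whose column 3 is the camera center.
    """
    z_axis = pose[:3, 2]       # camera viewing direction, in world coordinates
    cam_center = pose[:3, 3]   # camera center, in world coordinates
    # Project the camera-to-point vector onto the viewing axis:
    depth = z_axis @ (point3D - cam_center)
    # Apply the same global rescaling that load_llff_data applies to the
    # pose translations, so the depth stays consistent with the scene:
    return depth * sc
```

Because `load_llff_data` multiplies the pose translations (and bounds) by this same `sc`, a depth rescaled this way should remain consistent with the rescaled scene; the global rotation/recentering of the poses does not change a point's z-depth relative to its own camera.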
Hello, I would like to ask why the depth is calculated this way. Also, what do `sc` and `bds_raw` mean?
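Not an authoritative answer, but for context: in NeRF's `load_llff_data`, the scene is globally rescaled so that the nearest depth bound maps to roughly `1/bd_factor`. Under that reading, `bds_raw` would be the raw near/far bounds recovered from COLMAP before rescaling, and `sc` the scale factor itself. A sketch (the function name and the `bds_raw` argument name are my assumptions):

```python
import numpy as np

def rescale_scene(poses: np.ndarray, bds_raw: np.ndarray, bd_factor: float = 0.75):
    """Globally rescale poses and bounds, following NeRF's load_llff_data."""
    # Scale so the nearest depth bound maps to roughly 1/bd_factor:
    sc = 1.0 if bd_factor is None else 1.0 / (bds_raw.min() * bd_factor)
    poses = poses.copy()
    poses[:, :3, 3] *= sc   # rescale camera positions
    bds = bds_raw * sc      # rescaled near/far bounds
    return poses, bds, sc
```

The same `sc` is what multiplies the depth in the line quoted above, which keeps the supervised depths in the same units as the rescaled scene.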
Thank you for sharing your code, but I have a question.

In the case of 2-view training, how did you resolve the gap between the 2-view (few-view) data and the test data, for example, differences in the camera-pose coordinate frame and depth scale? If possible, could you share the code for calibrating the camera poses or depth values?
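Not speaking for the authors, but one common way to bridge this gap is to run COLMAP once over all images so that train and test poses share one coordinate frame, or to register the few-view reconstruction to the full reconstruction (which provides the test poses) with a least-squares similarity transform over the camera centers the two reconstructions share, e.g. the Umeyama method. A self-contained sketch; the names in the usage comment are hypothetical:

```python
import numpy as np

def umeyama_alignment(src: np.ndarray, dst: np.ndarray):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping src onto dst, both of shape (N, 3). Umeyama, 1991."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)                 # cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:     # avoid reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)          # variance of src points
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

# Hypothetical usage: map the few-view frame into the full reconstruction's
# frame, then apply the same transform to the sparse points and multiply
# the supervised depths by s to express them in the full frame's units:
# s, R, t = umeyama_alignment(centers_fewview, centers_full)
# centers_aligned = s * centers_fewview @ R.T + t
```

One caveat: with only two shared camera centers the rotation is underdetermined (it is ambiguous about the axis through the two centers), so in the 2-view case you would also need to use camera orientations or shared 3D points as correspondences.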