In your code you use imgs2poses.py to generate poses through COLMAP, and you use the resulting poses_bounds.npy for the LLFF dataset. But for other datasets such as DTU and Blender, do we need to run COLMAP with the poses provided with each dataset?
For example, the DTU and Blender datasets used in NeRF ship with their own poses in JSON or npy files. Should we feed those poses to COLMAP when running it?
I ask because I am building a depth-incorporation model similar to yours, and I only get good results on LLFF scenes. How did you obtain your ground-truth depths for datasets like DTU and Blender: did you run COLMAP with their provided poses, or without supplying any poses?
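For context, here is how I am currently reading the poses_bounds.npy that imgs2poses.py produces for LLFF scenes. This is just an illustrative sketch with a fabricated dummy file (the zeros are not real poses); each row holds a flattened 3x5 camera matrix ([R|t] plus an [H, W, focal] column) followed by the near/far scene bounds:

```python
import numpy as np

# Fabricate a dummy poses_bounds.npy for 4 images, matching the
# (N, 17) layout that imgs2poses.py writes (values here are not real poses).
N = 4
dummy = np.zeros((N, 17))
np.save("poses_bounds.npy", dummy)

data = np.load("poses_bounds.npy")       # shape (N, 17), one row per image
poses = data[:, :15].reshape(-1, 3, 5)   # 3x5 per image: [R|t] + [H, W, focal] column
bounds = data[:, 15:]                    # per-image near/far depth bounds

print(poses.shape, bounds.shape)         # (4, 3, 5) (2 values per image)
```

My question is whether the analogous arrays for DTU/Blender should come from the datasets' own pose files or from a fresh COLMAP run.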