Open YuiNsky opened 5 months ago
I believe depth-nerfacto is hooked up to DepthDataset, where if you don't provide any depth data, ZOE (ZoeDepth, a state-of-the-art monocular depth estimator) depth images will be generated. To be honest, though, I would recommend an approach that I think will perform better:
- Run COLMAP and get the sparse points.
- Run the monocular depth model on the images.
- For each image, compute a scale factor and offset with respect to the COLMAP points visible in that image.
- Multiply the depth image from the model by the scale factor and add the offset.
- Train the NeRF!
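The scale-and-offset step above can be sketched with a simple least-squares fit. This is a minimal NumPy illustration, not the nerfstudio implementation; `mono_depth`, `sparse_depth`, and `mask` are assumed to be aligned per-image arrays, with `mask` marking the pixels where COLMAP produced a depth value.

```python
import numpy as np

def fit_scale_offset(mono_depth, sparse_depth, mask):
    """Least-squares fit of scale s and offset t so that
    s * mono_depth + t approximates sparse_depth at the pixels
    where COLMAP produced a point (mask == True)."""
    x = mono_depth[mask]           # monocular depth at sparse points
    y = sparse_depth[mask]         # COLMAP depth at the same pixels
    A = np.stack([x, np.ones_like(x)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)
    return s, t

# Toy example: pretend the true depth is 2 * mono + 0.5
rng = np.random.default_rng(0)
mono = rng.random((4, 4))
sparse = np.zeros_like(mono)
mask = np.zeros_like(mono, dtype=bool)
mask[::2, ::2] = True                       # a few "COLMAP" pixels
sparse[mask] = 2.0 * mono[mask] + 0.5

s, t = fit_scale_offset(mono, sparse, mask)
aligned = s * mono + t                      # metric-aligned depth map
```

A robust variant (e.g. RANSAC, or a median-based scale estimate) may work better in practice, since COLMAP points can contain outliers.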
As you mentioned, without depth information depth-nerfacto uses ZOE to estimate the depth. However, when I run it I encounter the following error. Any feedback?
/nerfstudio/models/depth_nerfacto.py", line 86, in get_metrics_dict
raise ValueError(
ValueError: Forcing pseudodepth loss, but depth loss type (DepthLossType.DS_NERF) must be one of (<DepthLossType.SPARSENERF_RANKING: 3>,)
Try setting `--pipeline.model.depth-loss-type SPARSENERF_RANKING`
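For example, a full invocation might look like the following (the dataset path `data/scene` is illustrative; substitute your own processed dataset):

```
ns-train depth-nerfacto \
  --pipeline.model.depth-loss-type SPARSENERF_RANKING \
  --data data/scene
```

The error above occurs because generated pseudodepth forces a ranking-style loss, while the default `DS_NERF` loss expects real (e.g. COLMAP-derived) depth supervision.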
Thanks for the great work on this project! I want to know how to use monocular estimated depth to supervise the training, since COLMAP depth is too sparse.