-
Hi,
Thanks for your amazing work.
I have one question: I am trying the fusion scaling demo on a 640x480 image, but the saved depth .npy file has a size of 504x364.
Is this bec…
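One hedged observation that may explain the size mismatch: many ViT-based depth models resize or crop the input so both sides are multiples of the transformer patch size (often 14), and both 504 and 364 are exact multiples of 14, while 640 and 480 are not. The helper below is purely illustrative (`PATCH = 14` is an assumption, not something confirmed by the demo code):

```python
PATCH = 14  # assumed ViT patch size; check the demo's preprocessing to confirm

def is_patch_aligned(w, h, patch=PATCH):
    """True if both sides are exact multiples of the patch size."""
    return w % patch == 0 and h % patch == 0

def snap_down(x, patch=PATCH):
    """Round a side length down to the nearest multiple of the patch size."""
    return x // patch * patch

assert is_patch_aligned(504, 364)      # the saved .npy dimensions
assert not is_patch_aligned(640, 480)  # the original image dimensions
print(snap_down(640), snap_down(480))  # prints: 630 476
```

If this is the cause, the usual fix is simply to resize the predicted depth back to 640x480 after inference.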
-
Hi! I was trying to use project_points() to get a depth map back from a modified point cloud. I noticed a black grid pattern in the output depth map - not sure if I did something wro…
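For context, a black grid pattern is a common artifact of naive forward projection: after the cloud is modified (moved or scaled), the rounded pixel coordinates no longer cover every target pixel, and unfilled pixels keep the background value. The sketch below is not the repo's `project_points()` - it is a generic pinhole z-buffer splat that reproduces the effect:

```python
import numpy as np

def splat_depth(points, K, h, w):
    """Project Nx3 camera-space points with pinhole intrinsics K into an HxW
    depth image, keeping the nearest depth per pixel (z-buffer).
    Pixels that receive no point stay 0 -> the dark grid/holes."""
    depth = np.zeros((h, w), dtype=np.float32)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    u = np.round(K[0, 0] * x / z + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * y / z + K[1, 2]).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (z > 0)
    for ui, vi, zi in zip(u[ok], v[ok], z[ok]):
        if depth[vi, ui] == 0 or zi < depth[vi, ui]:
            depth[vi, ui] = zi
    return depth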
-
Hey @xjqi, thanks for sharing the code. When I ran code.py, I had updated both the prediction files (depth_pred.mat and norm_pred.mat) and the estimation files (depth_estimate.mat and norm_estimate.m…
-
I ran a metric depth model on my input RGB image and got a metric depth prediction as output. After resizing the metric depth prediction to the original input RGB resolution, the visualization (colorized/norma…
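One thing worth checking when resizing a depth map back to the input resolution is the interpolation mode: bilinear resizing blends foreground and background depths at object boundaries, which can show up as halos in the colorized visualization, while nearest-neighbor keeps only original depth values. A minimal numpy-only nearest-neighbor resize, as a sketch:

```python
import numpy as np

def resize_depth_nearest(depth, out_h, out_w):
    """Nearest-neighbor resize of an HxW depth map; never invents
    intermediate depth values at object boundaries."""
    h, w = depth.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return depth[rows[:, None], cols[None, :]]

small = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
big = resize_depth_nearest(small, 4, 4)
print(big.shape)  # prints: (4, 4)
```

Every value in `big` is one of the four original depths; no blended values are introduced.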
-
Hi Xharlie,
thank you very much for publishing your code!
I am trying to apply your method to some data of my own and would like to train MVSNet on it. Could you please provide some guidance on how to…
-
1. Investigate SOTA DNN-based monocular depth estimation models and test their performance on our GPUs
2. Integrate the model into a ROS 2 node
3. (separate task) Generate point clouds from depth estimation out…
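Task 3 above can be sketched as back-projection of the depth map through pinhole intrinsics. The intrinsics here (`fx`, `fy`, `cx`, `cy`) are made-up example values; in a ROS 2 node they would come from a `sensor_msgs/CameraInfo` message:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project an HxW metric depth map (meters) into an Nx3 point
    cloud in the camera frame, dropping invalid (zero-depth) pixels."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

# Example: a flat wall 2 m away, with assumed Kinect-like intrinsics.
depth = np.full((480, 640), 2.0, dtype=np.float32)
pts = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(pts.shape)  # prints: (307200, 3)
```

The resulting array maps directly onto a `sensor_msgs/PointCloud2` payload once packed into the message's point fields.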
-
Thank you for your contribution.
I saw in your metric depth description that the output of the pre-trained model can be used as a disparity map. Now I want to use a custom dataset that includes RGB i…
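For reference, the standard stereo relation for turning a disparity map into metric depth is depth = focal_px * baseline_m / disparity_px. The focal length and baseline below are made-up example values, not taken from any particular dataset:

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """depth [m] = focal [px] * baseline [m] / disparity [px]."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example with hypothetical calibration values:
print(disparity_to_depth(32.0, focal_px=720.0, baseline_m=0.12))  # prints: 2.7
```

Note that a monocular model's "disparity-like" output is usually only relative (up to an unknown scale and shift), so it generally needs alignment against ground truth before this formula gives metric depth.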
-
When using torch.hub for monocular estimation, I got a RuntimeError:
```
model = torch.hub.load('yvanyin/metric3d', 'metric3d_vit_giant2', pretrain=True)
  File "/home/qizhong/miniconda3/envs/ga…
```
-
# **Objectives:**
Verify the filter equations and, if necessary, modify them to use the predictions.
- **Context:** An SSL match is dynamic and fast; because of this, the vision sensors, in…
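As a baseline for verifying the filter equations mentioned above, a minimal constant-velocity Kalman filter (1-D position) looks like the following. All matrices and noise values here are illustrative, not taken from the SSL codebase:

```python
import numpy as np

def kf_step(x, P, z, dt, q=1e-3, r=1e-2):
    """One predict+update cycle of a 1-D constant-velocity Kalman filter.
    State x = [position, velocity]; z = noisy position measurement."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise (illustrative)
    R = np.array([[r]])                     # measurement noise (illustrative)
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track a target moving at 1 m/s, sampled every 20 ms.
x, P = np.zeros(2), np.eye(2)
for t in range(50):
    x, P = kf_step(x, P, np.array([t * 0.02]), dt=0.02)
print(x)  # estimated [position, velocity]; velocity should approach 1.0
```

Using the model's predictions would amount to replacing (or fusing with) the constant-velocity prediction step above.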
-
Hello! Could you please tell me how to scale the depth estimate if my focal length is 1101.8513?
I am using the NYU pretrained model and this scaling equation:
`pred_depth_scaled = pred_depth * 518…`