-
The current 3DGS training pipeline relies heavily on SfM initialization, which introduces significant overhead when scaling to large scenes. Technically, 3DGS should be capable of producing rele…
-
Thanks for your code!
I have a question about MVS4Net.py:
“line 109 outputs = self.mono_depth_decoder(outputs, depth_values[:,0], depth_values[:,1])”
depth_values[:,-1] seems to be d_max rather than d…
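For context, here is a minimal sketch of how such depth hypotheses are commonly constructed in MVS pipelines. The linspace construction, shapes, and example values below are assumptions for illustration, not taken from MVS4Net.py; under a linear sweep, column `0` holds `d_min` and the last column (index `-1`) holds `d_max`, while column `1` is only the second hypothesis.

```python
import numpy as np

def build_depth_hypotheses(d_min, d_max, num_depths=48):
    """Per-sample depth hypotheses as a linear sweep from d_min to d_max.

    Returns shape (B, num_depths): column 0 holds d_min and the last
    column (index -1) holds d_max; column 1 is merely the second sample.
    """
    steps = np.linspace(0.0, 1.0, num_depths)                # (num_depths,)
    return d_min[:, None] + (d_max - d_min)[:, None] * steps[None, :]

d_min = np.array([425.0])   # illustrative DTU-style near plane
d_max = np.array([935.0])   # illustrative DTU-style far plane
depth_values = build_depth_hypotheses(d_min, d_max)

# depth_values[:, 0]  -> 425.0 (d_min)
# depth_values[:, -1] -> 935.0 (d_max)
```

So if the decoder expects the near and far planes, indexing `[:, 0]` and `[:, -1]` would be the natural choice under this construction.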
-
Link to another project: **DPT (Dense Prediction Transformers)** - a state-of-the-art semantic segmentation and monocular depth estimation network
* Top-1 accuracy on Pascal-Context semantic segmenta…
-
Since a lot of excellent work is set to appear at CVPR 2022, we are collecting the newly released papers related to human mesh recovery (a.k.a. 3D human pose and shape estimation).
If you have any pr…
-
Hi there,
I have run the monocular depth estimation model fine-tuned on KITTI on one of my images:
`python run_monodepth.py --model_type=dpt_hybrid_kitti `
I want to extr…
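If the goal is to recover per-pixel depth values from the saved prediction, a minimal sketch follows, assuming the script writes a 16-bit PNG following the KITTI depth convention (value / 256 = metres, 0 = invalid). That encoding is an assumption about the output format, not confirmed from the DPT repo.

```python
import numpy as np

def kitti_png_to_metres(raw):
    """Convert a uint16 depth image to metres under the KITTI convention:
    depth_m = value / 256, with 0 marking pixels that carry no depth."""
    depth_m = raw.astype(np.float32) / 256.0
    depth_m[raw == 0] = np.nan  # mask invalid pixels
    return depth_m

# Tiny synthetic example; in practice the raw array would come from the
# saved PNG, e.g. np.array(Image.open("out.png"), dtype=np.uint16).
raw = np.array([[0, 2560], [256, 5120]], dtype=np.uint16)
depth_m = kitti_png_to_metres(raw)
```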
-
## Method
In this section, we describe our unsupervised framework for monocular depth estimation. We first review the self-supervised training pipeline for monocular depth estimation, and then introd…
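To make the review concrete, the self-supervised pipeline is typically trained with a photometric reprojection loss; one common instantiation (the SSIM/L1 weighting below follows the widely used Monodepth2-style formulation and is an illustrative assumption, not necessarily this paper's exact loss):

```latex
pe(I_a, I_b) = \frac{\alpha}{2}\bigl(1 - \mathrm{SSIM}(I_a, I_b)\bigr)
             + (1 - \alpha)\,\lVert I_a - I_b \rVert_1, \qquad \alpha = 0.85,
\qquad
L_p = \min_{s}\; pe\bigl(I_t,\, I_{s \to t}\bigr),
```

where $I_{s \to t}$ is a source frame warped into the target view using the predicted depth and relative camera pose, and the minimum over sources $s$ handles occlusions.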
-
Is there a function to get the predicted **metric depth** from a video or image? For my use case, I need metric depth in real time.
-
Hi Zhenyu,
the point cloud generation script
https://github.com/zhyever/Monocular-Depth-Estimation-Toolbox/blob/main/tools/misc/visualize_point-cloud.py
seems to be specific to the NYU dataset…
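Since the script appears to hard-code NYU specifics, a dataset-agnostic back-projection can be sketched as follows, assuming a pinhole camera with known intrinsics (`fx`, `fy`, `cx`, `cy` are placeholders you would read from your own dataset's calibration):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (metres) into an (N, 3) camera-frame point
    cloud with the pinhole model: X = (u - cx) z / fx, Y = (v - cy) z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]   # drop invalid (zero-depth) pixels
```

Swapping the intrinsics per dataset (NYU, KITTI, etc.) is then the only change needed.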
-
Thanks for your nice work!
I trained the D_net on the DTU dataset. The training loss declined normally (avg depth_error 5 mm), but on the validation dataset the loss is high (avg depth_error 50 mm). That se…
-
Thanks for sharing the training scripts! I have two questions about the training data. In the data files, I noticed the use of occlusion files. Do they belong to the sub-dataset called Disparity Occlu…