-
It seems that the Hugging Face models for metric depth are not producing depth values.
https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-Indoor-Base-hf
If I download the raw model weig…
-
Hello, thanks for the code.
I tried running it on a COLMAP-based dataset, but I am encountering an issue where the training performance degrades as training progresses. Could you please let me know …
-
First of all, thank you for releasing the code for your CVPR paper.
I am studying depth estimation. Could I get the code for the MSL depth estimation part of the paper?
I would be thankful for your r…
-
Hello,
I noticed that the detection range for this work is 150 metres. Why is the maximum depth of the depth estimation network 110 metres?
-
### Feature request
The image processors of depth estimation models could benefit from a `post_process_depth_estimation` method, similar to the `post_process_object_detection`, `post_process_seman…
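For illustration, the proposed method might look roughly like the sketch below. The name and signature are hypothetical (modeled on the existing `post_process_*` methods), and a real implementation would presumably resize the raw `predicted_depth` tensors with bicubic interpolation rather than the nearest-neighbor indexing shown here for brevity:

```python
import numpy as np

def post_process_depth_estimation(predicted_depth, target_sizes):
    """Hypothetical post-processing for depth-estimation outputs:
    resize each raw depth map back to its original image size.
    Nearest-neighbor indexing is used here only to keep the sketch
    dependency-free; bicubic interpolation would match the other
    post_process_* methods better."""
    results = []
    for depth, (h, w) in zip(predicted_depth, target_sizes):
        src_h, src_w = depth.shape
        # Map each target pixel back to a source pixel index.
        rows = (np.arange(h) * src_h / h).astype(int)
        cols = (np.arange(w) * src_w / w).astype(int)
        resized = depth[rows][:, cols]
        results.append({"predicted_depth": resized})
    return results
```

The point is mainly the interface: one dict per image, keyed like the other post-processing methods, so downstream code can treat depth models uniformly.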
-
As in the title, the metric depth estimation code in V2 doesn't seem to include ZoeDepth, whereas V1 clearly builds on ZoeDepth. Why this change?
-
Useful code for depth estimation:
- https://github.com/lppllppl920/EndoscopyDepthEstimation-Pytorch
![Image](https://user-images.githubusercontent.com/71411474/244472903-6ff3a0dd-e5f8-47d5-a…
-
I can see that your models perform relative depth estimation, but you only provide a training guide for metric depth estimation.
How do I train for relative depth estimation instead?
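For context, relative (affine-invariant) depth training typically optimizes a scale- and shift-invariant objective, as popularized by MiDaS. A minimal numpy sketch of that idea (my assumption about the objective, not code from this repository):

```python
import numpy as np

def ssi_mae(pred, target):
    """Scale- and shift-invariant error (MiDaS-style sketch):
    align the prediction to the target with a per-image
    least-squares scale s and shift b, then take the mean
    absolute error on the aligned prediction."""
    p, t = pred.reshape(-1), target.reshape(-1)
    # Solve min_{s,b} ||s*p + b - t||^2 in closed form.
    A = np.stack([p, np.ones_like(p)], axis=1)
    (s, b), *_ = np.linalg.lstsq(A, t, rcond=None)
    return float(np.abs(s * p + b - t).mean())
```

Because the alignment absorbs any global scale and shift, a prediction that is an affine transform of the ground truth scores (near) zero, which is exactly what makes the supervision relative rather than metric.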
-
Hi,
We are trying to run SVO with ROS on a down-looking camera over a high-frequency texture, using an iPhone camera at 60 fps.
We are observing a strange phenomenon of the ground "rising" as the came…
-
Hi, thank you so much for your dataset. This dataset has helped me a lot in my research.
I know it's been a really long time since the paper and parts of the dataset were released. I thank you guys s…