-
I have a few questions; it would be great if you could clarify them.
1. Could you clarify the number of layers in the depth map produced by the MiDaS model, and how these different layers relate to the depth predict…
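One quick way to check this yourself is to run the published torch.hub model and print the shape of its output. The sketch below assumes the standard `intel-isl/MiDaS` hub entry points and a placeholder `example.jpg`; as far as I understand, the model returns a single-channel map of relative inverse depth rather than a multi-layer volume.
```python
import cv2
import torch

# Load the small MiDaS variant and its matching preprocessing transform
# (assumes the standard torch.hub entry points published by intel-isl/MiDaS).
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform

# "example.jpg" is a placeholder input image.
img = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)

with torch.no_grad():
    prediction = midas(transform(img))

# Expected shape: (1, H, W) -- a single-channel relative (inverse) depth map.
print(prediction.shape)
```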
-
First, thank you for sharing this great project. I have two questions.
In dgp_dataset.py, it seems the input depth map and the GT depth map are the same; in PackNet-SAN, are you using some downsampling method w…
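For context, one common way to turn a dense GT depth map into a sparser input is to keep only a random subset of the valid pixels. The sketch below is a generic illustration of that idea with made-up parameters, not the actual logic in dgp_dataset.py.
```python
import numpy as np

def sparsify_depth(depth, keep_ratio=0.05, rng=None):
    """Zero out all but a random fraction of the valid (non-zero) depth pixels.

    A generic illustration of one possible way to "downsample" a dense GT
    depth map into a sparse input map; the real pipeline may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    sparse = np.zeros_like(depth)
    valid = np.flatnonzero(depth > 0)
    keep = rng.choice(valid, size=max(1, int(len(valid) * keep_ratio)), replace=False)
    sparse.flat[keep] = depth.flat[keep]
    return sparse

# Example: a fake 4x4 dense depth map reduced to ~25% of its valid pixels.
dense = np.random.uniform(1.0, 10.0, size=(4, 4)).astype(np.float32)
print(sparsify_depth(dense, keep_ratio=0.25))
```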
-
### Checklist
- [X] I have searched for [similar issues](https://github.com/isl-org/Open3D/issues).
- [X] For Python issues, I have tested with the [latest development wheel](https://www.open3d.org/d…
-
Currently we're using 32-bit depth buffers.
We could probably get away with 24-bit depth, or maybe even 16-bit depth for shadow maps once we improve precision there.
Fewer bits = faster to write and …
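As a rough sense of the bandwidth at stake, the sketch below (my own back-of-the-envelope numbers, not from the project) compares the raw size of a depth buffer at 32/24/16 bits for a hypothetical 2048×2048 shadow map.
```python
# Back-of-the-envelope size of a single depth buffer at different bit depths.
# Note: many GPUs pad 24-bit depth to 32 bits in memory (e.g. D24S8-style
# formats), so the real saving often only materialises at 16 bits.
width, height = 2048, 2048  # hypothetical shadow-map resolution

for bits in (32, 24, 16):
    size_mib = width * height * bits / 8 / (1024 * 1024)
    print(f"{bits:>2}-bit depth: {size_mib:.1f} MiB per {width}x{height} buffer")
```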
-
Hi,
I tried visualizing the depth map on one of the test images, but I am getting some blurring in the depth map:
On the other hand, models like ZoeDepth produce a sharper depth map:
![dow…
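For reference, this is roughly how I normalize and colour-map a raw prediction before comparing models; the function name and colormap are my own choices, and the interpolation used when resizing the prediction back to image resolution can noticeably affect how sharp the result looks.
```python
import cv2
import numpy as np

def colorize_depth(depth, out_size=None):
    """Normalize a raw depth/disparity prediction to [0, 255] and apply a colormap.

    `out_size` is an optional (width, height); the interpolation mode used when
    resizing changes how sharp or smooth the visualization appears.
    """
    d = depth.astype(np.float32)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)
    vis = (d * 255).astype(np.uint8)
    if out_size is not None:
        vis = cv2.resize(vis, out_size, interpolation=cv2.INTER_NEAREST)
    return cv2.applyColorMap(vis, cv2.COLORMAP_INFERNO)

# Example with a dummy prediction.
pred = np.random.rand(192, 256).astype(np.float32)
cv2.imwrite("depth_vis.png", colorize_depth(pred, out_size=(512, 384)))
```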
-
Great work!
I'm trying to experiment with depth maps. Usually for 3D reconstruction it's great to have a normals + depth pair (like in the humanNORM paper), but adding a 3rd domain with rendered de…
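In case it helps, surface normals can be approximated directly from a depth map via its image-space gradients; the sketch below is a simplified version that ignores camera intrinsics, so treat it as an illustration rather than the method used in humanNORM.
```python
import numpy as np

def normals_from_depth(depth):
    """Approximate per-pixel surface normals from a depth map.

    Uses image-space gradients only (no camera intrinsics), which is a common
    simplification; normals are returned with unit length.
    """
    dz_dv, dz_du = np.gradient(depth.astype(np.float32))  # rows (v), cols (u)
    # The normal is proportional to (-dz/du, -dz/dv, 1), then normalized.
    normals = np.dstack((-dz_du, -dz_dv, np.ones_like(depth, dtype=np.float32)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return normals

# Example: a synthetic slanted plane yields the same normal at every pixel.
u = np.linspace(0, 1, 64)
depth = np.tile(u, (64, 1))  # depth increases from left to right
print(normals_from_depth(depth)[32, 32])
```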
-
Thank you for your contribution.
I saw in your metric depth description that the output of the pre-trained model can be used as a disparity map. Now, I want to use a custom dataset that includes RGB i…
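For anyone converting such an output to depth: with a calibrated stereo-style disparity the usual relation is depth = focal_length × baseline / disparity, while relative (scale- and shift-ambiguous) predictions first need an affine alignment. The sketch below only illustrates the calibrated case, with made-up calibration values.
```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """Convert a disparity map (pixels) to metric depth (metres).

    Assumes a calibrated setup where depth = focal_px * baseline_m / disparity;
    relative predictions (unknown scale/shift) need alignment before this step.
    """
    return focal_px * baseline_m / np.maximum(disparity, eps)

# Example with hypothetical calibration values.
disparity = np.random.uniform(5.0, 50.0, size=(4, 4)).astype(np.float32)
depth = disparity_to_depth(disparity, focal_px=720.0, baseline_m=0.54)
print(depth)
```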
-
Hmm, I'm used to depth maps where white is closer and black or grey areas are further back. Did I miss something, or are there two ways to handle this?
Yours..
![image](https://user-images.git…
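Both conventions are common: plain depth (far = bright) versus inverse depth/disparity (near = bright). Flipping between them for display is just a normalization plus inversion; here is a minimal sketch with my own function name.
```python
import numpy as np

def to_near_is_white(depth):
    """Remap a depth map so that nearer pixels appear brighter.

    Normalizes to [0, 1] and inverts; the same map shown un-inverted would
    have white = far instead.
    """
    d = depth.astype(np.float32)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)
    return 1.0 - d

# Example: the nearest pixel (smallest depth) becomes 1.0 (white).
depth = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)
print(to_near_is_white(depth))
```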
-
Hi,
It's a very nice project with good speed (10 fps), but the depth map is horrible.
I don't know where it comes from, but in the end the depth map is wrong.
Maybe I built it wrong?
-
Hi author, I noticed that when rendering depth with Blender, there is a step like the following:
```python
# Presumably a compositor Map Value node: output = (input + offset) * size,
# so this maps raw depth in [0.5, 2.5] onto the [0, 1] range.
map.offset[0] = -0.5
map.size[0] = 1 / (2.5 - 0.5)
```
I would like to ask why there is an off…