-
https://github.com/kskin/WaterGAN/issues/3
The code seems to fail without the depth data (*.mat).
-
Hello, @poier , George.
Thanks for sharing.
I wonder whether this model can be fitted to a 2D hand image?
For example, my input is a hand image from a normal RGB camera, and the expected output is a 3D de…
-
Hi there, great work! Really appreciate that you open-sourced the code so soon!
I have some questions about the diffusion and denoising process.
The image shown in the README is really impressiv…
-
Hello
I was wondering how you calculated depth_scale = 5000, and how you calculated the max and min depth.
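For context, a common convention in RGB-D tooling (e.g. the TUM RGB-D format) is that 16-bit depth PNG values are divided by a fixed scale factor to obtain metres. A minimal sketch, assuming depth_scale = 5000 means raw / 5000 = metres and that max/min depth are simply taken over the valid (non-zero) pixels; this is an illustration of the convention, not the repository's actual code:

```python
import numpy as np

def depth_to_meters(raw_depth, depth_scale=5000.0):
    """Convert raw 16-bit depth values to metres.

    Assumes the common convention raw / depth_scale = metres;
    zero pixels mark missing depth and are excluded from the stats.
    """
    depth_m = raw_depth.astype(np.float32) / depth_scale
    valid = depth_m[raw_depth > 0]
    d_min = float(valid.min()) if valid.size else 0.0
    d_max = float(valid.max()) if valid.size else 0.0
    return depth_m, d_min, d_max

# Example: a fake 16-bit depth map with one missing (zero) pixel.
raw = np.array([[0, 5000], [10000, 2500]], dtype=np.uint16)
depth_m, d_min, d_max = depth_to_meters(raw)
# depth in metres: [[0.0, 1.0], [2.0, 0.5]]; min = 0.5, max = 2.0
```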
-
Hi there,
Could you please tell me what camera intrinsics are used for the pretrained models? I would like to see some depth results on a custom input in the form of a pointcloud, similar to #2. It …
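For reference, once the intrinsics (fx, fy, cx, cy) are known, back-projecting a depth map to a point cloud follows the standard pinhole model. A minimal sketch with placeholder intrinsics (not the pretrained models' actual values):

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in metres) to an N x 3 point cloud.

    Standard pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    Pixels with zero depth are dropped as invalid.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

# Example with made-up intrinsics (fx = fy = 500, principal point at centre).
depth = np.ones((4, 4), dtype=np.float32)
pts = depth_to_pointcloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
# 16 valid points, each at depth 1 m
```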
-
I find that you use all three views of the depth images for training in \denseReg-master\data\nyu.py:
def loadAnnotation(self, is_trun=False):
    '''is_trun:
    True: to load 14 joi…
-
I have been trying to train a new ZoeDepth_N model on the NYUv2 dataset with the more efficient DPT_SwinV2_L_384 MiDaS backbone for real-time performance. However, it is not clear from the current doc…
-
Hi!
Great work, I am very interested to try it out!
I can't find how the text embedding (the nyu_class_embeddings_my_captions.pth in your code) is created. Could you please explain it
in de…
-
Hi,
I'm trying to run inference on images captured from my own camera. However, I'm a little confused about cam_pose and vox_origin, which are inputs when using the NYUv2 dataset.
I t…
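For what it's worth, the usual roles of these inputs in scene-completion pipelines: cam_pose is a 4x4 camera-to-world extrinsic, and vox_origin is the world-space minimum corner of the voxel grid. A hedged sketch (function name, conventions, and voxel size are assumptions, not taken from the repo) of mapping a world-space point to voxel indices:

```python
import numpy as np

def world_to_voxel(point_world, vox_origin, voxel_size=0.08):
    """Map a world-space point to integer voxel indices.

    Assumes vox_origin is the minimum corner of the grid and
    voxel_size is the edge length of one voxel in metres.
    """
    return np.floor((point_world - vox_origin) / voxel_size).astype(int)

# Example: grid origin at (0, 0, 0), 8 cm voxels.
vox_origin = np.array([0.0, 0.0, 0.0])
idx = world_to_voxel(np.array([0.2, 0.5, 1.0]), vox_origin)
# idx = [2, 6, 12]
```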
-
How is the prior_mean calculated?
Thanks~