-
Hello,
Thank you very much for your work; it truly produces spectacular depth images.
I am trying to generate the point cloud and am wondering whether I have done it correctly. I have obtained the follo…
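For reference, a minimal sketch of the standard pinhole back-projection from a metric depth map to a point cloud. The intrinsics `fx, fy, cx, cy` must come from the camera that produced the depth (for NYUv2 they are provided with the dataset toolbox); the values in the toy call below are illustrative only, not real camera parameters.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a metric depth map (H, W) into an (N, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel column/row indices
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a flat 2x2 depth map at 1 m with unit focal length.
pts = depth_to_pointcloud(np.ones((2, 2)), fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```

A quick sanity check on a real scene is to dump `pts` to a `.ply` and view it in MeshLab: planar surfaces in the image should come out planar in 3D if the intrinsics and depth units are right.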
-
I was trying to apply this model to my own data and was not getting good results. As a sanity check, I ran the NYUv2 dataset through my code, and those results seem to be in line with the numbers reported in the ViT-Lens paper.
…
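When NYUv2 numbers match the paper but custom data looks bad, a common culprit is a scale or depth-range mismatch rather than a model failure. A small sketch of the standard monocular-depth metrics (abs_rel, RMSE, δ<1.25) can make this concrete; the `min_depth`/`max_depth` clipping values below mirror typical NYUv2 evaluation settings but are assumptions for your data.

```python
import numpy as np

def depth_metrics(pred, gt, min_depth=1e-3, max_depth=10.0):
    """Standard monocular-depth metrics over valid ground-truth pixels."""
    valid = (gt > min_depth) & (gt < max_depth)
    pred, gt = pred[valid], gt[valid]
    abs_rel = np.mean(np.abs(pred - gt) / gt)            # mean relative error
    rmse = np.sqrt(np.mean((pred - gt) ** 2))            # root mean squared error
    ratio = np.maximum(pred / gt, gt / pred)
    delta1 = np.mean(ratio < 1.25)                       # accuracy under threshold
    return abs_rel, rmse, delta1

# Toy check: a uniform 10% over-prediction.
gt = np.full((4, 4), 2.0)
pred = np.full((4, 4), 2.2)
abs_rel, rmse, delta1 = depth_metrics(pred, gt)
```

If abs_rel is large but δ<1.25 after median scaling (`pred *= np.median(gt) / np.median(pred)`) is fine, the model's relative depth is good and only the metric scale is off for your camera.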
-
Hi,
The code outputs masks and plane parameters when evaluating the NYU dataset. If I want to get point-cloud data (xyz + rgb) for a plane such as the ground, do I need to calculate the point cloud based o…
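One straightforward route, sketched below, is to back-project only the pixels inside the predicted plane mask and attach their colors. This assumes the mask is a binary array aligned with the depth and RGB images; using the predicted plane parameters instead of the depth map is also possible but this depth-based version is the simplest.

```python
import numpy as np

def plane_to_xyzrgb(depth, rgb, mask, fx, fy, cx, cy):
    """Back-project masked pixels to an (N, 6) array of xyz + rgb."""
    v, u = np.nonzero(mask)               # rows/cols of pixels on the plane
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    xyz = np.stack([x, y, z], axis=1)
    return np.hstack([xyz, rgb[v, u].astype(np.float64)])

# Toy example: one masked pixel at depth 1 m, colored red.
depth = np.ones((2, 2))
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[1, 1] = [255, 0, 0]
mask = np.array([[False, False], [False, True]])
cloud = plane_to_xyzrgb(depth, rgb, mask, fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```

The intrinsics here are placeholders; for NYU frames you would substitute the Kinect parameters shipped with the dataset toolbox.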
-
Hi,
Based on [this](https://github.com/LiheYoung/Depth-Anything/tree/main/metric_depth) document,
I followed every step,
but I don't understand this part of the document:
"Please follow [ZoeDepth](https:…
-
Hello! Thanks for sharing your code! I have run it successfully. But when I ran sc_depth v2, I couldn't reproduce the results in the paper; there is a gap. I also ran the provided mo…
-
I'd like to share some code.
```matlab
nyufile = fullfile('dataset/', 'nyu_depth_v2_labeled.mat');
outDir = fullfile('output/', 'sgupta', 'datasets', 'nyud2', 'datacopy');
mkdir(fullfile(outDir,…
-
https://github.com/kskin/WaterGAN/issues/3
The code seems to fail without depth data (`*.mat`).
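If the missing `.mat` depth files are the problem, a minimal way to produce one is `scipy.io.savemat`. Note the variable key `'depth'`, the file name, and the array shape below are all hypothetical; check what key and layout the WaterGAN loader actually expects before generating files this way.

```python
import numpy as np
from scipy.io import savemat, loadmat

# Hypothetical: a 480x640 float32 depth map saved under the key 'depth'.
depth = np.random.rand(480, 640).astype(np.float32)
savemat('depth_0001.mat', {'depth': depth})

# Round-trip it to confirm the file is readable.
loaded = loadmat('depth_0001.mat')['depth']
```

`loadmat` returns a dict of all variables in the file, so you can also use it to inspect the sample `.mat` files the repo references and copy their exact key names.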
-
I read the paper (MuTr: Multi-Stage Transformer for Hand Pose Estimation from Full-Scene Depth Image) and got this code, but I found it difficult to apply the code to the NYU/ICVL datasets. May I ask if ther…
-
tensorrt 6.0.18, ubuntu 18.04, python 3.7
I want to convert this model (https://github.com/shariqfarooq123/AdaBins):
```python
import torch
from torch2trt import torch2trt
from models import UnetAdaptiveBins
imp…
```
-
Hello, @poier , George.
Thanks for sharing.
I wonder whether this model can be applied to a 2D hand image.
For example, my input is a hand image from a normal RGB camera, and the expected output is a 3D de…