-
Hello,
First of all, I would like to say thank you for your work.
I would like to evaluate the MiDaS v2.1 model on the NYU Depth v2 dataset.
Based on the link below, the evaluation results using the…
-
The proposed ACMNet is great. I have a question: can the proposed method generalize across different datasets, e.g. NYU v2 and KITTI? Do we need to retrain the model when testing? And rece…
-
Thank you for releasing the code of this great work. Can you tell me how you obtained your NYU training data? I don't think it's from the official website, since there are only 1449 labeled RGBD pai…
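For context on the 1449 labeled pairs mentioned above: the official `nyu_depth_v2_labeled.mat` is a MATLAB v7.3 file, which is HDF5 under the hood and can be read with `h5py`. A minimal sketch, assuming the `images` and `depths` dataset names used by the official toolbox (verify against your own download); a tiny synthetic stand-in file is created so the sketch runs without the multi-gigabyte download:

```python
import h5py
import numpy as np

def load_nyu_labeled(path):
    """Load RGB images and depth maps from an NYU-style HDF5/.mat file.

    Dataset names 'images' and 'depths' are assumptions based on the
    official NYU Depth v2 toolbox; check them with `list(f.keys())`.
    """
    with h5py.File(path, "r") as f:
        images = np.array(f["images"])  # shape (N, 3, W, H) in the official file
        depths = np.array(f["depths"])  # shape (N, W, H)
    return images, depths

# Synthetic stand-in with the same layout, so this runs end to end.
with h5py.File("toy_labeled.mat", "w") as f:
    f.create_dataset("images", data=np.zeros((2, 3, 640, 480), dtype=np.uint8))
    f.create_dataset("depths", data=np.zeros((2, 640, 480), dtype=np.float32))

imgs, deps = load_nyu_labeled("toy_labeled.mat")
print(imgs.shape, deps.shape)  # (2, 3, 640, 480) (2, 640, 480)
```

Note the channel-first, width-before-height axis order: MATLAB stores arrays column-major, so axes appear reversed when read through `h5py`, and you typically transpose before feeding the data to a model.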
-
Hi:
I'm trying to run inference on images captured from my own camera. However, I'm a little confused about cam_pose and vox_origin, which are inputs when using the NYUv2 dataset.
I t…
-
Link to another project: **DPT (Dense Prediction Transformers)** - a state-of-the-art semantic segmentation and monocular depth estimation network
* Top-1 accuracy on Pascal-Context Semantic segmenta…
-
Hi,
I tried to implement exactly what you have done: I downloaded the NYU dataset files bathrooms_part1.zip and nyu_depth_v2_labeled.mat, then gave both to make_dataset.py, and I ha…
-
Hello;
Thanks for the wonderful paper and open-source project.
I noticed that not all files in NYU Depth v2 (from sync.zip) were included in the training file list "nyudepthv2_train_files_wit…
-
Hello! Thanks for sharing your code! I have run it successfully. But when I ran sc_depth v2, I couldn't reproduce the results in the paper; there is a gap. I also ran the provided mo…
-
Excuse me, I have downloaded your repo, but there is no guide, so I don't know how to use it: which file should I run first, and where should I put the NYU dataset?