-
You write that "the depth boundary error is currently different from the paper". I see that the huge dbe_com can be ignored; however, there are noticeable differences in some other metrics as well. …
-
Hi guys,
How long does it take to pretrain the model on ImageNet?
Hope to receive your answer. Thank you.
-
In the paper "Clockwork Convnets for Video Semantic Segmentation", the NYUDv2 dataset [6] collects short RGB-D clips and includes a segmentation benchmark with high-quality but temporally sparse pix…
-
Thank you for releasing this code.
As other issues have asked, I know this can be trained on NYU and KITTI; however, when applied to a different scene, can this algorithm still perform well?
…
-
Hi, in the paper, you mentioned there are 464 different indoor scenes and 249 of them are used for training. Could you let me know where I can get the training list of scenes? Also, could you clarify …
-
The website https://goo.gl/hcUFMy is inaccessible.
-
Hi,
There are two image sources for your dataset: Flickr and NYU Depth. However, I can't find the depth information for those images in NYU Depth. Am I missing anything? Thank you!
Meng-Jiun
-
Hi,
You are using 1389 NYU Depth images, but there are 2778 images in the downloaded folder 'nyu'.
Could you please explain why the folder contains exactly twice as many images, and what this doubled quantity means?
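(A common reason for exactly twice as many files is that each sample ships as an RGB/depth pair. A minimal sketch to check that hypothesis locally, assuming a suffix-based naming convention like `0001_rgb.png` / `0001_depth.png`; the suffixes and function name are purely hypothetical, not the repo's actual layout:)

```python
# Hypothetical sketch: check whether a dataset folder's files split evenly
# into two modalities (e.g. one RGB image plus one depth map per sample).
# The '_rgb' / '_depth' suffixes are illustrative assumptions only.
from collections import Counter
from pathlib import Path

def count_modalities(filenames):
    """Count files per modality suffix, e.g. '0001_rgb.png' -> 'rgb'."""
    counts = Counter()
    for name in filenames:
        stem = Path(name).stem              # '0001_rgb'
        modality = stem.rsplit("_", 1)[-1]  # 'rgb'
        counts[modality] += 1
    return counts

# Synthetic example: 3 samples, each with an RGB and a depth file,
# so 6 files total split evenly into two modalities.
files = ["0001_rgb.png", "0001_depth.png",
         "0002_rgb.png", "0002_depth.png",
         "0003_rgb.png", "0003_depth.png"]
print(count_modalities(files))  # Counter({'rgb': 3, 'depth': 3})
```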
Thank you!
-
Thanks for the great work.
I am wondering how to run inference on an arbitrary image or video.
Thanks in advance.
Regards
-
In the dataset_manager.py file you normalize the depth maps differently for the PBRS and NYU-v2 datasets.
More precisely, for the PBRS dataset you divide the depth maps by 65535, whereas …