-
Hi! I wonder if you could share the dataset you used to train the SOTA NYUDv2 model, which you referred to as "20k unlabeled images" (but they have to be labeled, since you need depth supervision during traini…
-
I am trying to run this piece of code to get an inference sample on Google Colab:
```
!cd /content/VNL_Monocular_Depth_Prediction && python ./tools/test_any_images.py \
--dataroot /content/VNL_Monocu…
```
-
is script have?
Thank you
-
I appreciate your great work. You noted that the depth values in the NYUDv2 dataset should be normalized. I found that the maximum pixel value of a raw depth image is very large (e.g., 19576 in 000003.png in…
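A minimal sketch of one common NYUDv2 convention, assuming the raw 16-bit depth PNGs store millimeters (so a pixel value of 19576 would be ~19.6 m before clipping). The scale factor and depth cap below are assumptions, not values confirmed by the repository:

```python
import numpy as np

# Assumed convention (not confirmed by the repo): raw uint16 depth is in
# millimeters, and indoor depths are capped near 10 m before normalizing.
DEPTH_SCALE = 1000.0  # assumed: raw units are millimeters
MAX_DEPTH_M = 10.0    # assumed: clip depths to 10 m for indoor scenes

def normalize_depth(raw: np.ndarray) -> np.ndarray:
    """Convert a raw uint16 depth map to meters, then scale to [0, 1]."""
    depth_m = raw.astype(np.float32) / DEPTH_SCALE  # mm -> m
    depth_m = np.clip(depth_m, 0.0, MAX_DEPTH_M)    # drop sensor outliers
    return depth_m / MAX_DEPTH_M                    # normalize to [0, 1]
```

Under this assumption, a raw value of 19576 (~19.6 m) is a sensor outlier and simply clips to 1.0 after normalization.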
-
Hi,
Thank you for your great work!
Did you use multigrid, as in the code, to get the 52.4 mIoU on the NYUD dataset?
-
Hi there, thank you for open-sourcing your amazing work.
I trained the ResNet-50 single-task baseline for the segmentation task using the config file ["configs/nyud/resnet50/semseg.yml"](https://github.com/Sim…
-
Could you please let me know the hyperparameters used to train the HRNet-48 model from your paper (both for the 45.7% mIoU and the ~49% mIoU scores)? I have tried really hard to train HRNet-48 on single t…
-
Hello,
Thanks a lot for sharing the code. Could you provide the scripts or steps needed to prepare the datasets for training (the .lst and HHA files)?
This would make it easier to reproduce the results. Thank you in…
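A hedged sketch of the indexing step only: many NYUDv2 pipelines list training samples in a .lst file with one "image_path depth_path" pair per line. The `images/` and `depths/` directory layout here is an assumption, not this repo's actual structure, and HHA encoding is a separate preprocessing step not shown:

```python
from pathlib import Path

# Assumed layout (not the repo's confirmed structure):
#   root/images/*.png  -- RGB frames
#   root/depths/*.png  -- depth maps with matching filenames
def write_lst(root: Path, out_path: Path) -> int:
    """Write an index of image/depth pairs; return the number of pairs."""
    lines = []
    for img in sorted((root / "images").glob("*.png")):
        depth = root / "depths" / img.name
        if depth.exists():                  # skip frames without depth
            lines.append(f"{img} {depth}")
    out_path.write_text("\n".join(lines) + "\n")
    return len(lines)
```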
-
1. Hi, thank you for sharing the code. I tried to run the training code, and apparently there are only train and test annotations; val_annotations.json is missing. Can you share the file? Or …
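One possible workaround while the file is missing, assuming the annotation files are JSON lists of per-image records (the file names and structure are assumptions, not confirmed by the repo): hold out a fraction of the training annotations as a validation split:

```python
import json
import random

# Hedged sketch: writes only the validation file; the original training
# annotations are left untouched. Record format is an assumption.
def split_annotations(train_path, val_path, val_fraction=0.1, seed=0):
    """Write a held-out validation file; return (n_train_kept, n_val)."""
    with open(train_path) as f:
        records = json.load(f)
    random.Random(seed).shuffle(records)    # deterministic shuffle
    n_val = int(len(records) * val_fraction)
    with open(val_path, "w") as f:
        json.dump(records[:n_val], f)
    return len(records) - n_val, n_val
```

If you later retrain with this split, remember to also exclude the held-out records from the training set, otherwise the validation scores will be optimistic.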