-
Hi,
Can you also publish the code for preprocessing the SUNRGB dataset? I have trained your model on SUNRGB and now I want to test the model on my own images. Thanks.
-
Hello, thank you for sharing your code,
I ran the training and test code according to the process. During training, lab_s is around 0.5. I don't use the KITTI GT during training, so th…
-
ksnzh updated 5 years ago
-
```
Traceback (most recent call last):
  File "train.py", line 286, in
    x_hat, tran_hat, atp_hat, dehaze21 = netG(input)
  File "/home/gavin/.local/lib/python3.5/site-packages/torch/nn/modules/mod…
```
-
Hi, I notice that the image and label are loaded using the NTUDepth function; can you show the data format,
since I want to use my own data to train?
And can you share the script to generate the l…
-
When using the `also_save_raw_predictions = True` option to output the inferred semantic segmentation maps, the resulting maps have different names than the original images. How can I fix that?…
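One possible workaround until the naming is fixed upstream, assuming the saved predictions and the input images sort into the same order (the helper name `rename_predictions` and the `_pred` suffix are made up for illustration):

```python
import os
import shutil

def rename_predictions(pred_dir, image_dir, suffix="_pred"):
    """Hypothetical helper: rename each saved prediction map after its
    source image (plus a suffix). Assumes both directories contain the
    same number of files and sort into matching order."""
    preds = sorted(os.listdir(pred_dir))
    images = sorted(os.listdir(image_dir))
    for pred, img in zip(preds, images):
        stem, _ = os.path.splitext(img)       # source image name
        _, pred_ext = os.path.splitext(pred)  # keep the prediction's extension
        shutil.move(os.path.join(pred_dir, pred),
                    os.path.join(pred_dir, stem + suffix + pred_ext))
```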
-
How do I use the code in `generate_blurred_dataset.m`?
-
Please consider adding the `__version__` attribute to the base module, per PEP 396:
https://www.python.org/dev/peps/pep-0396/
I do not see the attribute:

```
>>> dir(open3d)
['Always', 'ColorMapOptmiz…
```
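For context, PEP 396 suggests a plain module-level string; a minimal sketch (the package name `mypkg` is a placeholder, not part of Open3D):

```python
import types

# In a real package this would simply be a line in __init__.py:
#     __version__ = "0.1.0"
# Here we simulate it on a stand-in module object.
pkg = types.ModuleType("mypkg")
pkg.__version__ = "0.1.0"

# Consumers can then introspect the version at runtime:
assert hasattr(pkg, "__version__")
print(pkg.__version__)  # -> 0.1.0
```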
-
Hi,
In your monodepth, there are only two pre-trained models, kitti / cityscapes.
I want to use monodepth on indoor images or near-range vision.
So I would like to train on my own dataset. However, I saw `training o…
-
Hello,
The original dataset `nyu_depth_v2_labeled.mat` contains 26 scene categories. Your `nyu_class_10_db.h5` dataset has only 10 scene categories. I want to know how you extracted these 10 scene catego…
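One common way such a subset is built, sketched here only as a guess (the example scene labels are made up; in `nyu_depth_v2_labeled.mat` each scene label looks like `kitchen_0004`, and the category is the part before the last underscore):

```python
from collections import Counter

# Hypothetical scene labels in the style of nyu_depth_v2_labeled.mat.
scenes = ["kitchen_0004", "kitchen_0011", "bedroom_0002",
          "bedroom_0005", "office_0001", "bathroom_0003"]

# Strip the trailing index to get the scene category, then keep the
# (up to) 10 most frequent categories.
categories = [s.rsplit("_", 1)[0] for s in scenes]
top10 = [c for c, _ in Counter(categories).most_common(10)]
print(top10)
```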