-
Hi, thank you for your work. Can you tell me which config you used for training the network on NYUDv2 (batch size, etc.)? I would like to know the exact transformation that you have done to the data d…
-
Hi, thank you for putting this code together.
I am testing it out and trying to understand if I am doing something wrong. Here are the results I've gotten, which are quite flickery compared to the or…
-
@laughtervv Hello, I am trying to run the DepthAwareCNN code and I cannot find dataset/lists/nyuv2/train.lst and dataset/lists/nyuv2/val.lst. Could you tell me where to find them? Thanks!
sj-li updated 2 years ago
-
Hi,
I am trying to run DiverseDepth using your repo. I followed the instructions given in the Readme file. The test data on which I am trying to run the algorithm is a simple folder, which contain…
-
Why compute the 3D points in this way instead of using the camera parameters and depth data?
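For context, the alternative the question refers to is standard pinhole back-projection: lifting each depth pixel to a 3D point with the camera intrinsics. A minimal NumPy sketch, with intrinsic values that are only illustrative (NYUDv2-like, not taken from the repo):

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth map (H, W) into 3D points (H, W, 3)
    using pinhole-camera intrinsics: X = (u - cx) * Z / fx, etc."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grids, each (H, W)
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

# Illustrative intrinsics (assumed values, roughly NYUDv2's Kinect calibration)
depth = np.ones((480, 640), dtype=np.float32)
pts = backproject(depth, fx=518.9, fy=519.5, cx=325.6, cy=253.7)
print(pts.shape)  # (480, 640, 3)
```

The z-component of each output point is the input depth itself; only x and y depend on the intrinsics.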
-
Thank you for your awesome work!!!
I would like clarification on a doubt I have: during the normalization procedure, are you dividing the values of the depth map by 255? Is this the only normalizati…
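For reference, the normalization the question asks about is the common one applied when depth is stored as an 8-bit image: dividing by 255 to map values into [0, 1]. Whether the repo applies anything further (e.g. mean/std normalization or metric rescaling) is exactly what is being asked. A minimal sketch under that assumption:

```python
import numpy as np

# Hypothetical 8-bit depth map, as if loaded from a PNG (values 0..255).
depth_raw = np.random.randint(0, 256, size=(480, 640)).astype(np.float32)

# The normalization in question: scale to [0, 1] by dividing by 255.
depth_norm = depth_raw / 255.0
print(depth_norm.min() >= 0.0 and depth_norm.max() <= 1.0)  # True
```

Note that this discards the metric scale of the depth; recovering metres would require knowing the dataset's encoding (e.g. millimetres in a 16-bit PNG), which this sketch does not assume.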
-
How to do multi-input, single-output segmentation?
Could you please provide some suggestions on how to handle multiple inputs such as depth and RGB while, as much as possible, keeping the mmseg structure…
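One common approach (a plain NumPy sketch, not mmseg-specific API) is early fusion: concatenate depth as a fourth channel onto the RGB tensor, so only the backbone's first convolution needs to change (in mmseg, typically via the backbone's `in_channels` setting in the config) while the rest of the pipeline stays intact:

```python
import numpy as np

# Hypothetical inputs in channel-first layout: RGB (3, H, W), depth (1, H, W).
rgb = np.random.rand(3, 64, 64).astype(np.float32)
depth = np.random.rand(1, 64, 64).astype(np.float32)

# Early fusion: stack along the channel axis into a single 4-channel input.
# The segmentation backbone then takes in_channels=4 instead of 3.
x = np.concatenate([rgb, depth], axis=0)
print(x.shape)  # (4, 64, 64)
```

Alternatives with more structural change include a second encoder branch for depth with mid-level feature fusion, which preserves pretrained 3-channel weights at the cost of a custom backbone.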
-
| | link |
|----|---|
|paper| [Fully Convolutional Networks for Semantic Segmentation](https://openaccess.thecvf.com/content_cvpr_2015/html/Long_Fully_Convolutional_Networks_2015_CVPR_paper.html) |
|c…
-
Thank you for a great project. Can it be used for other visual tasks, such as video matting or stable video optical flow estimation?
-
http://wangkaiwei.org/file/NYUDv2.zip Is there a problem with the server for this site? Why can't it be downloaded? Can you provide a downloadable zip?