-
Hi, thank you for your research. I want to run your code on my own computer. My OS is Ubuntu 14.04 and my camera is a Kinect v1. Does your code support this camera, or must it be used with the Creative Senz3D?
-
1. How can we get the depth of a scene? And can we get a depth map with real distances rather than fractional values?
2. How does the RESIDE dataset generate its outdoor and indoor images?
@MintcakeDotCom
-
During training, how many images did you use as validation set?
Since I'm still coding the training framework, I'm working with the plain NYU Depth Dataset (795 images in total, 639 for training an…
-
Hi,
I want to use your code to run predictions on my own data.
It seems this code only contains the network structure.
Is it an unfinished version?
And where is main.py?
Pan
-
Hi,
I noticed that in your paper, Table 2, your WKDR is less than both WKDR_eq and WKDR_neq. But I think WKDR should be a weighted average of the two. How did you compute the WKDR?
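The basis of the question can be checked with a few lines. A weighted average of two error rates always lies between them, whatever the weights; the helper below is a hypothetical sketch (the weighting by pair counts is an assumption, not taken from the paper):

```python
# Hypothetical sketch: a weighted average of WKDR_eq and WKDR_neq must lie
# between the two values, so it cannot be below both of them.
def weighted_wkdr(wkdr_eq, wkdr_neq, n_eq, n_neq):
    """Weighted average of the two disagreement rates by pair counts (assumed weighting)."""
    return (wkdr_eq * n_eq + wkdr_neq * n_neq) / (n_eq + n_neq)

# Illustrative numbers only, not values from the paper.
w = weighted_wkdr(0.30, 0.40, n_eq=100, n_neq=300)
print(w)  # 0.375
assert min(0.30, 0.40) <= w <= max(0.30, 0.40)
```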
-
I am trying to implement this paper in Caffe.
I am using the NYU2 raw depth dataset, about 12K~13K sampled depth images.
For your training, did you use filtered images (cross bilateral filter or colorization) or not (raw de…
-
Can I estimate the real distance after prediction? Can I get some depth information besides color?
Thank you!
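If the network outputs depth normalized to [0, 1], real distances can in principle be recovered by undoing the normalization. This is a minimal sketch under assumed values: the 0–10 m range is a placeholder, not the normalization the authors actually used.

```python
import numpy as np

# Assumed normalization range (placeholder, not from the repository).
MIN_DEPTH_M, MAX_DEPTH_M = 0.0, 10.0

def to_metres(pred):
    """Map a [0, 1] depth prediction back to metres under the assumed range."""
    pred = np.clip(pred, 0.0, 1.0)
    return MIN_DEPTH_M + pred * (MAX_DEPTH_M - MIN_DEPTH_M)

print(to_metres(np.array([0.0, 0.5, 1.0])).tolist())  # [0.0, 5.0, 10.0]
```

Whether this applies depends on how the training targets were scaled; some models predict log-depth or inverse depth instead, in which case the inverse transform differs.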
-
When I run loaddataset.py, an error occurs. Please help!
E:/python_work/small_norb-master/read_smallnorb/loaddataset.py:29: DeprecationWarning: The binary mode of fromstring is deprecated…
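That DeprecationWarning comes from NumPy: the binary mode of `np.fromstring` is deprecated, and `np.frombuffer` is the drop-in replacement for bytes input. A minimal sketch (the byte string and dtype here are illustrative, not from loaddataset.py):

```python
import numpy as np

# Example little-endian uint16 payload standing in for the file contents.
raw = b"\x01\x00\x02\x00\x03\x00"

# Old, deprecated call: np.fromstring(raw, dtype="<u2")
values = np.frombuffer(raw, dtype="<u2")
print(values.tolist())  # [1, 2, 3]
```

Note that `frombuffer` returns a read-only view of the buffer; call `.copy()` on the result if the array needs to be writable.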
-
Hi, I am trying to recreate your results on the NYU_V2 dataset, but the Huber loss doesn't converge during training. I guess I need data augmentation, but I am unsure how to do it. Currently I have 12k …
E-MHJ updated 6 years ago
-
Hi, thank you for releasing the code. But when I train on the NYU dataset and run the script Train_FuseNet.py, I get the error "RuntimeError: Need input.size[1] == 3 but got 1 instead." as follo…
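This error usually means a single-channel (grayscale) batch was fed to a layer expecting 3-channel RGB input. A common workaround is to repeat the channel dimension three times before the forward pass; the sketch below uses NumPy with an assumed N x C x H x W layout (the batch shape is illustrative, not from Train_FuseNet.py):

```python
import numpy as np

# Grayscale batch: N x 1 x H x W, where the network expects N x 3 x H x W.
batch = np.random.rand(4, 1, 240, 320).astype(np.float32)

# Repeat the single channel three times along the channel axis.
rgb_batch = np.repeat(batch, 3, axis=1)
print(rgb_batch.shape)  # (4, 3, 240, 320)
```

The equivalent for a PyTorch tensor would be `x.repeat(1, 3, 1, 1)`, or fixing the data loader to read images as RGB in the first place.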