Hi, thank you very much for your interest!
I remember having the same questions as you, until I found that Saurabh Gupta had made all the pre-processed data available here: www.cs.berkeley.edu/~sgupta/eccv14/eccv14-data.tgz.
In general, all the resources needed can be obtained from this repository: https://github.com/s-gupta/rcnn-depth
The pre-processing uses the 'instances', with proper handling of the '0' areas. They also crop the outer region of the images, which is why the resolution is smaller. The boundaries are recovered from the UCMs as ucm2(3:2:end, 3:2:end).
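For reference, here is a minimal Python sketch of those two steps. The file paths and the `'ucm2'` / `'instances'` variable names are assumptions on my side; please check them against the actual eccv14 data package, and take the exact crop margins from the eccv14 preprocessing code.

```python
import numpy as np
from scipy.io import loadmat

# Hypothetical file names -- substitute the real files from the
# eccv14 data package (www.cs.berkeley.edu/~sgupta/eccv14/eccv14-data.tgz).
ucm2 = loadmat('ucm2/img_0001.mat')['ucm2']            # super-sampled UCM, roughly (2H+1) x (2W+1)
inst = loadmat('instances/img_0001.mat')['instances']  # instance labels, 0 = unlabeled

# MATLAB's ucm2(3:2:end, 3:2:end) in 0-indexed NumPy: start at index 2, step 2.
boundaries = ucm2[2::2, 2::2]

# Treat the '0' areas as void so they are ignored during evaluation.
valid_mask = inst > 0

print(boundaries.shape, inst.shape, valid_mask.mean())
```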
I hope this answers all of your concerns.
Your reply was so quick, and your answers solve my questions exactly. Thank you very much!
Glad it helped!
The README was updated accordingly. Thanks for pointing this issue out.
Hi, I'm new to this area, and I have a few questions about how to benchmark on the NYUD-v2 dataset.