Closed racso10 closed 3 years ago
@racso10
Hi, thanks for your interest in our work. We replaced Origami with Dishes because the latter has a larger baseline and richer textures for evaluation.
In fact, we think the HCI dataset can be divided into training and test sets in any way you like when the task is spatial or angular super-resolution. The official test set has no ground-truth disparity (which matters for light field depth estimation), but our task does not require disparity maps.
I get it, thank you!
Could you share the old HCI dataset and the Inria DLFD dataset?
@JXXabc
Hi, the Inria dataset can be found at http://clim.inria.fr/Datasets/InriaSynLF/index.html.
The old HCI dataset seems to be unavailable now. You could try asking the authors of the paper titled "Datasets and benchmarks for densely sampled 4D light fields".
Thank you very much.
Congratulations on your article being accepted by AAAI 2020. I noticed that the dataset used in this article is the new HCI dataset, but you didn't follow its official training/test split strictly (e.g., Origami and Dishes). Could you explain why you did that?
Thank you!