yuxiaoguo / VVNet

Implementation of View-volume network for semantic scene completion from a single depth image
MIT License

some questions about dataset #7

Closed NguyenTriTrinh closed 4 years ago

NguyenTriTrinh commented 4 years ago

Hi, thanks for your work! I downloaded the datasets from your link and got two zip files, 'depthbin_eva.zip' and 'SUNCGtrain.zip'. After unzipping 'SUNCGtrain.zip' I got the SUNCG training data (it contains seven sub-folders: 'SUNCGtrain_1_500', 'SUNCGtrain_501_1000', ..., 'SUNCG_5001_7000'). Are all of them used for training? They generate more than 100G of data. The depthbin folder has 'SUNCGtest_49700_49884', and I see a file with the same name in the eval folder, so are they the same? I can't clearly tell which one is the test set and which is the eval set. Hope for your reply, thank you!

yuxiaoguo commented 4 years ago

I think you are confused about the split of train/val/test samples.

All samples in SUNCGTrain.zip are used for training. In our current implementation, it generates about 170G of data. Most of the storage cost comes from the normal maps used in our work. A trade-off is to generate the normal maps at run time during training (re-implementing the normal generation from numpy in tensorflow); you may need to modify the writing/reading scripts accordingly. As for val/test, they are actually the same.
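The repo's actual normal-generation code isn't shown in this thread, but as a rough illustration of the kind of computation being discussed, here is a minimal numpy sketch that derives per-pixel surface normals from a depth map by back-projecting pixels with pinhole intrinsics and taking the cross product of the image-axis tangents. The intrinsics (`fx`, `fy`, `cx`, `cy`) are placeholder values, not the ones VVNet uses; porting this to tensorflow would mean swapping the numpy ops for their `tf` equivalents.

```python
import numpy as np

def depth_to_normals(depth, fx=518.8, fy=519.5, cx=320.0, cy=240.0):
    """Estimate unit surface normals from a (H, W) depth map.

    fx/fy/cx/cy are hypothetical pinhole intrinsics for illustration only.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))

    # Back-project each pixel to a 3D camera-space point
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1)  # (H, W, 3)

    # Finite differences along the image axes approximate surface tangents
    du = np.gradient(pts, axis=1)
    dv = np.gradient(pts, axis=0)

    # Cross product of the tangents gives the (unnormalized) normal
    n = np.cross(du, dv)
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.clip(norm, 1e-8, None)
```

For a fronto-parallel plane (constant depth), this returns normals pointing along the camera's optical axis, which is a quick sanity check when validating a tensorflow re-implementation against the numpy version.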

NguyenTriTrinh commented 4 years ago

Thank you for your detailed response, I get it now.