twtygqyy / pytorch-vdsr

VDSR (CVPR2016) pytorch implementation
MIT License

why does the implementation not use data normalization / zero-center? #12

Open pzz2011 opened 6 years ago

twtygqyy commented 6 years ago

@pzz2011 The training patches are normalized to the [0, 1] range in https://github.com/twtygqyy/pytorch-vdsr/blob/master/data/generate_train.m

pzz2011 commented 6 years ago

@twtygqyy I didn't find anything about 0-1 normalization in the code.... :-)

pzz2011 commented 6 years ago

@twtygqyy Another question: the generated .h5 file for the 291 PNGs (13 MB of images) is about 14 GB.

If I try to generate the .h5 file from 800 PNGs (1000x800), it runs out of memory. In fact, even with 300 PNGs (1000x800), generate_train.m needs more than 128 GB of memory and hits OOM again, and it produces an .h5 file of over 50 GB on disk.

Any advice? Thanks.

twtygqyy commented 6 years ago

@pzz2011 `image = im2double(image(:, :, 1));` does the normalization. Regarding the OOM issue, it is better to split the training set into multiple .h5 files and modify the dataloader to load them one by one.
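The multi-file approach could be sketched as a PyTorch Dataset that spans several .h5 shards and opens each file lazily, so no single huge file has to fit in memory. This is a hedged sketch, not the repo's actual dataloader; the "data"/"label" dataset names and the shard file list are assumptions:

```python
import bisect

import h5py
import torch
from torch.utils.data import Dataset

class MultiH5Dataset(Dataset):
    """Treat several .h5 shards as one dataset (illustrative sketch)."""

    def __init__(self, h5_paths):
        self.h5_paths = list(h5_paths)
        # Cumulative patch counts, so a global index maps to (shard, local index).
        self.cumulative = []
        total = 0
        for path in self.h5_paths:
            with h5py.File(path, "r") as f:
                total += f["data"].shape[0]  # assumed dataset key
            self.cumulative.append(total)
        self._files = [None] * len(self.h5_paths)  # opened on first access

    def __len__(self):
        return self.cumulative[-1] if self.cumulative else 0

    def __getitem__(self, idx):
        # Find which shard holds this global index.
        file_idx = bisect.bisect_right(self.cumulative, idx)
        local = idx - (self.cumulative[file_idx - 1] if file_idx else 0)
        if self._files[file_idx] is None:
            self._files[file_idx] = h5py.File(self.h5_paths[file_idx], "r")
        f = self._files[file_idx]
        x = torch.from_numpy(f["data"][local]).float()
        y = torch.from_numpy(f["label"][local]).float()
        return x, y
```

With this, generate_train.m only needs to produce shards of manageable size, and the training loop uses the dataset as usual, e.g. `DataLoader(MultiH5Dataset(paths), batch_size=128, shuffle=True)`.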