Open pzz2011 opened 6 years ago
@twtygqyy I didn't find any info about 0-1 normalization in the code... :-)
@twtygqyy Another question: the .h5 file generated from the 291 PNGs (13 MB total) is about 14 GB.
If I want to use 800 PNGs (1000x800) to generate the .h5 file, it causes an OOM. In fact, even with 300 PNGs (1000x800), generate_train.m requires more than 128 GB of memory, which causes an OOM again, and it leaves a >50 GB .h5 file on my disk.
any advice? thanks.
@pzz2011 image = im2double(image(:, :, 1)); will do the trick. Regarding the OOM issue, it is better to split the training set into multiple .h5 files and modify the dataloader to load them one by one.
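The "split into multiple files, load one by one" idea above can be sketched as follows. This is a minimal, self-contained illustration using plain pickle files; the ShardedDataset class and file names are hypothetical, and a real setup for this repo would instead use h5py to read the .h5 shards and subclass torch.utils.data.Dataset. The key point is that only one shard is held in memory at a time.

```python
import os
import pickle
import tempfile

class ShardedDataset:
    """Hypothetical sketch: index across several shard files while
    keeping at most one shard loaded in memory at any time."""

    def __init__(self, shard_paths):
        self.shard_paths = shard_paths
        # Record each shard's length once, up front.
        self.lengths = []
        for path in shard_paths:
            with open(path, "rb") as f:
                self.lengths.append(len(pickle.load(f)))
        self._cached_shard = None   # index of the shard currently in memory
        self._cached_data = None

    def __len__(self):
        return sum(self.lengths)

    def __getitem__(self, idx):
        # Find which shard the global index falls into.
        shard = 0
        while idx >= self.lengths[shard]:
            idx -= self.lengths[shard]
            shard += 1
        # Lazily (re)load only when we cross into a different shard.
        if shard != self._cached_shard:
            with open(self.shard_paths[shard], "rb") as f:
                self._cached_data = pickle.load(f)
            self._cached_shard = shard
        return self._cached_data[idx]

# Write three small shards to disk, then read them back one by one.
tmpdir = tempfile.mkdtemp()
paths = []
for i in range(3):
    path = os.path.join(tmpdir, f"train_{i}.pkl")
    with open(path, "wb") as f:
        pickle.dump([i * 10 + j for j in range(4)], f)
    paths.append(path)

ds = ShardedDataset(paths)
print(len(ds))                # 12
print(ds[0], ds[5], ds[11])   # 0 11 23
```

With h5py the same pattern applies: store each shard's patches in its own .h5 file, cache one open file at a time, and let the PyTorch DataLoader index across the shards.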
@pzz2011 the train patches are 0-1 normalized in https://github.com/twtygqyy/pytorch-vdsr/blob/master/data/generate_train.m
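For reference, MATLAB's im2double maps uint8 pixel values into [0, 1] by dividing by 255. A pure-Python sketch of the same normalization (the function name to_unit_range is made up for illustration; in a NumPy pipeline this would just be image.astype(np.float64) / 255.0):

```python
def to_unit_range(pixels):
    """Normalize 8-bit pixel values to the 0-1 range, as im2double does
    for uint8 input (value / 255)."""
    return [p / 255.0 for p in pixels]

print(to_unit_range([0, 128, 255]))  # [0.0, 0.5019..., 1.0]
```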