star4s opened 7 years ago
First, you can try a smaller batch size (like 16 or 32).
Or
just train on small input images (e.g., patches cropped from the big images).
You can still use big images as input at test time.
Note that using patches of images is not the same as resizing the images down: the patches keep the original scale, so the detailed information is preserved.
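The patch-based idea above can be sketched as follows (a minimal example; the patch size, patch count, and file name are illustrative placeholders, not from this repo):

```matlab
% Crop random sub-images (patches) from a large training image.
% Training on patches keeps the original pixel scale, unlike imresize,
% so fine details are preserved.
img = imread('train_image.png');   % e.g. a 1120 x 1450 image (hypothetical file)
patchSize = 96;                    % assumed patch size
numPatches = 16;
patches = zeros(patchSize, patchSize, size(img,3), numPatches, 'like', img);
for k = 1:numPatches
  r = randi(size(img,1) - patchSize + 1);
  c = randi(size(img,2) - patchSize + 1);
  patches(:,:,:,k) = img(r:r+patchSize-1, c:c+patchSize-1, :);
end
% At test time, the full-resolution image can still be fed to the network.
```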
Thank you for your Answer. I appreciate for your kindness everytime. I will try soon.
Hello. Long time no see. I tried using a 1080 Ti (11 GB) for super resolution. My images are 1120 × 1450, and there are 457 of them. After switching to the 1080 Ti (11 GB), I got the same error message:

```
vl::impl::nnconv_forward_blas: getWorkspace [out of memory error]

dagnn.Conv/forward (line 11)
```

```matlab
methods
  function outputs = forward(obj, inputs, params)
    if ~obj.hasBias, params{2} = [] ; end
    outputs{1} = vl_nnconv(...
      inputs{1}, params{1}, params{2}, ...
      'pad', obj.pad, ...
      'stride', obj.stride, ...
      'dilate', obj.dilate, ...
      obj.opts{:}) ;
  end
```

In my case, a 654 × 664 image works fine for super resolution. Last time, you suggested trying `net.conserveMemory = true;`. I already tried that, but it failed with the same error message. I think the reason is training on big data. I used the training options below.
```matlab
opts.train.batchSize = 128;
%opts.train.numSubBatches = 1 ;
opts.train.continue = true;
opts.train.gpus = 1;
opts.train.prefetch = false ;
%opts.train.sync = false ;
%opts.train.errorFunction = 'multiclass' ;
opts.train.expDir = '/home/zzd/super-resolution/data/SRnet-v1-ycbcr-128' ;
opts.train.learningRate = [1e-5*ones(1,3) 1e-6*ones(1,1)];
opts.train.weightDecay = 0.0005;
opts.train.numEpochs = numel(opts.train.learningRate) ;
opts.train.derOutputs = {'objective',1} ;
[opts, ~] = vl_argparse(opts.train, varargin) ;
```
If I use `opts.train.batchSize = 64;` or `32`, could I do super resolution on a big image (1120 × 1450)? In addition, I used 1,470,000 images for the training data.
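One thing worth noting: the batch size only affects memory during training, while at test time the memory needed scales with the image area and the network's channel width. A back-of-envelope estimate (the 64-channel width and 20-layer depth are assumed for illustration, not taken from the actual network):

```matlab
% Rough activation-memory estimate for a full-size 1120 x 1450 test image.
% Assumed: 64 feature channels per conv layer, single precision (4 bytes),
% 20 layers of stored activations -- illustrative numbers only.
H = 1120; W = 1450; C = 64; bytesPerVal = 4; numLayers = 20;
perLayerMB = H * W * C * bytesPerVal / 2^20;   % roughly 400 MB per layer
totalGB    = perLayerMB * numLayers / 2^10;    % several GB in total
fprintf('per-layer: %.0f MB, total: %.1f GB\n', perLayerMB, totalGB);
```

Under these assumptions the activations alone approach the 11 GB of a 1080 Ti, which is consistent with the `getWorkspace` out-of-memory error appearing only for large test images, independent of the training batch size.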