vlfeat / matconvnet

MatConvNet: CNNs for MATLAB

How to Fine tune on matconvnet #1077

Open linyukotw opened 6 years ago

linyukotw commented 6 years ago

Hello everyone,

I used the pre-trained VGG-19 model to train on my data and got good accuracy.

Now I want to fine-tune: freeze the weights of the VGG-19 layers, train on my dataset, and store the new weights.

But I cannot find any function for freezing weights in MATLAB.

In Keras you just set layer.trainable = False and that layer's weights stay fixed.

My question is: which function in MatConvNet will freeze weights and give me new VGG-19 weights for my dataset?
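For reference, MatConvNet's DagNN wrapper has a per-parameter learningRate multiplier that plays the role of Keras's trainable flag: setting it to 0 freezes that parameter. A minimal sketch, assuming the pre-trained SimpleNN model is converted to DagNN form and that 'fc8' (the canonical name after conversion) is the layer to keep trainable:

```matlab
% Each parameter in a dagnn.DagNN carries its own learningRate
% multiplier; a multiplier of 0 freezes that parameter during training.
net = load('imagenet-vgg-verydeep-19.mat');            % pre-trained SimpleNN model
net = dagnn.DagNN.fromSimpleNN(net, 'canonicalNames', true);

% Freeze every parameter...
for k = 1:numel(net.params)
    net.params(k).learningRate = 0;
end

% ...then re-enable learning only for the layer you want to fine-tune.
fcIdx = net.getLayerIndex('fc8');
for p = net.layers(fcIdx).paramIndexes
    net.params(p).learningRate = 1;
end
```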

Here is my code:

```matlab
rootFolder = 'D:\work';
categories = {'01','02','03','04','05','06','07','08','09','10', ...
              '11','12','13','14','15','16','17'};
imds = imageDatastore(fullfile(rootFolder, categories), ...
                      'LabelSource', 'foldernames');

trainNumFiles = 19787;
[trainDigitData, valDigitData] = splitEachLabel(imds, 0.9, 'randomize');

% Reuse the first 45 layers of VGG-19 and replace the classification head.
net = vgg19;
Layers2 = net.Layers;
for i = 1:45
    layers3(i,1) = Layers2(i,1);
end
layers3(46,1) = fullyConnectedLayer(17);
layers3(47,1) = softmaxLayer();
layers3(48,1) = classificationLayer();

miniBatchSize = 128;   % value assumed; not given in the original post
options = trainingOptions('sgdm', ...
    'InitialLearnRate', 0.001, ...
    'MaxEpochs', 30, ...
    'MiniBatchSize', miniBatchSize);

net = trainNetwork(trainDigitData, layers3, options);

predictedLabels = classify(net, valDigitData);
valLabels = valDigitData.Labels;

accuracy = sum(predictedLabels == valLabels)/numel(valLabels)
```
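Note that the code above uses MATLAB's Deep Learning Toolbox (trainNetwork), not MatConvNet itself. In that toolbox a transferred layer is frozen by zeroing its learn-rate factors before training; a minimal sketch against the layers3 array built above (the 45/48 layer indices follow the code in the question):

```matlab
% Freeze the transferred VGG-19 layers by zeroing their learn-rate
% factors; only the newly added head (layers 46-48) is then updated.
layers = layers3;
for i = 1:45
    if isprop(layers(i), 'WeightLearnRateFactor')   % only conv/fc layers have weights
        layers(i).WeightLearnRateFactor = 0;
        layers(i).BiasLearnRateFactor   = 0;
    end
end
net = trainNetwork(trainDigitData, layers, options);
```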

Thanks for your answer

debvratV commented 5 years ago

Hi,

I have a similar question. I have a model, let's say ABCnet, which converges after, say, 50 epochs. Now I want to change or add a couple of layers in that model. Ideally, for the layers that remain unchanged I'd want to reuse the same (trained) weights, so that training is faster, and only the new/modified layers should get a fresh weight initialisation.

How can I implement this? I'm not sure opts.continue = true would achieve it: it resumes from the last epoch's checkpoint, and if I add a new layer to the model the stored weights would no longer match.

Is opts.continue = false the only way to go?

Thanks,
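One way to get this behaviour without relying on opts.continue is to load the converged checkpoint once and copy its parameters into the modified network wherever the names match; new or renamed layers simply keep their fresh initialisation. A sketch in DagNN terms (the checkpoint file name and the buildModifiedABCnet constructor are hypothetical):

```matlab
% Transfer trained weights from the converged model into a modified
% architecture: parameters whose names match are copied over, while
% parameters of new layers keep their random initialisation.
old = load('ABCnet-epoch-50.mat');       % hypothetical checkpoint file
old = dagnn.DagNN.loadobj(old.net);

newNet = buildModifiedABCnet();          % hypothetical constructor for the new model

oldNames = {old.params.name};
for k = 1:numel(newNet.params)
    j = find(strcmp(newNet.params(k).name, oldNames), 1);
    if ~isempty(j)                       % parameter also exists in the old model
        newNet.params(k).value = old.params(j).value;
    end
end
% Now train newNet from epoch 1, so opts.continue is not needed.
```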