Here you go: train_classifier.m.zip.
To use the code, start by preparing an imdb structure containing the data you want to use (see: https://github.com/vlfeat/matconvnet/tree/master/examples/mnist), then add a field called imdb.images.features containing the features extracted from an intermediate layer (we use the second-before-last layer, i.e., end-2). Optionally, you can add a field imdb.images.augmented_features which contains features extracted from face images wearing eyeglasses.
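For concreteness, here is a minimal sketch of that layout (the field names follow the MatConvNet MNIST example; the image size, feature dimension, and split sizes below are placeholders, so substitute your own):
% A minimal sketch of the expected imdb layout; all sizes are placeholders.
n = 42 ;                                                 % number of images
imdb.images.data = zeros(224, 224, 3, n, 'single') ;     % the images themselves
imdb.images.labels = ones(1, n) ;                        % one class label per image
imdb.images.set = [ones(1, 36) 2*ones(1, 6)] ;           % 1 = train, 2 = val
imdb.images.features = zeros(1, 1, 4096, n, 'single') ;  % activations of layer end-2
% optional: features of the same faces wearing eyeglasses
imdb.images.augmented_features = zeros(1, 1, 4096, n, 'single') ;
imdb.meta.sets = {'train', 'val'} ;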
Good luck!
@mahmoods01 is there a chance you could elaborate a little more on how to "add a field called imdb.images.features containing the features extracted from an intermediate layer"? I've assembled the images that I would like to train the classifier on into directories and am able to set up the imdb, but I'm having trouble working out what that means.
Really appreciate all your help and work :)
@mahmoods01 (and @ronny3050 in case you managed to work it out), to give a little more context to that question: at this point, I'm fairly sure I understand the process for reading the imdb data in. I've based that on examples I found here and the one that was recommended above.
As a starting point, I would like to train a network on a tiny data set of 21 images per person. The images are in subfolders called zeke and joanne respectively, and my function to read the data into an imdb struct is as follows:
% --------------------------------------------------------------------
function imdb = getImdb()
% --------------------------------------------------------------------
% Initialize the imdb structure (image database).
% The sets, and the number of samples per subject in each set.
sets = {'train', 'val'} ;
numSamples = [18, 3] ;
% The names of the subject dirs.
subjects = {'zeke', 'joanne'} ;
num_subjects = numel(subjects) ;
dataDir = 'data/zeke-joanne/' ;
% Preallocate memory.
totalSamples = 42 ; % 2*21 (21 images per subject)
labels = zeros(totalSamples, 1) ;
set = ones(totalSamples, 1) ;
% Read in images and labels.
sample = 1 ;
for si = 1:num_subjects
    % Get all the images for the subject.
    subjectDir = fullfile(dataDir, subjects{si}) ;
    imageSet = dir(fullfile(subjectDir, '*.png')) ;
    % Put the subject's images away.
    for k = 1:numel(imageSet)
        F = fullfile(subjectDir, imageSet(k).name) ;
        images(:,:,:,sample) = imread(F) ;
        labels(sample) = si ;
        % The first numSamples(1) images of each subject go to train (1),
        % the remaining ones to val (2).
        set(sample) = 1 + (k > numSamples(1)) ;
        sample = sample + 1 ;
    end
end
% Show a couple of random example images.
figure(2) ;
montage(images(:,:,:,randperm(totalSamples, 2))) ;
title('Example images') ;
% Load in the net.
loaded = load('models/vgg143-recognition-nn.mat') ;
net = loaded.net ;
% add a field called imdb.images.features containing the features extracted
% from an intermediate layer (we use the second before last, i.e., end-2).
imdb.images.features = net.layers(end - 2) ;
% Store the results in the imdb struct.
imdb.images.data = images ;
imdb.images.labels = labels ;
imdb.images.set = set ;
imdb.meta.sets = sets ;
I'm running into trouble later with imdb.images.features, though. When I try to run the train_classifier.m file that was provided, I get an indexing error when the code tries to extract the features. I'm assuming that's because I'm doing the feature extraction wrong, but I would love to be wrong :)
Do either of you have any advice on getting this working? Thanks so much in advance for the help :)
I believe net.layers contains names of all the layers. Perhaps print that out and ensure that you're extracting features from the correct layer.
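For example, assuming the model is stored in MatConvNet's SimpleNN format (where net.layers is a cell array of structs), something like this lists them:
% Print each layer's index, name, and type (SimpleNN format).
for i = 1:numel(net.layers)
    fprintf('%d: %s (%s)\n', i, net.layers{i}.name, net.layers{i}.type) ;
end
If it's a DagNN model instead, the layer names are in {net.layers.name}.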
Hi, Zeke.
Assuming that you're using the OpenFace DNN, the following could help you extract the features:
function features = extract_features(opts, openface_net, data)
n = size(data, 4) ;
% Features are stored as 1 x 1 x features_dim x n singles.
features = single(zeros(1, 1, opts.features_dim, n)) ;
load(opts.align_info_file, 'align_info') ;
im_size = [size(data,1) size(data,2)] ;
ims = single(zeros(im_size(1), im_size(2), 3, opts.batch_size)) ;
for i_im = 1:opts.batch_size:n
    % Fill the current batch, scaling the pixel values to [0,1].
    for j_im = 1:opts.batch_size
        if i_im+j_im-1 > n
            % We've reached the (possibly partial) last batch.
            j_im = j_im - 1 ;
            ims = ims(:,:,:,1:j_im) ;
            break ;
        end
        ims(:,:,:,j_im) = data(:,:,:,i_im+j_im-1)/255. ;
    end
    % Align the faces, run the network, and read out the normalized
    % embedding layer.
    ims_aligned = openface_align(ims, align_info) ;
    openface_net.eval({'in_global', ims_aligned}) ;
    output = openface_net.getVar('L26_eucnormalize_out') ;
    output = permute(output.value, [1 3 2 4]) ;
    features(:, :, :, i_im:i_im+j_im-1) = single(output) ;
end
end
Make sure to call the function as follows: imdb.images.features = extract_features(opts, openface_net, imdb.images.data);
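The fields of opts that the function relies on would look roughly like this (the batch size and file name are just example values; the 128 comes from OpenFace's 128-dimensional embeddings):
% Example opts values; adjust to your setup.
opts.batch_size = 20 ;                     % images per forward pass
opts.features_dim = 128 ;                  % OpenFace produces 128-D embeddings
opts.align_info_file = 'align_info.mat' ;  % alignment data used by openface_align
imdb.images.features = extract_features(opts, openface_net, imdb.images.data) ;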
Hello, thank you so much for the repo! Could you please share the code for fine-tuning VGG-Face on a custom face dataset, i.e., for training the FC layer?