Closed alhusseindev closed 4 years ago
Hi,

I went through your code. It appears that layer L5 (`LocallyConnected2D`) is missing the `strides=2` argument, which is causing the shape mismatch error while loading the weights of the pre-trained network. Please make sure to define layer L5 like this (as per the source code provided):

```python
model.add(tf.keras.layers.LocallyConnected2D(filters=16, strides=2, kernel_size=(7,7), activation='relu', name='L5'))
```
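To see why the missing `strides=2` breaks weight loading, you can work out the layer's output size by hand. This is a minimal sketch assuming 'valid' padding and the feature-map sizes from the DeepFace paper (L5 receives 55x55 maps from L4):

```python
# Output size of a 'valid'-padding convolutional / locally connected layer:
# out = floor((in - kernel) / stride) + 1
def out_size(in_size, kernel, stride=1):
    return (in_size - kernel) // stride + 1

# With strides=2 the output matches the pre-trained weights (25x25 maps);
# with the default strides=1 you get 49x49 maps and a shape mismatch.
with_strides = out_size(55, 7, stride=2)     # 25
without_strides = out_size(55, 7, stride=1)  # 49
print(with_strides, without_strides)
```

Since the weights of a `LocallyConnected2D` layer are per-position (unshared), a different output size means a differently shaped weight tensor, hence the loading error.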
A simpler (and slightly easier) workaround for what you are trying to implement would be these few lines of code:

```sh
pip install git+https://github.com/swghosh/DeepFace.git
```

```python
from deepface import deepface
from tensorflow import keras  # should work with TF v1.15+ as well as TF v2.x

base_model = deepface.create_deepface()
weights_path = deepface.get_weights()  # will auto-download the pre-trained weights
base_model.load_weights(weights_path)
base_model.summary()
```
Also, I'm hoping that you are looking to use the pre-trained model with a custom dataset (a new classifier trained on images of new people), for which you can refer to this notebook (FineTuning DeepFace on a smaller dataset.ipynb), which can act as an end-to-end guide.
Hi Swarup, thanks for your email.

I wanted to do 2 things:

1) I want to train my own classifier.
2) I was also wondering if I can use the model with the pre-trained weights you uploaded to recognize faces, without changing anything in it. (In this approach, when I load the weights I get the error message "shapes are not the same".)

I appreciate your help. Thank you.

Best, Al
Plus, if you want to extract good-enough features (deep feature extraction) that you can use as-is for face verification / face recognition tasks, you can add the following extra snippet (as a continuation of the snippet above):

```python
reqd_layer = 'F7'  # assumed feature-layer name (F7, per the paper); verify with base_model.summary()
feature_model = keras.models.Model(inputs=base_model.input,
                                   outputs=base_model.get_layer(reqd_layer).output,
                                   name='DeepFaceFeatures')

# pass your custom images as a stacked tensor of shape [N, 150, 150, 3]
deepface_features = feature_model.predict(images)
```

These features can be used to train a linear classifier of your choice (e.g. a linear SVM, a single Dense layer, logistic regression, etc.) for the task of your choice.
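As an illustration of that last step, here's a minimal sketch of training a linear (logistic-regression) classifier on feature vectors, using plain NumPy and synthetic features standing in for real DeepFace outputs (in practice you'd likely reach for scikit-learn's `LinearSVC` or a single keras `Dense` layer instead):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for DeepFace feature vectors: 2 people, 100 samples each.
# (Real F7 features are 4096-d; a small dimension keeps the sketch fast.)
dim = 16
feats = np.vstack([rng.normal(-1.0, 1.0, (100, dim)),
                   rng.normal(+1.0, 1.0, (100, dim))])
labels = np.array([0] * 100 + [1] * 100)

# Logistic regression trained with plain gradient descent.
w = np.zeros(dim)
b = 0.0
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))   # predicted P(class 1)
    w -= lr * (feats.T @ (p - labels)) / len(labels)
    b -= lr * np.mean(p - labels)

preds = (feats @ w + b > 0).astype(int)
accuracy = np.mean(preds == labels)
print(f"train accuracy: {accuracy:.2f}")
```

With well-separated feature clusters like these, a linear model converges in a couple hundred steps; real face features are noisier, so a held-out validation split is worth adding.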
Hope it helps. Let me know if there are any additional queries.
Hello Swarup, thanks for your message.

The time taken for recognition is about 30 seconds; is there a way to decrease the time taken by the classifier?

Another question: do I just use the weights provided, without the need for training?

Regarding localization, I wanted to know more about how to localize the face (face location) in an image. Can this be achieved using this classifier?

Thank you.
@alhusseindev, the DeepFace network is really bulky because of the presence of three Locally Connected layers and two terminal Dense layers. You can reduce the recognition time using a GPU if you have one available; otherwise, I do agree that the model is quite computationally expensive to run, even for inference.
As of now, DeepFace feature vectors (the output of the F7 layer; refer to the paper) can be treated as high-quality, face-specific features. There isn't any workaround to skip the training process: you'll need to train a linear classifier on these features for a new face recognition classification task. If you're looking for something that requires no training and does face verification, consider reading the last part of this comment about FaceNet. You should probably also go through this; the package helps you verify whether 2 faces are of the same person using similarity matching of face feature vectors, and it uses the same weights from my repo when used with `model='DeepFace'`.
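The similarity-matching idea behind such verification can be sketched in a few lines: compute the cosine similarity between two face feature vectors and compare it against a threshold. The vector size and the `0.7` threshold below are illustrative assumptions, not values used by any particular package:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(feat_a, feat_b, threshold=0.7):
    """Declare a match when the feature vectors are similar enough.

    The 0.7 threshold is purely illustrative; a real system would tune it
    on a validation set of matched / mismatched face pairs.
    """
    return cosine_similarity(feat_a, feat_b) >= threshold

# Toy check with made-up 4-d "features":
a = np.array([1.0, 0.0, 1.0, 0.0])
b = np.array([0.0, 1.0, 0.0, 1.0])
print(same_person(a, a))  # identical vectors match
print(same_person(a, b))  # orthogonal vectors don't
```

In practice you would feed the two vectors produced by `feature_model.predict` for the two face crops being compared.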
Lastly, for the face localization task (detecting faces, cropping them, and aligning them), consider using `dlib.get_frontal_face_detector` together with `dlib.get_face_chips`. More information and code is available in this issue.
```python
# load the weights
model.load_weights('/content/VGGFace2_DeepFace_weights_val-0.9034.h5')
```