happynear / FaceVerification

An Experimental Implementation of Face Verification, 96.8% on LFW.

Face Verification #42

Open sanjanajainsj opened 7 years ago

sanjanajainsj commented 7 years ago

Hello Sir, thank you for the models and code you have provided. We are planning to train a face verification model on the CASIA-WebFace dataset from scratch using the "mnist_siamese_train_test.prototxt" model you provided. I have already gone through the papers you mentioned in previous issues. How are you generating the training pairs, and how are you labeling them? Thank you in advance.
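For context, training pairs for a siamese/contrastive setup are commonly generated along these lines: every within-subject image combination becomes a positive pair, matched with sampled cross-subject negatives. This is only a sketch of the usual approach, not the author's code; `make_pairs` and the toy paths below are hypothetical.

```python
import itertools
import random

def make_pairs(identities, n_neg_per_pos=1, seed=0):
    """Build (path_a, path_b, label) tuples for contrastive training.

    identities: dict mapping subject id -> list of image paths.
    label 1 = same person (genuine pair), 0 = different person (impostor pair).
    """
    rng = random.Random(seed)
    pairs = []
    subjects = list(identities)
    for subj, imgs in identities.items():
        # every within-subject combination is a positive pair
        for a, b in itertools.combinations(imgs, 2):
            pairs.append((a, b, 1))
            # sample negatives from other subjects to keep classes balanced
            for _ in range(n_neg_per_pos):
                other = rng.choice([s for s in subjects if s != subj])
                pairs.append((a, rng.choice(identities[other]), 0))
    rng.shuffle(pairs)
    return pairs

# tiny toy example with made-up paths
data = {
    "subject1": ["s1/img1.jpg", "s1/img2.jpg"],
    "subject2": ["s2/img1.jpg", "s2/img2.jpg"],
}
pairs = make_pairs(data)
```

With two images per subject this yields one positive and one sampled negative per subject; real runs usually balance harder negatives more carefully.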

happynear commented 7 years ago

Don't use that. I never succeeded in training the "mnist_siamese_train_test.prototxt" model.

I suggest you download CASIA-WebFace, clean the dataset with the list I provided, and train it using center-face (https://github.com/ydwen/caffe-face). If everything works fine, you will get ~99% on LFW.
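Cleaning with such a list usually amounts to keeping only the images the list names. A minimal sketch, assuming one relative path per line; the actual list format in this repo may differ, and `clean_dataset` and the filenames here are hypothetical:

```python
import os
import shutil

def clean_dataset(list_file, src_root, dst_root):
    """Copy only the images named in the cleaned list into dst_root.

    Assumes each line of list_file is a relative path like
    '0000045/001.jpg' (one image per line); adjust if the real
    list also carries labels or scores.
    """
    kept = 0
    with open(list_file) as f:
        for line in f:
            if not line.strip():
                continue                    # skip blank lines
            rel = line.split()[0]           # first token = relative path
            src = os.path.join(src_root, rel)
            if not os.path.isfile(src):
                continue                    # listed image missing on disk
            dst = os.path.join(dst_root, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)
            kept += 1
    return kept
```

Images on disk that the list does not mention are simply never copied, which is the whole cleaning step.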

sanjanajainsj commented 7 years ago

Hello Sir, thank you for your reply. I have a few questions:

1. I have already downloaded the cleaned CASIA-WebFace dataset. Which list should I use to clean it?
2. I was planning to train a siamese model for face verification, and the model from caffe-face (https://github.com/ydwen/caffe-face) seems to be a single-CNN model. If possible, could you suggest a siamese network for face verification?
3. I was also going through one of the issues (https://github.com/happynear/FaceVerification/issues/22), where you provided the caffemodel, mean file and deploy.prototxt. However, the deploy.prototxt file ends at the dropout layer, so I am confused about how we get an output when there is no output layer such as softmax.

happynear commented 7 years ago
sanjanajainsj commented 7 years ago

Hello Sir, thank you for your reply. I have a few questions regarding this:

  1. I tried to finetune your model with another dataset of unaligned images, using "CASIA_train_test.prototxt", "mnist_siamese_solver.prototxt" and the model you provided in https://github.com/happynear/FaceVerification/issues/22 . After 100,000 iterations the loss has not decreased since the start; I am getting a testing loss of 4.72074 and a testing accuracy of 0.0072. I am labeling the dataset like this:

         /home/.../dataset/subject1/img1 1
         /home/.../dataset/subject1/img2 1
         /home/.../dataset/subject2/img1 2
         /home/.../dataset/subject2/img2 2
         ...

     Is this the correct way to label the dataset?

  2. I am planning to train models for both your CNN and center-face from scratch. How should the labeling be done?

happynear commented 7 years ago

Your labels are good, both for CASIA_train_test.prototxt and center-face.
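A list in that "path label" format can be generated by walking the dataset directory, one line per image and one integer label per subject folder. A sketch under the assumption that each subject has its own subfolder (`build_label_list` is hypothetical, not code from this repo); labels here start at 0, which is what Caffe's softmax loss expects for N output classes:

```python
import os

def build_label_list(root):
    """Return 'path label' lines, one integer label per subject folder.

    Labels are assigned 0..N-1 in sorted folder order so they line up
    with a softmax layer that has N outputs.
    """
    subjects = sorted(d for d in os.listdir(root)
                      if os.path.isdir(os.path.join(root, d)))
    lines = []
    for label, subject in enumerate(subjects):
        subj_dir = os.path.join(root, subject)
        for img in sorted(os.listdir(subj_dir)):
            lines.append("%s %d" % (os.path.join(subj_dir, img), label))
    return lines
```

Writing the returned lines to a text file gives the list that an ImageData-style input layer can consume directly.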

sanjanajainsj commented 7 years ago

Hello Sir, what could be the reason for the loss not decreasing and the accuracy staying at 0.0072? I tried training on CASIA from scratch and got similar results: after 70k iterations the loss was 4.26 and the accuracy was 0.0072.

baby313 commented 7 years ago

Hi Sir, I followed your suggestion and tried center-face. However, the memory cost is too high to train with a batch size of 256, so I changed it to 128, but the center loss did not decrease :( I intend to do face matching on a phone, and center-face may be too heavy (in speed and model size) for that. Do you have any suggestions for face matching on a phone? DeepID2 is the best option I have found so far; do you know of any better ones?

sidgan commented 7 years ago

@sanjanajainsj Could you provide a link to the cleaned dataset?

sanjanajainsj commented 7 years ago

@sidgan I found the cleaned dataset on this link: http://pan.baidu.com/s/1jIqBIcu and the password is eb7h

sidgan commented 7 years ago

For me this page does not translate to English; I tried a lot of online translation tools, and I don't know Chinese. Additionally, the Baidu installer is only available in Chinese, so that isn't helpful either. Do you know of a workaround for this?

yao5461 commented 6 years ago

@sanjanajainsj Hi, the link is no longer valid. Are there any other accessible links? Thanks!