happynear / FaceVerification

An Experimental Implementation of Face Verification, 96.8% on LFW.
447 stars, 234 forks

What is the most proper method for face verification? #22

Open cheer37 opened 8 years ago

cheer37 commented 8 years ago

Hi Feng Wang. I have tracked all your research on face verification. This repo looks somewhat messy, as you said, but it lets us see all your efforts and is very helpful as well. You must have taken great trouble and gained superb experience through those days. First of all, I appreciate your providing the refined CASIA database, and I thank its makers as well.

I want to ask a question: which was the best method for face verification among siamese, triplet, and joint Bayesian in terms of accuracy? Which method did you select? What is the highest accuracy you have achieved on LFW so far?

Questions about the models:

A batch size of 2 seems bad; the loss would fluctuate heavily. Did you order the dataset for training so that two pictures of the same person form a pair? For example:

correct: first_person_1, first_person_2, second_person_1, second_person_2
wrong: first_person_1, second_person_1

In the wrong case the loss is computed on a mismatched pair, so it should increase rather than be minimized.

In that case, which loss will Caffe decrease during optimization? Finally, I contacted Xiang Wu, whom you referred to, and tested his model, but the accuracy on LFW does not reach the number he reported in his repo: just 93% for the A model on LFW without the mean image. I know this question is not for you, but if you have any information about it, I would like to ask that as well. Thanks in advance for your helpful reply.
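For reference, a two-ImageData siamese setup in Caffe pairs line i of the first list file with line i of the second, so the two files have to be generated together so that each row really is the intended pair. A minimal sketch of building such aligned lists (the dictionary layout, file-name strings, and label convention here are illustrative assumptions, not from this thread):

```python
import random

def build_pair_lists(images_by_person, num_pairs, seed=0):
    """Build two aligned lists: row i of each list is one pair.
    Label 1 = same person, 0 = different person (the convention
    ContrastiveLoss expects: 1 means similar)."""
    rng = random.Random(seed)
    people = sorted(images_by_person)
    left, right, labels = [], [], []
    for i in range(num_pairs):
        if i % 2 == 0:  # positive pair: two images of one person
            p = rng.choice([q for q in people if len(images_by_person[q]) >= 2])
            a, b = rng.sample(images_by_person[p], 2)
            labels.append(1)
        else:           # negative pair: images of two different people
            p, q = rng.sample(people, 2)
            a = rng.choice(images_by_person[p])
            b = rng.choice(images_by_person[q])
            labels.append(0)
        left.append(a)
        right.append(b)
    # ImageData layers read "path label" lines; both files carry
    # the same label so the two label tops agree row by row.
    file1 = [f"{a} {y}" for a, y in zip(left, labels)]
    file2 = [f"{b} {y}" for b, y in zip(right, labels)]
    return file1, file2
```

Shuffling within each epoch should permute both lists with the same permutation, otherwise the pairing is silently destroyed.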

cheer37 commented 8 years ago

@happynear I am training a siamese net using contrastive loss with the CASIA model, but it diverges under SGD. Would you tell me what's wrong? My prototxt follows:

name: "Ourface_net"
layer {
  name: "ourface"
  type: "ImageData"
  top: "data"
  top: "sim"
  include {
    phase: TRAIN
  }
  transform_param {
    mean_file: "../../examples/face_part_siamese/mean.binaryproto"
  }
  image_data_param {
    root_folder: "F:/DB/Face_DatabaseJPG/new_named/"
    source: "../../examples/face_part_siamese/small/train_file1.txt"
    new_height: 128
    new_width: 128
    batch_size: 5
    is_color: false
  }
}
layer {
  name: "ourface"
  type: "ImageData"
  top: "data_p"
  top: "sim_p"
  include {
    phase: TRAIN
  }
  transform_param {
    mean_file: "../../examples/face_part_siamese/mean.binaryproto"
  }
  image_data_param {
    root_folder: "F:/DB/Face_DatabaseJPG/new_named/"
    source: "../../examples/face_part_siamese/small/train_file2.txt"
    new_height: 128
    new_width: 128
    batch_size: 5
    is_color: false
  }
}
layer {
  name: "ourface"
  type: "ImageData"
  top: "data"
  top: "sim"
  include {
    phase: TEST
  }
  transform_param {
    mean_file: "../../examples/face_part_siamese/mean.binaryproto"
  }
  image_data_param {
    root_folder: "F:/DB/Face_DatabaseJPG/new_named/"
    source: "../../examples/face_part_siamese/small/valid_file1.txt"
    new_height: 128
    new_width: 128
    batch_size: 5
    is_color: false
  }
}
layer {
  name: "ourface"
  type: "ImageData"
  top: "data_p"
  top: "sim_p"
  include {
    phase: TEST
  }
  transform_param {
    mean_file: "../../examples/face_part_siamese/mean.binaryproto"
  }
  image_data_param {
    root_folder: "F:/DB/Face_DatabaseJPG/new_named/"
    source: "../../examples/face_part_siamese/small/valid_file2.txt"
    new_height: 128
    new_width: 128
    batch_size: 5
    is_color: false
  }
}
layer {
  bottom: "data"
  top: "conv1"
  name: "conv1"
  type: "Convolution"
  param { name: "conv1_w" lr_mult: 1 decay_mult: 1 }
  param { name: "conv1_b" lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv1"
  top: "conv1"
  name: "relu1"
  type: "ReLU"
}
layer {
  bottom: "conv1"
  top: "conv1_1"
  name: "conv1_1"
  type: "Convolution"
  param { name: "conv1_1_w" lr_mult: 1 decay_mult: 1 }
  param { name: "conv1_1_b" lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv1_1"
  top: "conv1_1"
  name: "relu1_1"
  type: "ReLU"
}
layer {
  name: "pool1"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
  bottom: "conv1_1"
  top: "pool1"
}
layer {
  bottom: "pool1"
  top: "conv2"
  name: "conv2"
  type: "Convolution"
  param { name: "conv2_w" lr_mult: 1 decay_mult: 1 }
  param { name: "conv2_b" lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv2"
  top: "conv2"
  name: "relu2"
  type: "ReLU"
}
layer {
  bottom: "conv2"
  top: "conv2_2"
  name: "conv2_2"
  type: "Convolution"
  param { name: "conv2_2_w" lr_mult: 1 decay_mult: 1 }
  param { name: "conv2_2_b" lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv2_2"
  top: "conv2_2"
  name: "relu2_2"
  type: "ReLU"
}
layer {
  name: "pool2"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
  bottom: "conv2_2"
  top: "pool2"
}
layer {
  bottom: "pool2"
  top: "conv3"
  name: "conv3"
  type: "Convolution"
  param { name: "conv3_w" lr_mult: 1 decay_mult: 1 }
  param { name: "conv3_b" lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 96
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv3"
  top: "conv3"
  name: "relu3"
  type: "ReLU"
}
layer {
  bottom: "conv3"
  top: "conv3_2"
  name: "conv3_2"
  type: "Convolution"
  param { name: "conv3_2_w" lr_mult: 1 decay_mult: 1 }
  param { name: "conv3_2_b" lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 192
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv3_2"
  top: "conv3_2"
  name: "relu3_2"
  type: "ReLU"
}
layer {
  name: "pool3"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
  bottom: "conv3_2"
  top: "pool3"
}
layer {
  bottom: "pool3"
  top: "conv4"
  name: "conv4"
  type: "Convolution"
  param { name: "conv4_w" lr_mult: 1 decay_mult: 1 }
  param { name: "conv4_b" lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv4"
  top: "conv4"
  name: "relu4"
  type: "ReLU"
}
layer {
  bottom: "conv4"
  top: "conv4_2"
  name: "conv4_2"
  type: "Convolution"
  param { name: "conv4_2_w" lr_mult: 1 decay_mult: 1 }
  param { name: "conv4_2_b" lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv4_2"
  top: "conv4_2"
  name: "relu4_2"
  type: "ReLU"
}
layer {
  name: "pool4"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
  bottom: "conv4_2"
  top: "pool4"
}

layer {
  bottom: "pool4"
  top: "conv5"
  name: "conv5"
  type: "Convolution"
  param { name: "conv5_w" lr_mult: 1 decay_mult: 1 }
  param { name: "conv5_b" lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 160
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv5"
  top: "conv5"
  name: "relu5_c"
  type: "ReLU"
}
layer {
  bottom: "conv5"
  top: "conv5_2"
  name: "conv5_2"
  type: "Convolution"
  param { name: "conv5_2_w" lr_mult: 1 decay_mult: 1 }
  param { name: "conv5_2_b" lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 320
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv5_2"
  top: "conv5_2"
  name: "relu5_2"
  type: "ReLU"
}
layer {
  name: "pool5"
  type: "Pooling"
  pooling_param {
    pool: AVE
    kernel_size: 8
    stride: 1
  }
  bottom: "conv5_2"
  top: "pool5"
}
layer {
  bottom: "pool5"
  top: "pool5"
  name: "drop2"
  type: "Dropout"
  dropout_param {
    dropout_ratio: 0.4
  }
}
layer {
  bottom: "pool5"
  top: "ip3"
  name: "ip3"
  type: "InnerProduct"
  param { name: "ip3_w" lr_mult: 1 decay_mult: 1 }
  param { name: "ip3_b" lr_mult: 2 decay_mult: 0 }
  inner_product_param {
    num_output: 256
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "data_p"
  top: "conv1_p"
  name: "conv1_p"
  type: "Convolution"
  param { name: "conv1_w" lr_mult: 1 decay_mult: 1 }
  param { name: "conv1_b" lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv1_p"
  top: "conv1_p"
  name: "relu1_p"
  type: "ReLU"
}
layer {
  bottom: "conv1_p"
  top: "conv1_1_p"
  name: "conv1_1_p"
  type: "Convolution"
  param { name: "conv1_1_w" lr_mult: 1 decay_mult: 1 }
  param { name: "conv1_1_b" lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv1_1_p"
  top: "conv1_1_p"
  name: "relu1_1_p"
  type: "ReLU"
}
layer {
  name: "pool1_p"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
  bottom: "conv1_1_p"
  top: "pool1_p"
}
layer {
  bottom: "pool1_p"
  top: "conv2_p"
  name: "conv2_p"
  type: "Convolution"
  param { name: "conv2_w" lr_mult: 1 decay_mult: 1 }
  param { name: "conv2_b" lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv2_p"
  top: "conv2_p"
  name: "relu2_p"
  type: "ReLU"
}
layer {
  bottom: "conv2_p"
  top: "conv2_2_p"
  name: "conv2_2_p"
  type: "Convolution"
  param { name: "conv2_2_w" lr_mult: 1 decay_mult: 1 }
  param { name: "conv2_2_b" lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv2_2_p"
  top: "conv2_2_p"
  name: "relu2_2_p"
  type: "ReLU"
}
layer {
  name: "pool2_p"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
  bottom: "conv2_2_p"
  top: "pool2_p"
}
layer {
  bottom: "pool2_p"
  top: "conv3_p"
  name: "conv3_p"
  type: "Convolution"
  param { name: "conv3_w" lr_mult: 1 decay_mult: 1 }
  param { name: "conv3_b" lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 96
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv3_p"
  top: "conv3_p"
  name: "relu3_p"
  type: "ReLU"
}
layer {
  bottom: "conv3_p"
  top: "conv3_2_p"
  name: "conv3_2_p"
  type: "Convolution"
  param { name: "conv3_2_w" lr_mult: 1 decay_mult: 1 }
  param { name: "conv3_2_b" lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 192
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv3_2_p"
  top: "conv3_2_p"
  name: "relu3_2_p"
  type: "ReLU"
}
layer {
  name: "pool3_p"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
  bottom: "conv3_2_p"
  top: "pool3_p"
}
layer {
  bottom: "pool3_p"
  top: "conv4_p"
  name: "conv4_p"
  type: "Convolution"
  param { name: "conv4_w" lr_mult: 1 decay_mult: 1 }
  param { name: "conv4_b" lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv4_p"
  top: "conv4_p"
  name: "relu4_p"
  type: "ReLU"
}
layer {
  bottom: "conv4_p"
  top: "conv4_2_p"
  name: "conv4_2_p"
  type: "Convolution"
  param { name: "conv4_2_w" lr_mult: 1 decay_mult: 1 }
  param { name: "conv4_2_b" lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv4_2_p"
  top: "conv4_2_p"
  name: "relu4_2_p"
  type: "ReLU"
}
layer {
  name: "pool4_p"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
  bottom: "conv4_2_p"
  top: "pool4_p"
}

layer {
  bottom: "pool4_p"
  top: "conv5_p"
  name: "conv5_p"
  type: "Convolution"
  param { name: "conv5_w" lr_mult: 1 decay_mult: 1 }
  param { name: "conv5_b" lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 160
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv5_p"
  top: "conv5_p"
  name: "relu5_c_p"
  type: "ReLU"
}
layer {
  bottom: "conv5_p"
  top: "conv5_2_p"
  name: "conv5_2_p"
  type: "Convolution"
  param { name: "conv5_2_w" lr_mult: 1 decay_mult: 1 }
  param { name: "conv5_2_b" lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 320
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv5_2_p"
  top: "conv5_2_p"
  name: "relu5_2_p"
  type: "ReLU"
}
layer {
  name: "pool5_p"
  type: "Pooling"
  pooling_param {
    pool: AVE
    kernel_size: 8
    stride: 1
  }
  bottom: "conv5_2_p"
  top: "pool5_p"
}
layer {
  bottom: "pool5_p"
  top: "pool5_p"
  name: "drop2_p"
  type: "Dropout"
  dropout_param {
    dropout_ratio: 0.4
  }
}
layer {
  bottom: "pool5_p"
  top: "ip3_p"
  name: "ip3_p"
  type: "InnerProduct"
  param { name: "ip3_w" lr_mult: 1 decay_mult: 1 }
  param { name: "ip3_b" lr_mult: 2 decay_mult: 0 }
  inner_product_param {
    num_output: 256
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
    name: "loss"
    type: "ContrastiveLoss"
    contrastive_loss_param {
        margin: 1.0
    }
    bottom: "ip3"
    bottom: "ip3_p"
    bottom: "sim"
    bottom: "sim_p"
    top: "loss"
}

And I would also like to see your triplet prototxt, if you can share it. Thanks.

happynear commented 8 years ago

casia_demo is a demo: it takes two images as input and outputs their distance.

I am working on other things now, so I haven't tried triplets yet. BTW, here is a model with accuracy above 98%: https://github.com/AlfredXiangWu/face_verification_experiment . You may try his solution.
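Given two embeddings extracted from such a network (e.g. the 256-d ip3 output in the prototxt above), the verification decision reduces to thresholding a similarity score. A sketch of that last step; the use of cosine similarity and the threshold value are illustrative assumptions, not the demo's actual choices:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def same_person(feat_a, feat_b, threshold=0.5):
    # The threshold is dataset-dependent; tune it on a
    # validation split (e.g. LFW dev pairs), not on the test set.
    return cosine_similarity(feat_a, feat_b) >= threshold
```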

happynear commented 8 years ago

Oh, you already know it.

Maybe your alignment toolkit is not very good. Please try some of CUHK's works.

cheer37 commented 8 years ago

Thank you, Feng Wang. Please review my network: what is the reason it diverges?

Also, I couldn't find any trained model from CUHK, only the papers. Didn't they publish one? Xiang Wu didn't reveal his face alignment method; do you know anything about it? And about his normalization factor?

cheer37 commented 8 years ago

@happynear I always get the contrastive loss as 1.#QNAN (a NaN) when training with SGD. I think you can tell me what the reason is. Thanks.

happynear commented 8 years ago

Set the loss_weight of the contrastive loss to 1e-5 at the beginning of the training procedure. You may increase it once the softmax converges, but it should not go over 1e-3.

cheer37 commented 8 years ago

Thank you, Feng Wang. I will try it.

cheer37 commented 8 years ago

Doesn't the margin have any effect? I set it to 1.

cheer37 commented 8 years ago

Feng Wang, I set the loss_weight to 1e-5, but the problem doesn't go away. I suspect a division by zero somewhere in the source code, so I looked through the contrastive loss layer. There is no division in the loss computation itself, but there is one in the calculation of diff in the backward step. I am waiting for your helpful instruction. Thanks.
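That observation matches the standard contrastive loss (in the style of Hadsell et al.): the forward pass has no division, but for a dissimilar pair the backward pass scales the feature difference by (margin - d)/d, and that division is exactly where a NaN/Inf can appear when d approaches zero. A pure-Python sketch of the formulation (illustrative, not Caffe's exact code; the eps guard is the usual fix):

```python
import math

def contrastive_forward(a, b, y, margin=1.0):
    """Contrastive loss for one pair. y=1: similar, y=0: dissimilar.
    No division anywhere in the forward pass."""
    d = math.sqrt(sum((x - z) ** 2 for x, z in zip(a, b)))
    if y == 1:
        return 0.5 * d * d
    return 0.5 * max(margin - d, 0.0) ** 2

def contrastive_backward_a(a, b, y, margin=1.0, eps=0.0):
    """Gradient w.r.t. the first feature vector.
    For dissimilar pairs diff is scaled by -(margin - d)/d: in
    float arithmetic d == 0 yields NaN/Inf (here Python raises
    ZeroDivisionError instead). A small eps in the denominator
    is the usual guard."""
    diff = [x - z for x, z in zip(a, b)]
    d = math.sqrt(sum(t * t for t in diff))
    if y == 1:
        return diff                      # d(0.5*d^2)/da = a - b
    if d >= margin:
        return [0.0] * len(a)            # hinge inactive
    scale = -(margin - d) / (d + eps)    # the division in backward
    return [scale * t for t in diff]
```

A dissimilar pair with identical (or collapsed) embeddings is the degenerate case: with eps = 0 the scale term divides by zero, which is consistent with the 1.#QNAN reported above.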

cheer37 commented 8 years ago

If you don't mind, would you look over my prototxt? Thanks.

cheer37 commented 8 years ago

Feng Wang, I am waiting for your help.

happynear commented 8 years ago

I haven't succeeded in training a siamese net, so I actually can't give you more suggestions. The 1e-5 value is something I heard from a friend of mine.

cheer37 commented 8 years ago

Then what is the highest accuracy you have achieved so far, and with what net structure?

cheer37 commented 8 years ago

On LFW, I mean.

cheer37 commented 8 years ago

Hi Feng Wang. I saw that you are now willing to post your model, since there is no longer much reason to keep it private given what others have released. If you provide your model, it would be so helpful, and I would be very appreciative.

happynear commented 8 years ago

I would like to make my 96.8% model public now. I have uploaded it to http://pan.baidu.com/s/1qXhNOZE . You may try it.

cheer37 commented 8 years ago

Thanks, Feng Wang. But there is no deploy.prototxt or mean image there, so how can I test it?

happynear commented 8 years ago

The deploy file is at https://github.com/happynear/FaceVerification/blob/master/caffe_proto/CASIA_deploy.prototxt . The mean file is at http://pan.baidu.com/s/1eQYEJU6 .

cheer37 commented 8 years ago

Hi Feng Wang. I tested your model, but the accuracy is not high. What's the culprit? I wonder about the alignment of the input face image, and also the normalizer (0~255 or 0~1). Also, I found there is no pad in the first conv layer in CASIA_deploy.prototxt; does that work? Thank you.
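The normalizer question matters because mean subtraction and input scale must match what the model saw in training; a mismatch silently degrades accuracy. A sketch of the idea (the function name and flat pixel layout are assumptions for illustration):

```python
def preprocess(pixels, mean, scale=1.0):
    """Subtract the per-pixel training mean, then apply a scale.
    Whether a model expects roughly [0, 255] or [0, 1] inputs
    (scale = 1.0 vs 1/255) depends on how it was trained; the
    wrong choice shifts every activation in the first conv layer."""
    return [(p - m) * scale for p, m in zip(pixels, mean)]
```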

cheer37 commented 8 years ago

Hi Feng Wang, I have already beaten Xiang Wu's model and reached 98.2% on LFW with only softmax. Now I am trying triplet loss to boost performance. Would you help me? I would appreciate it if you could provide your triplet work repo. Thanks.

happygao commented 8 years ago

@cheer37 Hello, are you currently using NN methods for face verification?

happygao commented 8 years ago

@cheer37 Have you used a siamese network for face verification? Did you follow the method from the paper?

qiufan commented 8 years ago

@cheer37 Hello, I am trying to get a good result on LFW. I am impressed by your result. Will you share some experience to help me? I would be very grateful!