davidsandberg / facenet

Face recognition using Tensorflow
MIT License

How to fine-tune a pretrained model on my own dataset of 43 persons? #809

Open Victoria2333 opened 6 years ago

Victoria2333 commented 6 years ago

@davidsandberg I have recently been using this pretrained model to train on my few-shot dataset of 43 persons, but I always get an error about the number of classes not matching, probably because my dataset has 43 classes while MS-Celeb has 10575. I'm not familiar with saving and restoring models in TF; I want to restore part of the parameters and fine-tune on my small dataset. Could you please give me some instructions on your fine-tuning steps? Thank you~~~

speculaas commented 6 years ago

Hi Vic,

I had this question too.

There is an answer in a previous issue discussion: github.com/davidsandberg/facenet/issues/139

The above issue #139 covers how to restore. As for how to specify which layers to fine-tune (see P.S. 1):

  1. See "train_softmax.py"; there is a line "train_op = facenet.train(", and this is where you pass in whichever layers you want to fine-tune.

  2. Next, select the layers you want to fine-tune, for example: ftune_vlist = [v for v in all_vars if v.name.startswith('InceptionResnetV1/Block8')]

  3. Pass ftune_vlist to facenet.train.

For more details, see David's wiki: https://github.com/davidsandberg/facenet/wiki/Classifier-training-of-inception-resnet-v1

P.S. 1: you may need to look at TensorBoard or the source code to find the names of the different layers; see the sketch below.
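
For example, here is a minimal sketch (assuming the InceptionResnetV1 graph has already been built, as it is in train_softmax.py) that prints the trainable variable names so you can pick a prefix to fine-tune:

    # Sketch: list trainable variable names; a prefix such as
    # 'InceptionResnetV1/Block8' can then be used with startswith().
    import tensorflow as tf

    for v in tf.trainable_variables():
        print(v.name, v.shape)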

BR, JimmyYS

Victoria2333 commented 6 years ago

@speculaas Hi speculaas, I've been working on the influence of fine-tuning on face recognition. Here is my code:

    all_vars = tf.trainable_variables()
    set_A_vars = [v for v in all_vars if v.name.startswith('InceptionResnetV1/Block8')]
    saver_set_A = tf.train.Saver(set_A_vars, max_to_keep=3)
    saver_set_A_and_B = tf.train.Saver(all_vars, max_to_keep=3)

Then I do the transfer learning by restoring the variables in set A:

    saver_set_A.restore(sess, pretrained_model)

And save the trained model completely:

    save_variables_and_metagraph(sess, saver_set_A_and_B, summary_writer, model_dir, subdir, epoch)

I want to test whether fine-tuning the higher layers helps accuracy, but my accuracy on the small dataset is very low, about 0.318. So I want to ask about your progress on this: did you get anywhere with fine-tuning the higher layers?

anjiang2016 commented 6 years ago
  1. See "train_softmax.py" and find code like the following:

    # Build a Graph that trains the model with one batch of examples and updates the model parameters
    train_op = facenet.train(total_loss, global_step, args.optimizer,
        learning_rate, args.moving_average_decay, tf.global_variables(), args.log_histograms)
    # Create a saver
    saver = tf.train.Saver(tf.trainable_variables(), max_to_keep=3)

  2. Replace it with the following:

    # Fine-tune only the variables in ftune_vlist
    all_vars = tf.trainable_variables()
    ftune_vlist = [v for v in all_vars if v.name.startswith('InceptionResnetV1/Block8')]
    train_op = facenet.train(total_loss, global_step, args.optimizer,
        learning_rate, args.moving_average_decay, ftune_vlist, args.log_histograms)
    # Create a saver
    saver = tf.train.Saver(ftune_vlist, max_to_keep=3)

  3. Then only the layers whose variables are in ftune_vlist will be tuned (see the restore sketch below).
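
Note that the saver above only covers ftune_vlist, so the rest of the pretrained weights still have to be restored before training. A minimal sketch of one way to do that (assuming args.pretrained_model holds the checkpoint path, as in the existing script; this is not the exact facenet code):

    # Restore ALL pretrained variables so the frozen layers keep their
    # trained weights; train_op then updates only the ftune_vlist variables.
    import os
    import tensorflow as tf

    restore_saver = tf.train.Saver()  # defaults to all saveable variables
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(tf.local_variables_initializer())
        if args.pretrained_model:
            restore_saver.restore(sess, os.path.expanduser(args.pretrained_model))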

hsm4703 commented 5 years ago

Is performance better after training when only the ftune_vlist variables are fine-tuned?

hsm4703 commented 5 years ago

I used the steps from @anjiang2016's comment above, and my code came out essentially the same as that program.

keshavshrikant commented 4 years ago
Regarding the fine-tuning steps from @anjiang2016's comment above:

How do I restore the model for validation in this case?