Thanks @melgor!
Hi @davidsandberg: Thanks for sharing this project, and congratulations on the new results! I am trying to reproduce the results using the MS-Celeb-1M dataset. I couldn't find the following two hyperparameters documented in your post; can you share the values you used to achieve the results?
--keep_probability --center_loss_alfa
Thanks!
Hi @davidsandberg, I used your msceleb1m model and calculating_filtering_metrics.py to compute the distance_to_center values on the msceleb1m dataset.
I plotted the histogram of distance_to_center to get a better idea of how keep_percentile should be set.
I found that the histogram seems to be formed by two Gaussian modes: the left one is centered around 0.6 and the right one around 1.0. Do you have any idea how to explain that?
Are the instances belonging to the right Gaussian noisy data, or are they profile faces? If it's the former, then setting the percentile to 75 might be risky (the corresponding threshold is 0.95, and the probability that a sample at that position belongs to the right Gaussian is 98%). If it's the latter, then the center loss didn't make the model do well on profile faces.
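For what it's worth, one way to check that hypothesis is to fit a two-component Gaussian mixture to the distance_to_center values and look at the posterior probability of the right-hand component at a candidate threshold. A minimal sketch, assuming the distances have already been loaded into a numpy array (loading them from the metrics file is omitted):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def analyze_distance_modes(distances, threshold=0.95):
    """Fit a 2-component GMM to a 1-D array of distance_to_center values."""
    x = np.asarray(distances, dtype=np.float64).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x)

    # Sort components so index 0 is the left mode, index 1 the right mode.
    order = np.argsort(gmm.means_.ravel())
    means = gmm.means_.ravel()[order]
    weights = gmm.weights_[order]
    print('mode means: %.3f, %.3f  (weights: %.2f, %.2f)'
          % (means[0], means[1], weights[0], weights[1]))

    # Posterior probability of the right-hand mode at the candidate threshold.
    posterior = gmm.predict_proba(np.array([[threshold]]))[0][order]
    print('P(right mode | distance=%.2f) = %.2f' % (threshold, posterior[1]))
    return gmm
```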
@ugtony @davidsandberg In calculating_filtering_metrics.py you compute embedding features for each image. As I understand it, you use this to filter the images for the MS-Celeb-1M training phase. So which model do you use in calculating_filtering_metrics.py to compute the embedding features? Maybe a model trained on the CASIA dataset?
Hi @davidsandberg ,
I have this idea to use the embeddings in another model. I am building a VAE model to generate faces and I would like to use the embeddings of FaceNet to add a loss to the reconstruction of the decoder. I am thinking something like:
total_loss = tf.reduce_mean((1 - alpha) * reconstruction_L2 + alpha * embeddings_L2)
Do you have any thoughts or concerns? I am trying to dig into your code to understand how I could generate the embeddings of the real and generated images inside the loss, but it is not that intuitive at the moment.
Any suggestions on how to incorporate the embedding into a loss function?
cheers
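For context, a rough TF 1.x sketch of how that combined loss could look (the tensor names and the `facenet_embed` helper are placeholders for illustration, not code from this repo):

```python
import tensorflow as tf

def combined_loss(images, reconstructions, facenet_embed, alpha=0.5):
    """(1 - alpha) * reconstruction L2 + alpha * embedding L2, averaged over the batch."""
    # Pixel-wise reconstruction error of the decoder output.
    reconstruction_l2 = tf.reduce_sum(
        tf.square(images - reconstructions), axis=[1, 2, 3])

    # Perceptual-style term: distance between the FaceNet embeddings of the
    # real image and its reconstruction. `facenet_embed` is assumed to run the
    # pretrained Inception-ResNet-v1 graph with shared weights and return
    # L2-normalized embeddings.
    embeddings_l2 = tf.reduce_sum(
        tf.square(facenet_embed(images) - facenet_embed(reconstructions)), axis=1)

    return tf.reduce_mean((1.0 - alpha) * reconstruction_l2 + alpha * embeddings_l2)
```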
Hi @ljstrnadiii, that sounds like a very cool project. I'm looking forward to hearing more about it. You could, for example, start with train_softmax.py and use your total_loss above instead of the softmax_cross_entropy. But then you of course need to add the VAE parts as well.
@davidsandberg ,
Thanks for the suggestion. I am pretty close to an implementation. What is the file format of the pretrained model you restore in train_softmax.py? I am pointing the restore function to the ckpt file in the pretrained model directory, but no luck; it says the file may be in a different format.
I believe it has to do with just restoring the variables in the inception_resnet_v1 model. Any ideas?
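In case it helps, here is a minimal sketch of restoring only those variables, assuming the variable scope is `InceptionResnetV1` and that you pass the checkpoint prefix rather than the `.data` shard or the frozen `.pb` graph (pointing at the wrong file is a common cause of the "different file format" error):

```python
import tensorflow as tf

def restore_facenet_variables(sess, checkpoint_prefix):
    # Restore only the variables under the Inception-ResNet-v1 scope so that
    # any VAE-specific variables stay untouched.
    facenet_vars = [v for v in tf.trainable_variables()
                    if v.name.startswith('InceptionResnetV1')]
    saver = tf.train.Saver(var_list=facenet_vars)
    # checkpoint_prefix must be the prefix shared by the .index/.data files,
    # e.g. '.../model-....ckpt-NNNNNN', not the .data-00000-of-00001 file.
    saver.restore(sess, checkpoint_prefix)
```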
Congratulations on the new results. I have a couple of questions about them:
Again, congratulations on the really good results!