Open khiemntu opened 6 years ago
I tried it on my own Asian dataset and the results are decent, but I have a fine-tuning problem: how do you fine-tune on your own dataset?
I just used the pre-trained model from this repo and trained a classifier on my own dataset (Asian faces), but the results are not what I expected: a new person who is not in the dataset is sometimes matched to someone in the dataset with up to 90% similarity.
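For what it's worth, one common way to reduce such false matches when a classifier sits on top of the pre-trained embeddings is to also check the embedding distance to the claimed identity and reject anything above a threshold. A minimal sketch, where `gallery`, the embedding helper, and the threshold value are assumptions rather than anything from this repo:

```python
import numpy as np

# Hypothetical setup: embed(image) returns an L2-normalised FaceNet embedding,
# and `gallery` maps each enrolled person's name to the mean embedding of their images.

def identify(probe_embedding, gallery, distance_threshold=1.1):
    """Return the closest enrolled identity, or None if the probe is too far
    from everyone (i.e. an unknown person). The threshold is dataset-dependent
    and should be tuned on a validation set of known/unknown pairs."""
    best_name, best_dist = None, np.inf
    for name, enrolled in gallery.items():
        dist = np.linalg.norm(probe_embedding - enrolled)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < distance_threshold else None
```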
None of the four provided pre-trained models works well with Asian faces. You can refer to #591 and #739 for more discussion.
There are several ideas to improve it, but I haven't tried since I'm working on another issue.
Good luck.
I trained from scratch on my own dataset using train_softmax.py, but the loss becomes NaN after several steps in epoch 0. What could be the problem? :( I also tried training on a small subset of LFW and it also goes to NaN after several steps. I set the learning rate to 0.01 and to 0.00001, but it still happens. Does anyone have experience with this problem? Many thanks.
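One generic way to narrow down where a NaN first appears in a TF 1.x graph like the one built by train_softmax.py is to add a runtime finite-check on the loss. This is a debugging sketch under that assumption, not code from the repo; `total_loss` stands in for whatever loss tensor the script builds:

```python
import tensorflow as tf  # TF 1.x API, matching the scripts in this repo


def guard_against_nan(total_loss):
    """Wrap a loss tensor so the session raises a clear error the first time
    it becomes NaN/Inf, instead of silently diverging. `total_loss` is a
    hypothetical hook point inside the training script."""
    checked = tf.verify_tensor_all_finite(
        total_loss,
        msg='total_loss is NaN/Inf -- check learning rate, weight decay and input images')
    # tf.add_check_numerics_ops() can additionally pinpoint the first bad op,
    # at the cost of slowing training down.
    return checked
```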
Has anybody transfer-learned the pre-trained model on a small Asian dataset? @khiemntu, did you restore the pre-trained model in train_tripletloss.py?
@dotieuthien I restored it in train_softmax.py
Hi @khiemntu,
How do you train on your own data using train_softmax.py? I tried but I can't seem to get it to train.
Hi @caocuong0306, I also want to fine-tune on Asian datasets. However, I don't know how to validate a model trained on them. Can you share your approach? Thanks so much.
Hi @khiemntu, I also fine-tuned on an Asian face dataset using the triplet-loss method with the pre-trained model model-20180402-114759.ckpt-275 and got this error: "InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [128] rhs shape= [512] [[Node: save/Assign_20 = Assign[T=DT_FLOAT, _class=["loc:@InceptionResnetV1/Bottleneck/BatchNorm/moving_variance"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](InceptionResnetV1/Bottleneck/BatchNorm/moving_variance, save/RestoreV2/_1239)]]"
Please help me if you have any solution for this problem. Thank you so much.
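The [128] vs [512] mismatch usually means the graph is being built with the script's default embedding size (128) while model-20180402-114759 produces 512-dimensional embeddings; if I recall correctly, train_tripletloss.py has an `--embedding_size` flag that defaults to 128, so passing 512 there should make the shapes line up. More generally, if you ever need to restore only the variables whose shapes match a checkpoint, a generic TF 1.x sketch (the checkpoint path is a placeholder) looks like this:

```python
import tensorflow as tf  # TF 1.x API


def build_matching_saver(checkpoint_path):
    """Return a Saver restricted to variables whose name and shape both match
    the checkpoint, so a partial restore does not fail on mismatched tensors.
    `checkpoint_path` is a placeholder, e.g. .../model-20180402-114759.ckpt-275."""
    reader = tf.train.NewCheckpointReader(checkpoint_path)
    ckpt_shapes = reader.get_variable_to_shape_map()
    restorable = [
        v for v in tf.global_variables()
        if v.op.name in ckpt_shapes
        and v.get_shape().as_list() == ckpt_shapes[v.op.name]
    ]
    return tf.train.Saver(var_list=restorable)
```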
Hi Viet Hung, I also did that: I restored the meta graph and then restored the checkpoint data to fine-tune. Could you please share the Asian face dataset with me? Thank you.
Hi @dotieuthien, we can check here: http://trillionpairs.deepglint.com/overview
Hi all, I have fine-tuned on the Asian face dataset (Asian-Celeb: http://trillionpairs.deepglint.com/overview). Besides that, I have my own private dataset (also Asian faces). The original repo by @davidsandberg evaluates on LFW using the LFW dataset and pairs.txt. How can I evaluate on Asian-Celeb and on my private dataset? Do I have to create a pairs.txt file like the LFW one?
Thank you so much.
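As far as I can tell from the repo's lfw.py, pairs.txt lines are either `name idx1 idx2` (same person) or `name1 idx1 name2 idx2` (different people), the first line is a header that read_pairs skips, and get_paths expects images named like `name/name_0001.jpg`. A rough sketch for generating such pairs from a directory of per-identity folders (the folder layout and pair counts are assumptions about your data):

```python
import os
import random


def generate_pairs(data_dir, num_matched=300, num_mismatched=300, seed=0):
    """Generate LFW-style pairs: 'name i j' for matched pairs and
    'name1 i name2 j' for mismatched pairs. Assumes data_dir/<name>/ holds
    that person's aligned images, numbered so that index k maps to an
    existing file such as <name>_0001.jpg.
    Note: facenet's lfw.read_pairs skips the first line of pairs.txt,
    so prepend a header line when writing the file."""
    rng = random.Random(seed)
    people = {d: len(os.listdir(os.path.join(data_dir, d)))
              for d in sorted(os.listdir(data_dir))
              if os.path.isdir(os.path.join(data_dir, d))}
    multi = [p for p, n in people.items() if n >= 2]

    matched = []
    for _ in range(num_matched):
        name = rng.choice(multi)
        i, j = rng.sample(range(1, people[name] + 1), 2)
        matched.append('%s\t%d\t%d' % (name, i, j))

    mismatched = []
    names = list(people)
    for _ in range(num_mismatched):
        a, b = rng.sample(names, 2)
        mismatched.append('%s\t%d\t%s\t%d'
                          % (a, rng.randint(1, people[a]), b, rng.randint(1, people[b])))
    return matched, mismatched
```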
Hi @viethungtsdv, when I ran train_tripletloss training on the pre-trained model, I got the error "index 22 is out of bounds for axis 0 with size 22", which must be my mistake. Is one of my parameters wrong? Can you help me?
@azhaoaigaga, I had a similar error when training on images produced by the alignment script. In my case it was caused by corrupted image files produced by the alignment script. I solved it by verifying every aligned image before running the training script. To verify, I used the verify method of Image from the PIL module.
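For reference, that verification step can be as simple as the sketch below (the directory argument is a placeholder); note that Image.verify() only checks file integrity and does not decode the full image:

```python
import os
from PIL import Image


def find_corrupted_images(aligned_dir):
    """Return the paths of aligned images that PIL cannot verify,
    so they can be removed or re-aligned before training."""
    bad = []
    for root, _, files in os.walk(aligned_dir):
        for fname in files:
            path = os.path.join(root, fname)
            try:
                with Image.open(path) as img:
                    img.verify()
            except Exception:
                bad.append(path)
    return bad
```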
@viethungtsdv, could you share the command you are running to fine-tune the pre-trained model, please? I'm also trying to retrain the model on a specific dataset... Thanks in advance.
Hi @rgsousa88,
I ran the command following: https://github.com/davidsandberg/facenet/wiki/Triplet-loss-training
Have you solved this problem? I am running into the same problem now. Could you please give me some advice?
Hi, it's a hot topic and I want to share some of my insights. I'm currently working on fine-tuning a FaceNet model on a Vietnamese dataset (the PyTorch version: https://github.com/timesler/facenet-pytorch). My repo: https://github.com/mrzaizai2k/face-recogtion-mlops/blob/main/src/finetune_facenet_2.py. I fine-tune it with an online triplet loss. Train dataset: https://github.com/anhtu293/Vietnamese-Celebrity-Face-Recognition, around 10k IDs and 4000 images; I set 10% aside for testing. Some insights:
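For readers who want to try a similar route, here is a very rough sketch of what fine-tuning facenet-pytorch with a batch-hard (online) triplet loss can look like; the batch composition, margin, learning rate and training loop are assumptions, not code from the repos linked above:

```python
import torch
import torch.nn.functional as F
from facenet_pytorch import InceptionResnetV1

model = InceptionResnetV1(pretrained='vggface2').train()  # start from the VGGFace2 weights
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)


def batch_hard_triplet_loss(embeddings, labels, margin=0.2):
    """Online (batch-hard) triplet loss: for every anchor, use the hardest
    positive and the hardest negative within the current mini-batch."""
    dists = torch.cdist(embeddings, embeddings)              # pairwise L2 distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)        # same-identity mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    hardest_pos = (dists * (same & ~eye).float()).max(dim=1).values
    hardest_neg = dists.masked_fill(same, float('inf')).min(dim=1).values
    return F.relu(hardest_pos - hardest_neg + margin).mean()


def train_step(images, labels):
    """images: (N, 3, 160, 160) aligned face crops; labels: (N,) identity ids.
    Each batch should contain several images per identity for the mining to work."""
    embeddings = F.normalize(model(images), dim=1)           # L2-normalised embeddings
    loss = batch_hard_triplet_loss(embeddings, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```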
Does anybody have a model pre-trained on an Asian face dataset? I used the model in this repo and the results are not good for Asian faces. :(