KaleidoZhouYN / Sphereface-Ms-celeb-1M

Face Recognition Using A-Softmax loss on a lightly cleaned Ms_celeb-1M dataset

how to train res20 with ms_1m data? #13

Open ctgushiwei opened 6 years ago

ctgushiwei commented 6 years ago

I have trained a model using the cleaned ms_1m data, which has 79056 identities. But when I use m=4 for finetuning, it fails, and when I use m=2, the model only reaches 99.1% on LFW. How can I improve the performance?

wjgaas commented 6 years ago

With the cleaned ms-celeb-1m list, I easily get 99.7% on LFW using sphereface-20. The full training consists of 4 stages, going from m=1 & lr=0.1 to m=4 & lr=0.00001.
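For anyone trying to reproduce this, here is a minimal sketch of such a staged schedule. Only the endpoints (m=1 with lr=0.1 and m=4 with lr=0.00001) come from the comment above; the intermediate margins and learning rates are assumptions for illustration.

```python
# Hypothetical 4-stage SphereFace training schedule; each stage finetunes
# from the previous stage's weights. Intermediate values are assumptions.
STAGES = [
    # (stage, margin m, base learning rate)
    (1, 1, 1e-1),   # from the comment: start near plain softmax
    (2, 2, 1e-2),   # assumed intermediate margin / lr
    (3, 4, 1e-3),   # assumed: switch to the full m=4 margin
    (4, 4, 1e-5),   # from the comment: final low-lr stage
]

for stage, m, lr in STAGES:
    print(f"stage {stage}: finetune previous weights with m={m}, base lr={lr}")
```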

ctgushiwei commented 6 years ago

@wjgaas could you tell us how to set lambda_min and the lr for each of the 4 stages? I have trained a model that reaches 99.5% on LFW using just m=2.

zhouyongxiu commented 6 years ago

@wjgaas Could you please share your training log? When I trained sphereface-20 on the VGGFace2 dataset, the best result on LFW came from step 2; steps 3 and 4 did not seem to improve it further.

wjgaas commented 6 years ago

@zhouyongxiu I get 99.58%, 99.67%, and 99.70% at step 2, step 3, and step 4 respectively.

wjgaas commented 6 years ago

@ctgushiwei lambda_min: step 1: 1000~5, step 2: 5~3, step 3: 3, step 4: 3
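For context, lambda_min is the lower bound on the annealed lambda that blends the plain softmax logit with the angular-margin logit. A minimal sketch of that annealing, assuming the usual schedule lambda = max(lambda_min, base * (1 + gamma * iter)^(-power)); the base, gamma, and power values here are illustrative defaults, not ones confirmed in this thread:

```python
def annealed_lambda(iteration, base=1000.0, gamma=0.12, power=1.0, lambda_min=5.0):
    """Decay lambda from `base` toward `lambda_min` as training progresses."""
    return max(lambda_min, base * (1.0 + gamma * iteration) ** (-power))

# The target-class logit is a blend of the softmax and margin terms:
#   f_y = (lambda * |x| * cos(theta_y) + |x| * psi(theta_y)) / (1 + lambda)
# so a large lambda_min in step 1 (1000~5) keeps training close to softmax,
# while lambda_min=3 in steps 3-4 lets the m=4 margin dominate.
print(annealed_lambda(0))      # 1000.0 at the start
print(annealed_lambda(100000)) # clamped to lambda_min late in training
```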

ctgushiwei commented 6 years ago

@wjgaas could you give us more details about your training method, such as the lr, batch size, and training data at each stage?

KaleidoZhouYN commented 6 years ago

Yes, m=4 is more robust on hard examples, but it is also harder to train. I recommend using m=4 with lambda_min=3 if you want to reach higher accuracy, but be aware that your training dataset should be cleaned.
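To illustrate why m=4 is harder to train than m=2: the A-Softmax margin function psi(theta) = (-1)^k cos(m*theta) - 2k (with theta in [k*pi/m, (k+1)*pi/m]) pushes the target logit down much more sharply for larger m. A small self-contained sketch; the 30-degree test angle is just an example:

```python
import math

def psi(theta, m):
    """A-Softmax margin: psi(theta) = (-1)^k * cos(m*theta) - 2k,
    with k chosen so that theta lies in [k*pi/m, (k+1)*pi/m]."""
    k = min(int(theta * m / math.pi), m - 1)  # clamp handles theta == pi
    return (-1) ** k * math.cos(m * theta) - 2 * k

# For the same angle, larger m yields a much smaller target logit,
# which enlarges the decision margin but makes optimization harder.
for m in (1, 2, 4):
    print(f"m={m}: psi(30 deg) = {psi(math.radians(30), m):.3f}")
# m=1: 0.866, m=2: 0.500, m=4: -0.500
```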