SeuTao closed this issue 6 years ago.
Thanks for sharing! I've tried your AMSoftmax with MSCeleb-1m on my own model. With the default parameter setting (m=0.35, s=30), I can get slightly better results than A-Softmax. For large datasets with many more identities, do you have any advice for tuning the parameters (m and s)?
If you have more identities, it is better to set a bigger s. I haven't done the experiment, but I guess 60 may be suitable for MSCeleb-1m. The m depends on the difficulty of the dataset; I have no idea yet what the best value for MSCeleb-1m is. Maybe you can only search for it...
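For reference, the loss whose parameters are being discussed can be sketched in NumPy. This is an illustrative re-implementation of AM-Softmax, not the repo's actual code: `s` scales the normalized cosine logits and `m` is the additive margin subtracted from the target-class cosine.

```python
import numpy as np

def am_softmax_loss(features, weights, labels, s=30.0, m=0.35):
    """AM-Softmax sketch: L2-normalize features and class weights,
    subtract margin m from the target cosine, scale by s, then
    take the standard cross-entropy over the scaled logits."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = f @ w                      # (batch, num_classes) cosines
    logits = s * cos
    rows = np.arange(len(labels))
    # additive margin on the ground-truth class only
    logits[rows, labels] = s * (cos[rows, labels] - m)
    # numerically stable log-softmax cross-entropy
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[rows, labels].mean()
```

A larger `s` sharpens the softmax distribution, which matters more as the number of classes grows; a larger `m` demands a bigger gap between the target cosine and the rest, so it should match the dataset's difficulty.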
Thanks again!!! I’ll try it.
Trained on MSCeleb-1m, no PCA, without removing the identities overlapping with LFW. Result: 0.238, 0.99650000, 0.00266667, 0.00433333. AMSoftmax is a very good loss function and easier to train. @happynear Thanks for your good work!
Using VGGFace2, I can get better results on LFW and a similar result on MegaFace.
In the same order as in the paper: 99.50% | 97.97% | 99.37% | 93.13% | 72.78% | 86.29%
I haven't done experiments on MSCeleb-1m because I still haven't obtained the list of identities overlapping between MSCeleb and LFW/FaceScrub.