bixiang opened this issue 6 years ago
I have no idea. Maybe you can try to update your pytorch/cuda/cudnn to the newest version.
I got the same result as bixiang. I'm using Python 2.7, PyTorch 0.2.0_3, cuDNN 6, and CUDA 8.0.
@clcarwin, what PyTorch environment did you use to get the reported result?
Update: same as bixiang, my LFW dataset was not the original official LFW dataset.
After fixing that, I got LFWACC=0.9915 std=0.0054 thd=0.3085, slightly lower than the 0.992 reported in your repo.
I also retrained the model from scratch and finally got the training result Loss=2.0312 | AccT=93.5304%. I found that with your default hyperparameters there is no improvement from training for more epochs. BTW, the corresponding eval result is 'LFWACC=0.9913 std=0.0048 thd=0.2925'.
Could you give me some advice on the training process? @clcarwin
How is your work going recently? @bixiang
@hzshuai I got the right result now. The reason for my low LFWACC was that my test dataset had already been aligned, so running clcarwin's test code aligned the test data a second time!
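To make the double-alignment pitfall concrete, here is a minimal sketch: if the dataset on disk is already aligned, the evaluation pipeline should skip its own alignment step. The function and flag names here are illustrative assumptions, not identifiers from clcarwin's repo.

```python
# Hedged sketch of guarding against aligning test images twice.
# `align_fn` stands in for whatever landmark-based alignment the
# evaluation script applies before feature extraction.

def prepare_face(img, align_fn, already_aligned=False):
    """Apply face alignment only if the input has not been aligned yet.

    Aligning a pre-aligned crop a second time warps the face and
    degrades LFW accuracy, which matches the symptom in this thread.
    """
    if already_aligned:
        return img  # dataset was pre-aligned offline; pass through as-is
    return align_fn(img)
```

In practice this would be a command-line flag or config option on the evaluation script, set according to which LFW package was downloaded.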
@hzshuai About this model, I am doing some other work and would like to discuss it with others. I'm from ISCAS. My WeChat is 631885006.
I used your pre-trained model 'sphere20a_20171020.pth' and the result is 'LFWACC=0.5002 std=0.0005 thd=0.7655'. I'm using Python 2.7, PyTorch 0.3.0.post4, cuDNN 6, and CUDA 8.0, and I downloaded the LFW dataset from http://vis-www.cs.umass.edu/lfw/lfw.tgz. Can anyone help me?
@clcarwin After testing, it seems PyTorch 0.3.0 is not yet supported; with PyTorch 0.2.0 I get the proper result.
@hzshuai I got exactly the same test result as you (LFWACC=0.9915 std=0.0054 thd=0.3085). Did you figure out why it is lower than the reported accuracy? Looking forward to your reply, many thanks.
I used your pre-trained model 'sphere20a_20171020.pth' and the result is 'LFWACC=0.6402 std=0.0146 thd=0.4775'. I'm so confused.