wy1iu / sphereface

Implementation for <SphereFace: Deep Hypersphere Embedding for Face Recognition> in CVPR'17.
MIT License
1.58k stars 543 forks

Training Can't Converge. #30

Open belleoct opened 7 years ago

belleoct commented 7 years ago

Hi, I followed your instructions to train on the CASIA-WebFace dataset, but the training loss stays around 9.3 from the very beginning. Could you please give some possible advice on how to solve this problem? Here is part of my training log.

train log

I0907 20:56:08.035786 10605 caffe.cpp:218] Using GPUs 0
I0907 20:56:08.148684 10605 caffe.cpp:223] GPU 0: Tesla P40
I0907 20:56:08.807466 10605 solver.cpp:44] Initializing solver from parameters:
base_lr: 0.01
display: 100
max_iter: 28000
lr_policy: "multistep"
gamma: 0.1
momentum: 0.9
weight_decay: 0.0005
snapshot: 1000
snapshot_prefix: "result/sphereface_model_maxpy_clean/ckpt0907_2"
solver_mode: GPU
device_id: 0
net: "code/sphereface_model_maxpy_clean.prototxt"
train_state { level: 0 stage: "" }
stepvalue: 16000
stepvalue: 24000
stepvalue: 28000
I0907 20:56:08.812855 10605 solver.cpp:87] Creating training net from net file: code/sphereface_model_maxpy_clean.prototxt
I0907 20:56:08.815649 10605 net.cpp:51] Initializing net from parameters:
name: "SpherefaceNet-20"

skip the model part

I0907 20:56:11.030130 10605 solver.cpp:218] Iteration 0 (-1.9811e-31 iter/s, 0.203529s/100 iters), loss = 9.20592
I0907 20:56:11.030190 10605 solver.cpp:237] Train net output #0: lambda = 892.857
I0907 20:56:11.030210 10605 solver.cpp:237] Train net output #1: softmax_loss = 9.20592 (* 1 = 9.20592 loss)
I0907 20:56:11.030252 10605 sgd_solver.cpp:105] Iteration 0, lr = 0.01
I0907 20:56:11.079816 10605 blocking_queue.cpp:49] Waiting for data
I0907 20:57:30.478466 10605 solver.cpp:218] Iteration 100 (1.25873 iter/s, 79.4451s/100 iters), loss = 9.33074
I0907 20:57:30.478658 10605 solver.cpp:237] Train net output #0: lambda = 76.2195
I0907 20:57:30.478673 10605 solver.cpp:237] Train net output #1: softmax_loss = 9.33074 (* 1 = 9.33074 loss)
I0907 20:57:30.478684 10605 sgd_solver.cpp:105] Iteration 100, lr = 0.01
I0907 20:58:46.554661 10605 solver.cpp:218] Iteration 200 (1.31453 iter/s, 76.073s/100 iters), loss = 9.30099
I0907 20:58:46.554803 10605 solver.cpp:237] Train net output #0: lambda = 39.8089
I0907 20:58:46.554821 10605 solver.cpp:237] Train net output #1: softmax_loss = 9.30099 (* 1 = 9.30099 loss)
I0907 20:58:46.554834 10605 sgd_solver.cpp:105] Iteration 200, lr = 0.01
I0907 21:00:32.839994 10605 solver.cpp:218] Iteration 300 (0.940902 iter/s, 106.281s/100 iters), loss = 9.41046
I0907 21:00:32.840147 10605 solver.cpp:237] Train net output #0: lambda = 26.9397
I0907 21:00:32.840165 10605 solver.cpp:237] Train net output #1: softmax_loss = 9.41046 (* 1 = 9.41046 loss)
I0907 21:00:32.840178 10605 sgd_solver.cpp:105] Iteration 300, lr = 0.01
I0907 21:02:01.073664 10605 solver.cpp:218] Iteration 400 (1.1334 iter/s, 88.23s/100 iters), loss = 9.35314
I0907 21:02:01.073825 10605 solver.cpp:237] Train net output #0: lambda = 20.3583
I0907 21:02:01.073848 10605 solver.cpp:237] Train net output #1: softmax_loss = 9.35314 (* 1 = 9.35314 loss)
I0907 21:02:01.073860 10605 sgd_solver.cpp:105] Iteration 400, lr = 0.01
I0907 21:03:35.687438 10605 solver.cpp:218] Iteration 500 (1.05697 iter/s, 94.6099s/100 iters), loss = 9.42017
I0907 21:03:35.687583 10605 solver.cpp:237] Train net output #0: lambda = 16.3613
I0907 21:03:35.687602 10605 solver.cpp:237] Train net output #1: softmax_loss = 9.42017 (* 1 = 9.42017 loss)
I0907 21:03:35.687613 10605 sgd_solver.cpp:105] Iteration 500, lr = 0.01
I0907 21:05:06.772781 10605 solver.cpp:218] Iteration 600 (1.09792 iter/s, 91.0816s/100 iters), loss = 9.3796
I0907 21:05:06.772953 10605 solver.cpp:237] Train net output #0: lambda = 13.6761
I0907 21:05:06.772974 10605 solver.cpp:237] Train net output #1: softmax_loss = 9.3796 (* 1 = 9.3796 loss)
I0907 21:05:06.772987 10605 sgd_solver.cpp:105] Iteration 600, lr = 0.01
I0907 21:06:23.862064 10605 solver.cpp:218] Iteration 700 (1.29725 iter/s, 77.086s/100 iters), loss = 9.37048
I0907 21:06:23.862254 10605 solver.cpp:237] Train net output #0: lambda = 11.7481
I0907 21:06:23.862272 10605 solver.cpp:237] Train net output #1: softmax_loss = 9.37048 (* 1 = 9.37048 loss)

P.S. I've read issue #14 and issue #7, but neither provides a feasible solution. Could you please share your solver settings and the lambda-related parameters (lambda_min, base, and gamma) used for training? Thx a lot.
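For context on the ~9.3 plateau: with C classes, an untrained softmax classifier assigns the true class a probability of about 1/C, so its cross-entropy loss starts near ln(C). A quick sanity check (a sketch, using the 10572 CASIA-WebFace identities configured in the fc6 layer posted later in this thread):

```python
import math

# An untrained C-way softmax assigns roughly probability 1/C to the true class,
# so its cross-entropy loss starts near ln(C). With 10572 identities that is
# ~9.27 -- essentially the ~9.3 the log plateaus at, i.e. the network never
# moves past random guessing.
num_classes = 10572
chance_loss = math.log(num_classes)
print(f"chance-level softmax loss: {chance_loss:.4f}")
```

A loss hovering at this value for hundreds of iterations means the network is stuck at chance level (not diverging), which points at the training setup rather than numerical blow-up.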

ydwen commented 7 years ago

Hi belleoct, from my observation you did NOT strictly follow our instructions. Please remove your modifications and re-run. Thanks!

belleoct commented 7 years ago

@ydwen Could you please be more specific? The only differences I can think of are that I used one GPU and modified base_lr to 0.01. I lowered the lr to 0.01 because I met the same problem as in issue #14: the softmax_loss stays at 87.3365 even after many iterations. Any advice? Thx!
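As an aside, the 87.3365 value mentioned from issue #14 is itself diagnostic. Caffe's SoftmaxWithLoss clamps the predicted probability at FLT_MIN before taking the log, so a loss pinned at exactly 87.3365 = -ln(FLT_MIN) means the true-class probability underflowed to zero, i.e. the run diverged (typically a too-high learning rate) rather than stalled. A sketch of the arithmetic:

```python
import math

# Caffe clamps p to FLT_MIN (smallest normal float32) inside -log(p), so a
# loss stuck at 87.3365 means the true-class probability underflowed to zero:
# the run diverged rather than merely stalled at chance level.
FLT_MIN = 1.17549435e-38
capped_loss = -math.log(FLT_MIN)
print(f"{capped_loss:.4f}")  # 87.3365
```

So the two symptoms in this thread are different failure modes: 87.3365 is divergence, while ~9.3 is a network still guessing uniformly.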

ydwen commented 7 years ago

From the limited clues, I suggest you try 2 GPUs with a learning rate of 0.1.

belleoct commented 7 years ago

@ydwen Hi, I've tried your suggestion: I changed the lr to 0.1 and used two GPUs to train the model. But the problem isn't solved; the softmax loss is still stuck at around 9.3. Here is my training log.

I0911 15:35:15.479154 4385 caffe.cpp:218] Using GPUs 2, 3
I0911 15:35:15.757864 4385 caffe.cpp:223] GPU 2: Tesla P40
I0911 15:35:15.758514 4385 caffe.cpp:223] GPU 3: Tesla P40
I0911 15:35:16.669713 4385 solver.cpp:44] Initializing solver from parameters:
base_lr: 0.1
display: 100
max_iter: 28000
lr_policy: "multistep"
gamma: 0.1
momentum: 0.9
weight_decay: 0.0005
snapshot: 10000
snapshot_prefix: "result/sphereface_model_maxpy_clean/0911clean"
solver_mode: GPU
device_id: 2
net: "code/sphereface_model_maxpy_clean.prototxt"
train_state { level: 0 stage: "" }
stepvalue: 16000
stepvalue: 24000
stepvalue: 28000
I0911 15:35:16.676537 4385 solver.cpp:87] Creating training net from net file: code/sphereface_model_maxpy_clean.prototxt
I0911 15:35:16.880503 4385 net.cpp:51] Initializing net from parameters:
name: "SpherefaceNet-20"
state { phase: TRAIN level: 0 stage: "" }
layer {
  name: "data"
  type: "ImageData"
  top: "data"
  top: "label"
  transform_param { scale: 0.0078125 mirror: true mean_value: 127.5 mean_value: 127.5 mean_value: 127.5 }
  image_data_param {
    source: "/home/zhangyubelle_i/Desktop/sphereface/preprocess/data/preprocessed_CASIA-WebFace.txt"
    batch_size: 128
    shuffle: true
  }
}
......
layer {
  name: "fc6"
  type: "MarginInnerProduct"
  bottom: "fc5"
  bottom: "label"
  top: "fc6"
  top: "lambda"
  param { lr_mult: 1 decay_mult: 1 }
  margin_inner_product_param {
    num_output: 10572
    type: QUADRUPLE
    weight_filler { type: "xavier" }
    base: 1000
    gamma: 0.15
    power: 1
    iteration: 0
    lambda_min: 5
  }
}
layer {
  name: "softmax_loss"
  type: "SoftmaxWithLoss"
  bottom: "fc6"
  bottom: "label"
  top: "softmax_loss"
}
......
I0911 15:35:19.226610 4385 solver.cpp:56] Solver scaffolding done.
I0911 15:35:19.229393 4385 caffe.cpp:248] Starting Optimization
I0911 15:35:19.852618 4390 image_data_layer.cpp:38] Opening file /home/zhangyubelle_i/Desktop/sphereface/preprocess/data/preprocessed_CASIA-WebFace.txt
I0911 15:35:20.064330 4390 image_data_layer.cpp:53] Shuffling data
I0911 15:35:20.174933 4390 image_data_layer.cpp:63] A total of 491277 images.
I0911 15:35:20.179183 4390 image_data_layer.cpp:90] output data size: 128,3,112,96
I0911 15:35:21.359104 4385 solver.cpp:272] Solving SpherefaceNet-20
I0911 15:35:21.359139 4385 solver.cpp:273] Learning Rate Policy: multistep
I0911 15:35:21.359326 4390 blocking_queue.cpp:49] Waiting for data
I0911 15:35:21.980854 4385 solver.cpp:218] Iteration 0 (-7.53116e+27 iter/s, 0.609721s/100 iters), loss = 9.30027
I0911 15:35:21.980906 4385 solver.cpp:237] Train net output #0: lambda = 869.565
I0911 15:35:21.980917 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.30027 (* 1 = 9.30027 loss)
I0911 15:35:21.980929 4385 sgd_solver.cpp:105] Iteration 0, lr = 0.1
I0911 15:37:24.752915 4385 solver.cpp:218] Iteration 100 (0.81455 iter/s, 122.767s/100 iters), loss = 9.34323
I0911 15:37:24.753118 4385 solver.cpp:237] Train net output #0: lambda = 61.9195
I0911 15:37:24.753131 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.34323 (* 1 = 9.34323 loss)
I0911 15:37:24.753142 4385 sgd_solver.cpp:105] Iteration 100, lr = 0.1
I0911 15:39:45.001121 4385 solver.cpp:218] Iteration 200 (0.713051 iter/s, 140.242s/100 iters), loss = 9.30479
I0911 15:39:45.001214 4385 solver.cpp:237] Train net output #0: lambda = 32.1027
I0911 15:39:45.001230 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.30479 (* 1 = 9.30479 loss)
I0911 15:39:45.001242 4385 sgd_solver.cpp:105] Iteration 200, lr = 0.1
I0911 15:41:11.225064 4385 solver.cpp:218] Iteration 300 (1.15982 iter/s, 86.2204s/100 iters), loss = 9.40264
I0911 15:41:11.225152 4385 solver.cpp:237] Train net output #0: lambda = 21.6685
I0911 15:41:11.225167 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.40264 (* 1 = 9.40264 loss)
I0911 15:41:11.225179 4385 sgd_solver.cpp:105] Iteration 300, lr = 0.1
I0911 15:42:25.334173 4385 solver.cpp:218] Iteration 400 (1.34942 iter/s, 74.106s/100 iters), loss = 9.31309
I0911 15:42:25.334270 4385 solver.cpp:237] Train net output #0: lambda = 16.3532
I0911 15:42:25.334285 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.31309 (* 1 = 9.31309 loss)
I0911 15:42:25.334297 4385 sgd_solver.cpp:105] Iteration 400, lr = 0.1
I0911 15:43:35.368954 4385 solver.cpp:218] Iteration 500 (1.42792 iter/s, 70.0319s/100 iters), loss = 9.37931
I0911 15:43:35.369074 4385 solver.cpp:237] Train net output #0: lambda = 13.132
I0911 15:43:35.369092 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.37931 (* 1 = 9.37931 loss)
I0911 15:43:35.369105 4385 sgd_solver.cpp:105] Iteration 500, lr = 0.1
I0911 15:44:38.979123 4385 solver.cpp:218] Iteration 600 (1.57214 iter/s, 63.6075s/100 iters), loss = 9.48879
I0911 15:44:38.979224 4385 solver.cpp:237] Train net output #0: lambda = 10.9709
I0911 15:44:38.979239 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.48879 (* 1 = 9.48879 loss)
I0911 15:44:38.979252 4385 sgd_solver.cpp:105] Iteration 600, lr = 0.1
I0911 15:45:40.613428 4385 solver.cpp:218] Iteration 700 (1.62254 iter/s, 61.6317s/100 iters), loss = 9.41956
I0911 15:45:40.613536 4385 solver.cpp:237] Train net output #0: lambda = 9.42063
I0911 15:45:40.613553 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.41956 (* 1 = 9.41956 loss)
I0911 15:45:40.613565 4385 sgd_solver.cpp:105] Iteration 700, lr = 0.1
I0911 15:46:39.775218 4385 solver.cpp:218] Iteration 800 (1.69035 iter/s, 59.1593s/100 iters), loss = 9.47148
I0911 15:46:39.775308 4385 solver.cpp:237] Train net output #0: lambda = 8.25423
I0911 15:46:39.775321 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.47148 (* 1 = 9.47148 loss)
I0911 15:46:39.775334 4385 sgd_solver.cpp:105] Iteration 800, lr = 0.1
I0911 15:47:40.728682 4385 solver.cpp:218] Iteration 900 (1.64067 iter/s, 60.9509s/100 iters), loss = 9.40742
I0911 15:47:40.728765 4385 solver.cpp:237] Train net output #0: lambda = 7.34484
I0911 15:47:40.728782 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.40742 (* 1 = 9.40742 loss)
I0911 15:47:40.728796 4385 sgd_solver.cpp:105] Iteration 900, lr = 0.1
I0911 15:48:36.854068 4390 blocking_queue.cpp:49] Waiting for data
I0911 15:48:42.821414 4385 solver.cpp:218] Iteration 1000 (1.61056 iter/s, 62.0901s/100 iters), loss = 9.40826
I0911 15:48:42.821473 4385 solver.cpp:237] Train net output #0: lambda = 6.61594
I0911 15:48:42.821490 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.40826 (* 1 = 9.40826 loss)
I0911 15:48:42.821503 4385 sgd_solver.cpp:105] Iteration 1000, lr = 0.1
I0911 15:49:45.064177 4385 solver.cpp:218] Iteration 1100 (1.60668 iter/s, 62.2402s/100 iters), loss = 9.48529
I0911 15:49:45.064288 4385 solver.cpp:237] Train net output #0: lambda = 6.01866
I0911 15:49:45.064301 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.48529 (* 1 = 9.48529 loss)
I0911 15:49:45.064316 4385 sgd_solver.cpp:105] Iteration 1100, lr = 0.1
I0911 15:50:43.845634 4385 solver.cpp:218] Iteration 1200 (1.70129 iter/s, 58.779s/100 iters), loss = 9.45689
I0911 15:50:43.845721 4385 solver.cpp:237] Train net output #0: lambda = 5.52029
I0911 15:50:43.845734 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.45689 (* 1 = 9.45689 loss)
I0911 15:50:43.845754 4385 sgd_solver.cpp:105] Iteration 1200, lr = 0.1
I0911 15:51:40.145510 4385 solver.cpp:218] Iteration 1300 (1.77628 iter/s, 56.2975s/100 iters), loss = 9.51089
I0911 15:51:40.145606 4385 solver.cpp:237] Train net output #0: lambda = 5.09814
I0911 15:51:40.145623 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.51089 (* 1 = 9.51089 loss)
I0911 15:51:40.145639 4385 sgd_solver.cpp:105] Iteration 1300, lr = 0.1
I0911 15:52:34.460177 4385 solver.cpp:218] Iteration 1400 (1.8412 iter/s, 54.3124s/100 iters), loss = 9.45659
I0911 15:52:34.460285 4385 solver.cpp:237] Train net output #0: lambda = 5
I0911 15:52:34.460302 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.45659 (* 1 = 9.45659 loss)
I0911 15:52:34.460317 4385 sgd_solver.cpp:105] Iteration 1400, lr = 0.1
I0911 15:53:32.810554 4385 solver.cpp:218] Iteration 1500 (1.71386 iter/s, 58.3479s/100 iters), loss = 9.53624
I0911 15:53:32.810647 4385 solver.cpp:237] Train net output #0: lambda = 5
I0911 15:53:32.810662 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.53624 (* 1 = 9.53624 loss)
I0911 15:53:32.810674 4385 sgd_solver.cpp:105] Iteration 1500, lr = 0.1
I0911 15:54:24.762370 4385 solver.cpp:218] Iteration 1600 (1.92494 iter/s, 51.9496s/100 iters), loss = 9.60644
I0911 15:54:24.762482 4385 solver.cpp:237] Train net output #0: lambda = 5
I0911 15:54:24.762496 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.60644 (* 1 = 9.60644 loss)
I0911 15:54:24.762511 4385 sgd_solver.cpp:105] Iteration 1600, lr = 0.1
I0911 15:55:18.906551 4385 solver.cpp:218] Iteration 1700 (1.847 iter/s, 54.1419s/100 iters), loss = 9.57607
I0911 15:55:18.906657 4385 solver.cpp:237] Train net output #0: lambda = 5
I0911 15:55:18.906672 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.57607 (* 1 = 9.57607 loss)
I0911 15:55:18.906685 4385 sgd_solver.cpp:105] Iteration 1700, lr = 0.1
I0911 15:56:19.890298 4385 solver.cpp:218] Iteration 1800 (1.63985 iter/s, 60.9812s/100 iters), loss = 9.32032
I0911 15:56:19.890413 4385 solver.cpp:237] Train net output #0: lambda = 5
I0911 15:56:19.890435 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.32032 (* 1 = 9.32032 loss)
I0911 15:56:19.890450 4385 sgd_solver.cpp:105] Iteration 1800, lr = 0.1
I0911 15:58:04.229815 4385 solver.cpp:218] Iteration 1900 (0.958449 iter/s, 104.335s/100 iters), loss = 9.59339
I0911 15:58:04.229905 4385 solver.cpp:237] Train net output #0: lambda = 5
I0911 15:58:04.229919 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.59339 (* 1 = 9.59339 loss)
I0911 15:58:04.229933 4385 sgd_solver.cpp:105] Iteration 1900, lr = 0.1
I0911 15:59:00.448514 4390 blocking_queue.cpp:49] Waiting for data
I0911 15:59:07.764297 4385 solver.cpp:218] Iteration 2000 (1.57401 iter/s, 63.5318s/100 iters), loss = 9.58063
I0911 15:59:07.764348 4385 solver.cpp:237] Train net output #0: lambda = 5
I0911 15:59:07.764364 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.58063 (* 1 = 9.58063 loss)
I0911 15:59:07.764376 4385 sgd_solver.cpp:105] Iteration 2000, lr = 0.1
I0911 16:00:14.480545 4385 solver.cpp:218] Iteration 2100 (1.49895 iter/s, 66.7135s/100 iters), loss = 9.29204
I0911 16:00:14.480648 4385 solver.cpp:237] Train net output #0: lambda = 5
I0911 16:00:14.480661 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.29204 (* 1 = 9.29204 loss)
I0911 16:00:14.480674 4385 sgd_solver.cpp:105] Iteration 2100, lr = 0.1
I0911 16:01:20.100481 4385 solver.cpp:218] Iteration 2200 (1.52399 iter/s, 65.6172s/100 iters), loss = 9.44032
I0911 16:01:20.100570 4385 solver.cpp:237] Train net output #0: lambda = 5
I0911 16:01:20.100584 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.44032 (* 1 = 9.44032 loss)
I0911 16:01:20.100595 4385 sgd_solver.cpp:105] Iteration 2200, lr = 0.1
I0911 16:02:17.757122 4385 solver.cpp:218] Iteration 2300 (1.73448 iter/s, 57.6542s/100 iters), loss = 9.4312
I0911 16:02:17.757239 4385 solver.cpp:237] Train net output #0: lambda = 5
I0911 16:02:17.757253 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.4312 (* 1 = 9.4312 loss)
I0911 16:02:17.757266 4385 sgd_solver.cpp:105] Iteration 2300, lr = 0.1
I0911 16:03:08.928268 4385 solver.cpp:218] Iteration 2400 (1.95431 iter/s, 51.1689s/100 iters), loss = 9.51902
I0911 16:03:08.928356 4385 solver.cpp:237] Train net output #0: lambda = 5
I0911 16:03:08.928370 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.51902 (* 1 = 9.51902 loss)
I0911 16:03:08.928383 4385 sgd_solver.cpp:105] Iteration 2400, lr = 0.1
I0911 16:03:59.680601 4385 solver.cpp:218] Iteration 2500 (1.97044 iter/s, 50.7502s/100 iters), loss = 9.34781
I0911 16:03:59.680691 4385 solver.cpp:237] Train net output #0: lambda = 5
I0911 16:03:59.680707 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.34781 (* 1 = 9.34781 loss)
I0911 16:03:59.680722 4385 sgd_solver.cpp:105] Iteration 2500, lr = 0.1
I0911 16:04:59.077188 4385 solver.cpp:218] Iteration 2600 (1.68367 iter/s, 59.3941s/100 iters), loss = 9.35335
I0911 16:04:59.077302 4385 solver.cpp:237] Train net output #0: lambda = 5
I0911 16:04:59.077317 4385 solver.cpp:237] Train net output #1: softmax_loss = 9.35335 (* 1 = 9.35335 loss)
I0911 16:04:59.077332 4385 sgd_solver.cpp:105] Iteration 2600, lr = 0.1

Any ideas? Thx!
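One thing worth reading off the log above is the lambda annealing itself. Based on the margin_inner_product_param in the posted prototxt (base: 1000, gamma: 0.15, power: 1, lambda_min: 5), the layer appears to follow lambda = max(lambda_min, base * (1 + gamma * iter)^(-power)). This reconstruction is an assumption, but it reproduces the values printed in the log:

```python
# Sketch of the lambda annealing schedule implied by the prototxt in this
# thread (base: 1000, gamma: 0.15, power: 1, lambda_min: 5). The formula is a
# reconstruction, not the author's code, but it matches the logged values.
def annealed_lambda(it, base=1000.0, gamma=0.15, power=1.0, lambda_min=5.0):
    return max(lambda_min, base * (1.0 + gamma * it) ** (-power))

# The log prints lambda after the layer's internal counter has already
# incremented, so "Iteration 100" shows the value for it = 101:
print(round(annealed_lambda(101), 4))  # 61.9195, matches the log
print(annealed_lambda(1401))           # 5.0, clamped by lambda_min
```

Since lambda weights the plain softmax term against the angular-margin term, the loss is expected to start moving once lambda settles at lambda_min (around iteration 1400 here); if it still sits at ~9.3 well after that, as in this log, the problem lies elsewhere (data, learning rate, or batch size).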

sweatdream commented 6 years ago

@belleoct Have you solved this problem? I've met the same problem as you. Thx

xhzcyc commented 6 years ago

Did you solve your problem? Thanks @belleoct

belleoct commented 6 years ago

@xhzcyc @sweatdream I changed the batch size to 128 and used multiple GPUs. The softmax loss started to converge after lambda became steady.
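For readers landing here: in Caffe's multi-GPU data parallelism each GPU runs its own batch of the prototxt's batch_size and the gradients are averaged, so the effective batch size scales with the GPU count. A sketch of the arithmetic behind the fix above (variable names are illustrative):

```python
# Illustrative arithmetic: Caffe runs batch_size per GPU under data
# parallelism, so batch_size: 128 on 2 GPUs gives an effective batch of 256.
# A larger effective batch smooths the noisy early gradients of the
# angular-margin loss, which is plausibly why multi-GPU training converged.
batch_size_per_gpu = 128
num_gpus = 2
effective_batch = batch_size_per_gpu * num_gpus
print(effective_batch)  # 256
```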

zjz5250 commented 6 years ago

@belleoct My training cannot converge, and I only use one GPU. Do I have to use multiple GPUs?