wy1iu / LargeMargin_Softmax_Loss

Implementation for "Large-Margin Softmax Loss for Convolutional Neural Networks" (ICML'16).

train_accuracy decreases? #18

Closed moyans closed 7 years ago

moyans commented 7 years ago

I0630 11:22:50.776134 23843 solver.cpp:337] Iteration 4000, Testing net (#0)
I0630 11:22:54.062695 23843 solver.cpp:404] Test net output #0: accuracy = 1
I0630 11:22:54.062719 23843 solver.cpp:404] Test net output #1: lambda = 60.444
I0630 11:22:54.062727 23843 solver.cpp:404] Test net output #2: loss = 1.59741e-09 (* 1 = 1.59741e-09 loss)
I0630 11:22:54.148296 23843 solver.cpp:225] Iteration 4000 (8.7286 iter/s, 11.4566s/100 iters), loss = 0.327719
I0630 11:22:54.148321 23843 solver.cpp:244] Train net output #0: accuracy = 0.9375
I0630 11:22:54.148326 23843 solver.cpp:244] Train net output #1: lambda = 1.37052
I0630 11:22:54.148346 23843 solver.cpp:244] Train net output #2: loss = 0.327719 (* 1 = 0.327719 loss)
I0630 11:22:54.148352 23843 sgd_solver.cpp:137] Iteration 4000, lr = 0.0001
I0630 11:23:02.281814 23843 solver.cpp:225] Iteration 4100 (12.295 iter/s, 8.13342s/100 iters), loss = 0.0623774
I0630 11:23:02.281839 23843 solver.cpp:244] Train net output #0: accuracy = 0.984375
I0630 11:23:02.281844 23843 solver.cpp:244] Train net output #1: lambda = 1.23743
I0630 11:23:02.281863 23843 solver.cpp:244] Train net output #2: loss = 0.0623773 (* 1 = 0.0623773 loss)

.....

I0630 11:23:59.213723 23843 solver.cpp:225] Iteration 4800 (12.2984 iter/s, 8.13116s/100 iters), loss = 1.32174
I0630 11:23:59.213876 23843 solver.cpp:244] Train net output #0: accuracy = 0.585938
I0630 11:23:59.213886 23843 solver.cpp:244] Train net output #1: lambda = 0.609189
I0630 11:23:59.213891 23843 solver.cpp:244] Train net output #2: loss = 1.32174 (* 1 = 1.32174 loss)
I0630 11:23:59.213896 23843 sgd_solver.cpp:137] Iteration 4800, lr = 0.0001
I0630 11:24:07.347139 23843 solver.cpp:225] Iteration 4900 (12.2953 iter/s, 8.13318s/100 iters), loss = 1.92986
I0630 11:24:07.347164 23843 solver.cpp:244] Train net output #0: accuracy = 0.429688
I0630 11:24:07.347169 23843 solver.cpp:244] Train net output #1: lambda = 0.551035
I0630 11:24:07.347175 23843 solver.cpp:244] Train net output #2: loss = 1.92986 (* 1 = 1.92986 loss)

......

I0630 11:24:59.453804 23843 solver.cpp:225] Iteration 5500 (12.2901 iter/s, 8.13665s/100 iters), loss = 2.45443
I0630 11:24:59.453843 23843 solver.cpp:244] Train net output #0: accuracy = 0.0078125
I0630 11:24:59.453848 23843 solver.cpp:244] Train net output #1: lambda = 0.30322
I0630 11:24:59.453868 23843 solver.cpp:244] Train net output #2: loss = 2.45443 (* 1 = 2.45443 loss)
I0630 11:24:59.453873 23843 sgd_solver.cpp:137] Iteration 5500, lr = 0.0001
I0630 11:25:07.590095 23843 solver.cpp:225] Iteration 5600 (12.2908 iter/s, 8.13617s/100 iters), loss = 2.40135
I0630 11:25:07.590245 23843 solver.cpp:244] Train net output #0: accuracy = 0
I0630 11:25:07.590270 23843 solver.cpp:244] Train net output #1: lambda = 0.274696
I0630 11:25:07.590275 23843 solver.cpp:244] Train net output #2: loss = 2.40135 (* 1 = 2.40135 loss)

layer {
  name: "fc10"
  type: "LargeMarginInnerProduct"
  bottom: "person"
  bottom: "label"
  top: "fc10"
  top: "lambda"
  param {
    name: "ip2"
    lr_mult: 1
  }
  largemargin_inner_product_param {
    num_output: 10
    type: TRIPLE
    weight_filler {
      type: "xavier"
    }
    base: 100
    gamma: 2.5e-05
    power: 45
    iteration: 0
    lambda_min: 0
  }
}
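For context on the lambda values in the logs above: judging by the base/gamma/power parameters in this config and the logged lambda, the layer appears to anneal the mixing weight with an inverse-decay schedule, lambda(t) = max(lambda_min, base * (1 + gamma * t)^(-power)). A minimal sketch of that assumed schedule (the function name is mine, not the repo's):

```python
def annealed_lambda(iteration, base=100.0, gamma=2.5e-5, power=45.0, lambda_min=0.0):
    """Assumed inverse-decay schedule for the margin mixing weight lambda,
    inferred from the logged values (not copied from the repo's source)."""
    return max(lambda_min, base * (1.0 + gamma * iteration) ** (-power))

# With the config above (base=100, gamma=2.5e-5, power=45, lambda_min=0),
# this reproduces the logged values to ~0.1%: lambda is roughly 1.37 at
# iteration 4000 and roughly 0.275 at iteration 5600.
for it in (0, 4000, 4800, 5600):
    print(it, round(annealed_lambda(it), 5))
```

Since lambda_min is 0 here, lambda keeps shrinking and the large-margin term increasingly dominates the logits, which may be why the collapse in the logs coincides with lambda falling below about 0.6.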

moyans commented 7 years ago

@wy1iu My task is very simple, so the network converges quickly, but I want to increase the margin between classes. However, the training loss has collapsed.