wy1iu / LargeMargin_Softmax_Loss

Implementation for "Large-Margin Softmax Loss for Convolutional Neural Networks" in ICML'16.

Test the L-Softmax loss on my dataset #8

Closed xqpinitial closed 7 years ago

xqpinitial commented 7 years ago

################## train ##################
layer {
  name: "fc8_2"
  type: "LargeMarginInnerProduct"
  bottom: "fc7"
  bottom: "label"
  top: "fc8"
  top: "lambda"
  param {
    name: "fc8"
    lr_mult: 10
  }
  largemargin_inner_product_param {
    num_output: 101
    type: DOUBLE
    base: 1000
    gamma: 0.00002
    power: 45
    iteration: 0
    weight_filler {
      type: "msra"
    }
  }
  include {
    phase: TRAIN
  }
}
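(For reference, and if I read the layer code correctly, the lambda reported in the log is annealed from base toward 0 as

  lambda = base * (1 + gamma * iteration)^(-power)

so with base: 1000, gamma: 0.00002, power: 45 it starts near 1000 and decays as training proceeds, which matches the lambda values printed below. While lambda is large the layer behaves almost like ordinary softmax; the margin only takes full effect once lambda becomes small.)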

Test net output #1: lambda = 0
Test net output #2: loss = 87.2935 (* 1 = 87.2935 loss)
Iteration 0, loss = 22.7132
Train net output #0: lambda = 996.407
Train net output #1: loss = 26.682 (* 1 = 26.682 loss)
Iteration 0, lr = 5e-05
Iteration 50, loss = 5.74099
Train net output #0: lambda = 832.577
Train net output #1: loss = 5.74778 (* 1 = 5.74778 loss)
Iteration 50, lr = 5e-05
Iteration 100, loss = 5.24638
Train net output #0: lambda = 696.185
Train net output #1: loss = 5.34603 (* 1 = 5.34603 loss)
Iteration 100, lr = 5e-05
Iteration 150, loss = 5.04773
Train net output #0: lambda = 582.55
Train net output #1: loss = 5.15273 (* 1 = 5.15273 loss)
Iteration 150, lr = 5e-05

I am fine-tuning from VGG on my new dataset, but the loss is very large, and I have already tried different learning rates. From your experience, what should I do? Thanks.

xqpinitial commented 7 years ago

I changed base: 1000 -> 10000 and gamma: 0.00002 -> 0.00005, and it works.
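For anyone hitting the same problem, the corresponding change in the train prototxt above would look roughly like this (only base and gamma differ from the original snippet; a ten times larger base keeps lambda high for longer at the start of fine-tuning, so the layer presumably stays closer to plain softmax while the network warms up):

  largemargin_inner_product_param {
    num_output: 101
    type: DOUBLE
    base: 10000
    gamma: 0.00005
    power: 45
    iteration: 0
    weight_filler {
      type: "msra"
    }
  }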