iamhankai / attribute-aware-attention

[ACM MM 2018] Attribute-Aware Attention Model for Fine-grained Representation Learning
https://arxiv.org/abs/1901.00392

After epoch 11/50, Main acc only about 0.02 #17

Closed: EpilogueCc closed this issue 4 years ago

EpilogueCc commented 5 years ago

I trained with both VGG16 and ResNet50 backbones, but after training the main acc of both is only about 0.02. How can I figure out this problem?

EpilogueCc commented 5 years ago

Epoch 17/50

val-loss: [6.9950135724479505, 5.028487854017074, 1.764361070422086, 2.220055566066127, 2.261857317446181, 2.087330848050603, 1.3224303419111842, 2.2442183868202785, 1.8066966793723958, 2.1630669802602176, 2.2986507000164367, 2.109208341339435, 2.088975465807949, 1.132305346352836, 0.98253339843829, 2.303563471535053, 2.186547696775595, 2.3308233599190387, 2.0905975601695346, 1.6242821622314558, 1.2672019386069298, 2.059300786787615, 1.498326289394538, 1.473406793991368, 1.3299376726726602, 2.1431036723824586, 2.2010866641833693, 2.0623615495410506, 2.3213371576257193, 1.511554585229046, 5.044938737785648]

val-acc: [0.023645150155333104, 0.4016223679668623, 0.2694166379012772, 0.2483603727994477, 0.3665861235761132, 0.5440110459095616, 0.24956851915774939, 0.28512254055919917, 0.340697273041077, 0.2516396272005523, 0.3531239212978944, 0.37780462547462895, 0.8141180531584398, 0.5730065585088022, 0.27856403175699, 0.2871936486020021, 0.2537107352433552, 0.37418018639972384, 0.3374180186399724, 0.516396272005523, 0.4887814981014843, 0.3448394891266828, 0.35329651363479464, 0.5731791508457025, 0.309285467725233, 0.24180186399723852, 0.40352088367276495, 0.2652744218156714, 0.31308249913703834, 0.034173282706247844]

Main acc: 0.034173

Epoch 18/50

train-loss: 7.3917, train-acc: 0.0173 0.3324 0.0180

iamhankai commented 5 years ago

Your train loss is still quite large; I suspect the model has not converged at all. Try a smaller learning rate, and add BN (batch normalization) to VGG16.
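For concreteness, here is a minimal sketch of both suggestions, assuming a Keras 2.x setup like the one these logs appear to come from. The `vgg16_with_bn` helper is illustrative and not part of this repo; Keras ships no built-in VGG16-BN variant, so the sketch rebuilds the conv stack, copying the pretrained weights and inserting a BatchNorm before each ReLU. The learning-rate values are only examples.

```python
from keras.applications import VGG16
from keras.layers import Activation, BatchNormalization, Input
from keras.models import Model
from keras.optimizers import SGD


def vgg16_with_bn(input_shape=(224, 224, 3)):
    """Rebuild the VGG16 conv stack with BatchNorm after every conv layer."""
    base = VGG16(weights='imagenet', include_top=False, input_shape=input_shape)
    inputs = Input(shape=input_shape)
    x = inputs
    for layer in base.layers[1:]:               # skip the original InputLayer
        if layer.__class__.__name__ == 'Conv2D':
            cfg = layer.get_config()
            cfg['activation'] = 'linear'        # defer the ReLU until after BN
            conv = layer.__class__.from_config(cfg)
            x = conv(x)                         # build the new layer first...
            conv.set_weights(layer.get_weights())  # ...then copy pretrained weights
            x = BatchNormalization()(x)
            x = Activation('relu')(x)
        else:
            x = layer(x)                        # pooling layers are reused as-is
    return Model(inputs, x)


backbone = vgg16_with_bn()
# Attach the attribute/category heads on top of `backbone` as the training
# script does, then compile with a smaller learning rate, e.g. dropping an
# order of magnitude (1e-2 -> 1e-3, or 1e-3 -> 1e-4):
# model.compile(optimizer=SGD(lr=1e-4, momentum=0.9), loss=..., metrics=...)
```

If the train loss still hovers near its initial value after the change, the learning rate is likely still too high (or the labels are misaligned); halve it again before suspecting the architecture.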