chufengt / ALM-pedestrian-attribute

Code for the paper "Improving Pedestrian Attribute Recognition With Weakly-Supervised Multi-Scale Attribute-Specific Localization", ICCV 2019, http://arxiv.org/abs/1910.04562.
Apache License 2.0

Code understanding !!!! #56

Closed abhigoku10 closed 3 years ago

abhigoku10 commented 3 years ago

@chufengt In the code shared on GitHub:

  1. The `is_best` variable is not used anywhere, and the `best_acc` variable is saved based on the `decay_epoch` values. What is the intent behind this?

  2. For the models trained on RAP/PETA/PA100K, the saved epoch is 9/31/8. Did you get the best accuracy with such short training? If so, since the decay was set to [20, 40] by default, could you elaborate on this design choice?

Thanks for your support

chufengt commented 3 years ago
  1. Actually, I started this code from the official PyTorch ImageNet example, so these variables were kept; sorry for the confusion.
  2. I trained the model for 30/60 epochs, but the final epoch may not be the best. Due to the warm-up step mentioned in #55, we can achieve good results at these 'short' epochs.
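
For context, the checkpointing pattern in the official PyTorch ImageNet example (where `is_best`/`best_acc` come from) looks roughly like the sketch below; the `validate` call and the exact checkpoint fields are illustrative, not necessarily what this repo does:

```python
import shutil
import torch

def save_checkpoint(state, is_best, filename='checkpoint.pth.tar'):
    # Always save the latest checkpoint; keep an extra copy when it is the best so far.
    torch.save(state, filename)
    if is_best:
        shutil.copyfile(filename, 'model_best.pth.tar')

# Per-epoch bookkeeping in the training loop (names are illustrative):
# acc1 = validate(val_loader, model, criterion)
# is_best = acc1 > best_acc
# best_acc = max(acc1, best_acc)
# save_checkpoint({'epoch': epoch + 1,
#                  'state_dict': model.state_dict(),
#                  'best_acc': best_acc,
#                  'optimizer': optimizer.state_dict()}, is_best)
```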
abhigoku10 commented 3 years ago

@chufengt Thanks for the response.

  1. No issues, just wanted to notify you of this.
  2. So the warm-up strategy helps us get good results at shorter epochs without overfitting?
chufengt commented 3 years ago

Yes, empirically warm-up is really useful, and the total training cost should also include the warm-up period.

abhigoku10 commented 3 years ago

@chufengt Can you share the code reference for this?

chufengt commented 3 years ago

It's very easy to modify the current code: just train BN-Inception (remove all ALM modules) following the standard training configs.

abhigoku10 commented 3 years ago

@chufengt So empirically the warm-up process means:

  1. Train on the RAP/PA100K/PETA dataset using BN-Inception with the ALM modules disabled, using the standard config.
  2. Then include the ALM modules and train for further epochs, roughly like the sketch below?
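
A rough two-stage sketch in plain PyTorch of what I understand (everything here is an assumption, not code from this repo: `make_backbone_model` / `make_alm_model` are hypothetical factories, the model is assumed to return a single logits tensor, backbone parameter names are assumed to match between the two models, and the optimizer settings are placeholders):

```python
import torch
import torch.nn as nn

def warmup_then_train_alm(make_backbone_model, make_alm_model, train_loader,
                          num_attributes=51, warmup_epochs=30, alm_epochs=30,
                          device='cuda'):
    """Stage 1: train the plain BN-Inception attribute classifier (no ALM).
    Stage 2: copy its weights into the full ALM model and keep training."""
    criterion = nn.BCEWithLogitsLoss()

    # Stage 1: warm-up without ALM modules.
    baseline = make_backbone_model(num_attributes).to(device)
    opt = torch.optim.SGD(baseline.parameters(), lr=0.01, momentum=0.9)
    for _ in range(warmup_epochs):
        for images, targets in train_loader:
            images, targets = images.to(device), targets.float().to(device)
            loss = criterion(baseline(images), targets)
            opt.zero_grad()
            loss.backward()
            opt.step()

    # Stage 2: full model with ALM modules; backbone weights come from stage 1.
    model = make_alm_model(num_attributes).to(device)
    # strict=False: ALM-specific parameters keep their initialization.
    model.load_state_dict(baseline.state_dict(), strict=False)
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    for _ in range(alm_epochs):
        for images, targets in train_loader:
            images, targets = images.to(device), targets.float().to(device)
            loss = criterion(model(images), targets)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```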
chufengt commented 3 years ago

yes

abhigoku10 commented 3 years ago

@chufengt Thanks for the response. I am trying to change the backbone; I will get back to you if there are any issues.

abhigoku10 commented 3 years ago

@chufengt I am trying to validate the metrics on a custom dataset, but I am getting weird mA values. For example:

| Attribute | AP | p_true | n_true | p_tol | n_tol | p_pred | n_pred | cur_mA |
| -- | -- | -- | -- | -- | -- | -- | -- | -- |
| Male | 0.8909090909 | 196 | 175 | 297 | 199 | 220 | 276 | 0.76966 |
| Female | 0.6245614035 | 178 | 192 | 197 | 299 | 285 | 211 | 0.77285 |
| AgeLessThan18 | 0.4642857143 | 13 | 451 | 30 | 466 | 28 | 468 | 0.70057 |
| Age 18-60 | 0.9227642276 | 454 | 1 | 457 | 39 | 492 | 4 | 0.50954 |
| AgeOver60 | 1 | 0 | 488 | 8 | 488 | 0 | 496 | 0.5 |

For AgeOver60, how come my current mA is 0.5?
chufengt commented 3 years ago

Because these attributes (Age 18-60 and AgeOver60) are highly imbalanced. For example, suppose only a few samples have the attribute AgeOver60; the learned model then fails to recognize this attribute and predicts all samples as negative for it, giving mA = (p_true/p_tol + n_true/n_tol) / 2 = (0.0 + 1.0) / 2 = 0.5.
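
A minimal sketch of that per-attribute mA computation, plugged with the counts from your table:

```python
def mean_accuracy(p_true, p_tol, n_true, n_tol):
    # Label-based mA for one attribute: average of positive and negative recall.
    return (p_true / p_tol + n_true / n_tol) / 2

# AgeOver60 row: 0 of 8 positives recognized, 488 of 488 negatives -> 0.5
print(mean_accuracy(p_true=0, p_tol=8, n_true=488, n_tol=488))
# Age 18-60 row: 454 of 457 positives, but only 1 of 39 negatives -> ~0.50954
print(mean_accuracy(p_true=454, p_tol=457, n_true=1, n_tol=39))
```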

abhigoku10 commented 3 years ago

@chufengt So there are two ways the mA value can come out as 0.5: when p_true/n_true is 0, or when p_true = p_tol and n_true = n_tol. Which of these does the 0.5 signify? Is there any alternative to this understanding?

chufengt commented 3 years ago

An mA of 0.5 usually means the model failed to deal with that attribute; some attributes in RAP/PA100K/PETA also get mA around 0.5.