@chufengt thanks for the response,
Yes, empirically the warm-up is really useful, and the total training cost should also include the warm-up period.
@chufengt can you share a code reference for this?
It's very easy to modify the current code: just train the plain BN-Inception (with all ALM modules removed) following the standard training configs.
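A minimal sketch of what that warm-up phase could look like in PyTorch (the backbone here is a torchvision stand-in since BN-Inception is not in torchvision, and all file and variable names are placeholders, not the repo's actual identifiers):

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Warm-up phase: train the plain backbone (no ALM modules) with a
# standard multi-label head, then reuse these weights when the full
# model (backbone + ALM) is trained.
num_attributes = 51  # e.g. RAP defines 51 attributes

backbone = models.resnet50()  # stand-in for BN-Inception
backbone.fc = nn.Linear(backbone.fc.in_features, num_attributes)

criterion = nn.BCEWithLogitsLoss()  # standard multi-label loss
optimizer = torch.optim.SGD(backbone.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=5e-4)

# ... run the usual training loop for the warm-up epochs ...

# Save the warmed-up backbone; the full ALM model can later load these
# weights with strict=False so the ALM modules stay randomly initialized.
torch.save(backbone.state_dict(), 'backbone_warmup.pth')
```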
@chufengt so the empirical warm-up process means training the plain backbone first, then training the full model with the ALM modules on top?
yes
@chufengt thanks for the response, I am trying to change the backbone and will get back if I run into any issues.
@chufengt I am trying to validate the metrics on a custom dataset, but I am getting weird mA values, e.g. mA comes out as exactly 0.5 for the age attributes.
Because these attributes (age18-60 and ageover60) are highly imbalanced. For example, suppose there are only a few samples with the attribute ageover60; the learned model then fails to recognize this attribute and predicts all samples as negative for it. In that case mA = (p_true/p_tol + n_true/n_tol) / 2 = (0.0 + 1.0) / 2 = 0.5, where p_tol/n_tol are the numbers of positive/negative ground-truth samples and p_true/n_true the numbers of correctly predicted ones.
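To make the arithmetic concrete, here is a small self-contained sketch (numpy; the function name is a placeholder, not the repo's actual metric code) that reproduces the 0.5 value for a degenerate predictor:

```python
import numpy as np

def mean_accuracy_per_attribute(y_true, y_pred):
    # Label-based mA for one attribute:
    # mA = (p_true/p_tol + n_true/n_tol) / 2
    pos = y_true == 1
    neg = y_true == 0
    p_acc = (y_pred[pos] == 1).mean()  # p_true / p_tol
    n_acc = (y_pred[neg] == 0).mean()  # n_true / n_tol
    return (p_acc + n_acc) / 2

# Highly imbalanced attribute: 2 positives out of 1000 samples.
y_true = np.zeros(1000, dtype=int)
y_true[:2] = 1

# A degenerate model that predicts everything negative...
all_negative = np.zeros(1000, dtype=int)
print(mean_accuracy_per_attribute(y_true, all_negative))  # 0.5

# ...or everything positive, gives the same mA.
all_positive = np.ones(1000, dtype=int)
print(mean_accuracy_per_attribute(y_true, all_positive))  # 0.5
```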
@chufengt so there are two ways the mA value can come out as 0.5: either p_true/p_tol = 0 and n_true/n_tol = 1 (everything predicted negative), or p_true/p_tol = 1 and n_true/n_tol = 0 (everything predicted positive). Is there any alternative to this understanding?
mA equal to 0.5 usually means the model failed to deal with this attribute; some attributes in RAP/PA100K/PETA also get mA around 0.5.
@chufengt in the code shared on GitHub:
1. the "is_best" variable is not used anywhere, and the "best_acc" variable is saved based on the decay_epoch values. What is the intent behind this?
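For reference, the usual idiom I would expect "is_best" to drive is the checkpointing pattern from the official PyTorch ImageNet example; a minimal runnable sketch is below (the dummy model and accuracy values are mine, not from this repo):

```python
import shutil
import torch
import torch.nn as nn

def save_checkpoint(state, is_best, filename='checkpoint.pth.tar'):
    # Always save the latest checkpoint, and additionally copy it
    # to model_best.pth.tar when it is the best seen so far.
    torch.save(state, filename)
    if is_best:
        shutil.copyfile(filename, 'model_best.pth.tar')

model = nn.Linear(4, 2)               # dummy stand-in for the real model
fake_accuracies = [70.1, 72.3, 71.8]  # stand-in for per-epoch validation mA

best_acc = 0.0
for epoch, acc in enumerate(fake_accuracies):
    is_best = acc > best_acc          # is_best drives the extra copy above
    best_acc = max(acc, best_acc)
    save_checkpoint({'epoch': epoch,
                     'state_dict': model.state_dict(),
                     'best_acc': best_acc}, is_best)
```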
Thanks for your support