Open nashory opened 4 years ago
Hey. Could I know your system specs? (PyTorch version, etc.)
My reproduced results are nearly the same as yours, but the style recall result is weird.
@ousinkou I ended up not trusting the "style" score; please refer to my paper accepted to ECCV 2020: https://arxiv.org/abs/2007.06769
@nashory Well, thanks for your reply. It's a pity that the style scores were not reproducible with the publicly released code.
@nashory Hi, I have a question for you. Before training the landmark branch and the category/attribute prediction network jointly, are we supposed to train the landmark branch on its own and then use its weights in the category/attribute network? Any reply would be highly appreciated.
@ousinkou Hi, I have the same question for you. Before training the landmark branch and the category/attribute prediction network jointly, are we supposed to train the landmark branch on its own and then use its weights in the category/attribute network? Any reply would be highly appreciated.
Hi, I have a question. How do you get ALL@top3 = 58.02 and ALL@top5 = 64.35? I got similar results to yours on the five attribute groups, but my ALL@top3 and ALL@top5 are very low.
Any prediction code would be greatly appreciated.
@nashory Could you please share the pretrained weight file and the inference code? It would be very helpful.
@pbamotra, @xuanle22, @ZhuXiang-tuxing Hi, I ran the code to confirm whether the performance reproduces properly. Although I customized the metric functions to also output the F1 scores, the result should be reproducible as is. (I failed to reproduce the "style" result. I can't understand how the authors achieved 68.82 recall@3 in their paper. Are there any tricks not mentioned in the paper?) I trained the model with the default parameters the authors provided. The recall@3/5 is measured at the point where the model shows its best F1@1 score during training. Below is the result table.
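For anyone unsure how recall@k is computed for multi-label attribute prediction, here is a minimal sketch (not the repo's exact metric code; the function name and micro-averaging choice are my own assumptions):

```python
import numpy as np

def recall_at_k(scores, labels, k):
    """Micro-averaged recall@k for multi-label prediction.

    scores: (N, C) float array of per-attribute prediction scores.
    labels: (N, C) binary array of ground-truth attributes.
    Counts how many ground-truth attributes land in each sample's
    top-k predictions, divided by the total number of positives.
    """
    topk = np.argsort(-scores, axis=1)[:, :k]  # indices of the k highest scores per sample
    hits, total = 0, 0
    for i in range(len(scores)):
        positives = np.flatnonzero(labels[i])
        hits += len(set(topk[i]) & set(positives))
        total += len(positives)
    return hits / total if total else 0.0
```

Note that averaging recall@k within each attribute group and averaging over ALL attributes at once (as in ALL@top3/ALL@top5) generally give different numbers, which may explain the discrepancy mentioned above.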
I found that when I trained the model on multiple GPUs (8) with 8x the learning rate, the performance degraded severely. Please make sure you find the correct learning rate and batch size when training your model.
I attached the evaluation curves for 1 GPU (lr=0.0001, batch=32) vs. 8 GPUs (lr=0.0001 x 8, batch=32 x 8). The blue curve is the 1-GPU run.
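The 8x learning rate above follows the common "linear scaling rule" (scale lr proportionally with the global batch size), which often needs a warmup phase to avoid the early-training divergence seen here. A minimal sketch of both ideas (the function names are my own, not from this repo):

```python
def scaled_lr(base_lr, base_batch, global_batch):
    """Linear scaling rule: lr grows proportionally with global batch size."""
    return base_lr * global_batch / base_batch

def warmup_lr(step, warmup_steps, target_lr):
    """Linearly ramp the lr from 0 to target_lr over the first warmup_steps."""
    return target_lr * min(1.0, step / warmup_steps)
```

Even with warmup, the scaled lr may simply be too large for this model; sweeping a few values between the base lr and the linearly scaled one is a safer way to pick the multi-GPU setting.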