Thank you very much for the suggestion! The default setting is a batch size of 256 on 4 GPUs, so each GPU gets a mini-batch of 64 images, consisting of 16 identities with 4 instances each (--num-instances 4). It is similar to the paper's setting.
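For reference, the P×K batch construction described above (P identities, K instances per identity) can be sketched roughly like this. This is a simplified, self-contained illustration of the idea behind open-reid's identity sampler; the function name `pk_batches` and its defaults are mine, not the repo's actual API:

```python
import random
from collections import defaultdict

def pk_batches(labels, num_identities=16, num_instances=4, seed=0):
    """Sketch of P x K sampling: group sample indices by identity,
    then build batches of num_identities * num_instances indices.
    (Illustrative only; open-reid's RandomIdentitySampler differs in detail.)"""
    rng = random.Random(seed)
    index_by_id = defaultdict(list)
    for idx, pid in enumerate(labels):
        index_by_id[pid].append(idx)
    pids = list(index_by_id)
    rng.shuffle(pids)
    batches = []
    for start in range(0, len(pids) - num_identities + 1, num_identities):
        batch = []
        for pid in pids[start:start + num_identities]:
            pool = index_by_id[pid]
            # sample K instances per identity (with replacement if the
            # identity has fewer than K images)
            if len(pool) < num_instances:
                picks = rng.choices(pool, k=num_instances)
            else:
                picks = rng.sample(pool, num_instances)
            batch.extend(picks)
        batches.append(batch)
    return batches

# Toy example: 32 identities, 8 images each -> batches of 16 * 4 = 64 indices
labels = [pid for pid in range(32) for _ in range(8)]
batches = pk_batches(labels)
print(len(batches), len(batches[0]))  # -> 2 64
```

With 4 GPUs and a global batch of 256, each GPU would see one such 64-image mini-batch, so the triplet loss always has 4 positives available per identity.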
I have actually tried using exactly the same setting as the paper (one GPU, 18 identities x 4 instances/id). The result is similar to the above default setting. Maybe I've missed something critical and need to wait for the authors' code.
Thanks! I will do some experiments too.
@Cysu I modified the Market-1501 split to follow the standard protocol and trained the model using exactly the same setting as the paper (one GPU, 18 identities x 4 instances/id), but I only get rank-1 78.5%. Could you give me some suggestions on reproducing the results of TriNet?
Looking at the training script for the triplet loss, it doesn't set the batch size explicitly, so the default of 256 is used (I'm not quite sure)? But in the paper the number of identities per batch is 18. Maybe that's the reason?