Closed: sujit-dn closed this issue 1 month ago due to inactivity.
Hey @sujit-dn, and thank you for your interest in OML!
I would say bs=4 (n=2, l=2) is kind of small. Do you have more memory on your GPU?
I usually set the batch size like this:
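(The original snippet was not captured in this thread. As a stand-in, here is a minimal, dependency-free sketch of the labels-times-instances batching idea being discussed: each batch contains `n_labels` distinct labels with `n_instances` samples each, so the batch size is `n_labels * n_instances`. The function name is hypothetical; OML ships its own sampler for this purpose.)

```python
import random
from collections import defaultdict

def make_balanced_batches(labels, n_labels, n_instances, seed=0):
    """Group sample indices into batches containing n_labels distinct
    labels, each contributing n_instances samples (the labels-x-instances
    scheme used for in-batch triplet mining). Hypothetical helper, not
    OML's actual sampler."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_label[lab].append(idx)

    # Only labels with enough samples can fill their slot in a batch.
    label_pool = [l for l, idxs in by_label.items() if len(idxs) >= n_instances]
    rng.shuffle(label_pool)

    batches = []
    # Greedily take n_labels labels per batch, n_instances samples each.
    for i in range(0, len(label_pool) - n_labels + 1, n_labels):
        batch = []
        for lab in label_pool[i:i + n_labels]:
            batch.extend(rng.sample(by_label[lab], n_instances))
        batches.append(batch)
    return batches

# e.g. n_labels=8, n_instances=4 gives batches of size 32
```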
Hey @AlekseySh, we trained the model with semi-hard triplet loss (num labels = 16, num instances per label = 2) and found it did indeed improve overall performance (in terms of mAP and CMC score) as well as accuracy.
@sujit-dn I'm glad to hear it. Have a nice day!
I am currently training with triplet loss (n=2, l=2, margin 0.5) on a custom dataset, using ViT-S/16 with a linear classification layer as the projector. The model converges, as can be seen from the classification loss. However, at test-time inference the number of matches produced is not very high: the model still gets confused and produces correct pair matches with low confidence. Would increasing the batch size by training with l=16/32, n=4/8 help here?
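For intuition on why larger (n, l) can help here: in-batch triplet mining can only choose from the triplets present in the batch, and that count grows very quickly with batch size. A quick back-of-the-envelope count (illustrative, not from the thread):

```python
def candidate_triplets(n_labels, n_instances):
    """Number of (anchor, positive, negative) candidates available to an
    in-batch miner when the batch has n_labels labels with n_instances
    samples each (batch size = n_labels * n_instances)."""
    anchors = n_labels * n_instances
    positives = n_instances - 1              # same label, different sample
    negatives = (n_labels - 1) * n_instances  # any sample of another label
    return anchors * positives * negatives

print(candidate_triplets(2, 2))   # bs=4  -> 8
print(candidate_triplets(8, 16))  # bs=128 -> 215040
```

With only 8 candidate triplets per batch at bs=4, a semi-hard miner has almost nothing to choose from, which is one reason small (n, l) settings tend to underperform.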