-
Epoch: [12][ 380/1161] Time 0.496 (0.498) Acc@1 54.69% (60.06%) cross_entropy 3.617 (4.314) softmax_triplet 2.303 (3.466)
Epoch: [12][ 390/1161] Time 0.493 (0.498) Acc@1 57.03% (60.…
-
I ran your code with a new dataset (MSMT17), but it produced the following error:
[INFO] Making model...
[INFO] Making loss...
1.000 * CrossEntropy
1.000 * Triplet
[INFO] Epoch: 1 Learning rate: 2.00e-04
[INFO] [1/160]…
-
Hi @KaiyangZhou ,
I have custom datasets with 40 to 60 unique individuals. I am applying 2-step transfer learning and fine-tuning the pre-trained OSNet AIN model trained on MSMT.
I have the following q…
-
### 🐛 Describe the bug
For some reasons, I need to discard part of the data in the `collate_fn` of the dataloader, which makes my batch size change. My program gets stuck in the loss function when the …
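A minimal sketch of the situation (the `valid` flag and the filtering rule are hypothetical stand-ins for whatever criterion discards samples): filtering inside `collate_fn` makes the effective batch size vary per iteration, and a batch-hard triplet loss that assumes a fixed P×K identity-balanced batch can then hit anchors with no valid positive or negative.

```python
import torch

def filtering_collate_fn(batch):
    # Hypothetical filter: drop samples flagged as invalid, so the
    # effective batch size can shrink from one iteration to the next.
    kept = [(x, y) for x, y, valid in batch if valid]
    if not kept:
        # An empty batch must be handled explicitly downstream,
        # otherwise the loss has nothing to mine triplets from.
        return torch.empty(0, 3), torch.empty(0, dtype=torch.long)
    xs, ys = zip(*kept)
    return torch.stack(xs), torch.tensor(ys)

batch = [(torch.randn(3), 0, True),
         (torch.randn(3), 1, False),
         (torch.randn(3), 2, True)]
xs, ys = filtering_collate_fn(batch)
print(xs.shape[0])  # 2 samples survive the filter
```

Note that after filtering, an identity may be left with a single sample, so triplet mining should explicitly skip anchors without positives rather than assume the sampler's guarantees still hold.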
-
Many thanks to all the contributors for this project.
In your paper https://arxiv.org/pdf/1905.00953.pdf you use the cross-entropy loss as the main objective and the triplet loss as an auxiliary lo…
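The weighting shown in the training logs above (`1.000 * CrossEntropy`, `1.000 * Triplet`) corresponds to a simple weighted sum of the two objectives. A sketch of such a combined loss, using batch-hard triplet mining on Euclidean distances (the weights and margin here are illustrative, not the paper's values):

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, embeddings, targets, margin=0.3, w_ce=1.0, w_tri=1.0):
    """Weighted sum of cross-entropy (main) and triplet (auxiliary) losses."""
    ce = F.cross_entropy(logits, targets)

    # Batch-hard triplet loss: for each anchor, take its farthest
    # positive and nearest negative in the batch.
    dist = torch.cdist(embeddings, embeddings)
    same = targets.unsqueeze(0) == targets.unsqueeze(1)
    hardest_pos = (dist * same.float()).max(dim=1).values
    inf = torch.full_like(dist, float("inf"))
    hardest_neg = torch.where(same, inf, dist).min(dim=1).values
    tri = F.relu(hardest_pos - hardest_neg + margin).mean()

    return w_ce * ce + w_tri * tri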
-
Hello!
I was able to run the face recognition part successfully.
However, running your code for the voice recognition part raises problems. Could you please share your dataset and triplet_loss_trai…
-
@Klitter Hi, I am trying to implement the dynamic training, but I have run into some problems. As the paper says, each iteration chooses one sampling method between ID-balanced hard triplet and randomized sampling. So I want …
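One way to read "each iteration chooses one sampling method" is a per-iteration random switch between batch-hard mining and uniformly random triplet sampling. A sketch under that assumption (the switch probability `p_hard` and both samplers are illustrative, not the paper's exact scheme):

```python
import random
import torch
import torch.nn.functional as F

def random_triplet_loss(emb, targets, margin=0.3):
    # Randomized sampling: one random positive and negative per anchor.
    losses = []
    n = emb.size(0)
    for i in range(n):
        pos = [j for j in range(n) if j != i and targets[j] == targets[i]]
        neg = [j for j in range(n) if targets[j] != targets[i]]
        if not pos or not neg:
            continue  # anchor has no valid triplet in this batch
        p, q = random.choice(pos), random.choice(neg)
        d_ap = (emb[i] - emb[p]).norm()
        d_an = (emb[i] - emb[q]).norm()
        losses.append(F.relu(d_ap - d_an + margin))
    # Zero (but differentiable) fallback if no triplet could be formed.
    return torch.stack(losses).mean() if losses else emb.sum() * 0.0

def dynamic_triplet_loss(emb, targets, margin=0.3, p_hard=0.5):
    # Per-iteration switch between hard mining and random sampling.
    if random.random() < p_hard:
        dist = torch.cdist(emb, emb)
        same = targets.unsqueeze(0) == targets.unsqueeze(1)
        d_ap = (dist * same.float()).max(1).values
        inf = torch.full_like(dist, float("inf"))
        d_an = torch.where(same, inf, dist).min(1).values
        return F.relu(d_ap - d_an + margin).mean()
    return random_triplet_loss(emb, targets, margin)
```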
-
Hi @wanji ,
could you please clarify the loss that is minimized in the `BatchTripletLossLayer` layer and point to a paper that explains it?
In particular, what is the `mu` parameter in
> layer {
> …
-
Code up and run a triplet loss experiment for STS.
We can focus on Euclidean distance for now, so we can test as many functions as we can. Later we'll run experiments with cosine distance too.
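A minimal sketch of the Euclidean variant (the embeddings here are placeholders; a real STS experiment would feed anchor/positive/negative sentence embeddings from an encoder). Swapping in cosine distance later only changes the distance function, e.g. `1 - F.cosine_similarity(a, b)`.

```python
import torch
import torch.nn.functional as F

def sts_triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss over sentence embeddings with Euclidean (L2) distance."""
    d_ap = F.pairwise_distance(anchor, positive, p=2)
    d_an = F.pairwise_distance(anchor, negative, p=2)
    # Pull the paraphrase closer than the unrelated sentence by `margin`.
    return F.relu(d_ap - d_an + margin).mean()

# Placeholder embeddings standing in for encoder outputs.
anchor = torch.randn(8, 32)
positive = anchor + 0.1 * torch.randn(8, 32)   # near-duplicate sentences
negative = torch.randn(8, 32)                  # unrelated sentences
loss = sts_triplet_loss(anchor, positive, negative)
```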
-
I am trying to reproduce the results on Market-1501, but the triplet loss is very small, nearly zero. So after iter=35000+, the network stops updating (loss=0). I can't reproduce the results (R@1:81%~,mA…
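A hinge-style triplet loss goes exactly to zero once every sampled triplet satisfies the margin, at which point it contributes no gradient; with easy (e.g. random) sampling this can happen long before the embedding is good. One possible diagnostic is to track the fraction of triplets still violating the margin (the helper below is illustrative, not part of the repo):

```python
import torch

def active_triplet_fraction(d_ap, d_an, margin=0.3):
    """Fraction of triplets still inside the margin.

    d_ap/d_an: anchor-positive and anchor-negative distances per triplet.
    When this fraction reaches 0, the triplet loss is exactly 0 and
    provides no gradient, matching the 'loss=0, no updates' symptom.
    Harder mining (e.g. batch-hard) keeps it non-zero for longer.
    """
    active = (d_ap - d_an + margin) > 0
    return active.float().mean().item()
```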