truonghaophu opened this issue 6 years ago
@truonghaophu Did you figure out the answer to this? As I work through the same thing, I am wondering the same. Specifically, when running the forward pass I noticed that two identical inputs (a separate problem with my training data) were producing different embeddings. When such a pair is selected as part of a triplet where the anchor and positive are identical inputs, they have a non-zero distance because different embeddings were computed during the forward pass. It seems that for accurate training, identical inputs should have zero distance.
@davidsandberg Any thoughts/advice? Am I missing something?
Thanks.
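For reference, here is a minimal, self-contained TF 1.x sketch (illustrative names only, not facenet code) that reproduces the symptom I'm describing: with batch norm in training mode, the same input embedded in two different batches comes out with two different embeddings.

```python
# Minimal TF 1.x sketch (illustrative names, not facenet code) of the symptom:
# in training mode, batch norm normalizes with per-batch statistics, so the
# same input embedded in two different batches yields different embeddings,
# and an anchor/positive pair of identical images has non-zero distance.
import numpy as np
import tensorflow as tf

is_training = tf.placeholder(tf.bool, name='phase_train')
inputs = tf.placeholder(tf.float32, [None, 4])
net = tf.layers.dense(inputs, 8)
net = tf.layers.batch_normalization(net, training=is_training)  # batch stats when True
embeddings = tf.nn.l2_normalize(net, axis=1)

x = np.ones((1, 4), dtype=np.float32)  # the "identical" image
batch_a = np.concatenate([x, np.random.randn(3, 4).astype(np.float32)])
batch_b = np.concatenate([x, np.random.randn(3, 4).astype(np.float32)])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    emb_a = sess.run(embeddings, {inputs: batch_a, is_training: True})
    emb_b = sess.run(embeddings, {inputs: batch_b, is_training: True})
    # Different batch statistics -> different embeddings for the same input.
    print(np.linalg.norm(emb_a[0] - emb_b[0]))  # > 0
```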
@bzier I think it's because when you set `phase_train_placeholder: True`, batch norm and dropout are enabled, as you can see here. And that messes everything up at inference time.

`is_training=True` makes batch norm normalize using the mean and variance of the current batch. That means two identical images in two different batches will have different embeddings.

`is_training=True` also activates dropout if you set `keep_prob < 1.0`. That means some connections in your network are randomly dropped, so your results will not be consistent (see the sketch below).
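To make the dropout effect concrete, here is a minimal TF 1.x sketch (illustrative names, not facenet code): in training mode, two runs over the very same batch disagree because the random dropout masks differ, while in inference mode dropout is the identity.

```python
# Illustrative TF 1.x sketch (assumed names, not facenet code): dropout is
# stochastic in training mode and disabled in inference mode.
import numpy as np
import tensorflow as tf

is_training = tf.placeholder(tf.bool, name='phase_train')
x = tf.placeholder(tf.float32, [None, 4])
# rate=0.4 corresponds to keep_prob=0.6 in the older tf.nn.dropout convention.
net = tf.layers.dropout(x, rate=0.4, training=is_training)

batch = np.random.randn(8, 4).astype(np.float32)
with tf.Session() as sess:
    a = sess.run(net, {x: batch, is_training: True})
    b = sess.run(net, {x: batch, is_training: True})
    print(np.allclose(a, b))      # False: different random dropout masks
    c = sess.run(net, {x: batch, is_training: False})
    print(np.allclose(c, batch))  # True: dropout disabled at inference
```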
In my case, I think we should set `phase_train_placeholder: False` in this section.
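A minimal sketch of that fix, assuming session/tensor names like `sess`, `embeddings`, and `train_op` from the training script (only `phase_train_placeholder` appears in this thread; the rest are assumptions):

```python
# Forward pass used only to pick triplets: run in inference mode so batch
# norm uses its moving averages and dropout is disabled.
emb_array = sess.run(embeddings,
                     feed_dict={phase_train_placeholder: False})
# ...select triplets from emb_array...

# Actual gradient step on the selected triplets: run in training mode.
sess.run(train_op, feed_dict={phase_train_placeholder: True})
```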
Hi everyone, when training using triplet loss I found this very confusing:
I think `phase_train_placeholder` should be False here, because when it is set to True the network activates dropout (in some configurations). In this phase we want to get embeddings to select triplets, and I don't think using dropout for that is a good idea.