Hi there!
Thanks for your great work; it helps a lot, and the application of Siamese networks to one-shot learning also deserves further research.
However, I see that you implement the loss function as `binary_crossentropy`, in the following way: https://github.com/tensorfreitas/Siamese-Networks-for-One-Shot-Learning/blob/8aae45687b3f0d1499bf8bf804d7a741b7828eb2/siamese_network.py#L144
I suppose this is the expected behavior, but the paper (https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf, oneshot1.pdf) says the first part of the proposed loss function is:

    y(x1, x2) log p(x1, x2) + (1 - y(x1, x2)) log(1 - p(x1, x2))

which seems to be the negation of `binary_crossentropy`, and that confuses me a lot. Does the author mean:

    -[y(x1, x2) log p(x1, x2) + (1 - y(x1, x2)) log(1 - p(x1, x2))]

Or could you please provide some more explanation?
Thanks & best regards