Closed: prabhant closed this issue 2 years ago.
Hi @prabhant. Can you run it again with the arg `inner_ot_debiased=True`? It should be 0 now. The explanation is that the label-to-label distances (the 'inner problem') also rely on entropic regularization, which introduces a bias term and can therefore lead to d(a, a) > 0. That flag controls whether the debiased version of OT is used for this inner problem too. I should probably set the default to True.
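For reference (this construction is not spelled out in the thread, so the notation here is illustrative): the entropic OT cost $\mathrm{OT}_\varepsilon(\alpha, \beta)$ is generally nonzero even for $\beta = \alpha$, and the standard debiasing is the Sinkhorn divergence, which subtracts the self-transport terms:

$$S_\varepsilon(\alpha, \beta) = \mathrm{OT}_\varepsilon(\alpha, \beta) - \tfrac{1}{2}\,\mathrm{OT}_\varepsilon(\alpha, \alpha) - \tfrac{1}{2}\,\mathrm{OT}_\varepsilon(\beta, \beta),$$

so that $S_\varepsilon(\alpha, \alpha) = 0$ by construction. With `inner_ot_debiased=True`, the same kind of correction is applied to the inner, label-to-label OT problems.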
Yes, I get a zero now, thanks for the explanation.
Hi,
I don't understand why the distance between a distribution and itself is greater than 0. MWE:
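(The original MWE is not included here. Below is a minimal sketch of the comparison under discussion, based on the usage example in the otdd README; `load_torchvision_data`, `DatasetDistance`, and the `inner_ot_method`/`debiased_loss`/`p`/`entreg` arguments come from that README, `inner_ot_debiased` is the flag named above, and exact defaults/signatures may differ across versions.)

```python
# Hedged sketch of an MWE: compare a dataset with itself using otdd.
# API names follow the microsoft/otdd README; treat specifics as assumptions.
from otdd.pytorch.datasets import load_torchvision_data
from otdd.pytorch.distance import DatasetDistance

# Load the same dataset twice, so the two "distributions" are identical.
loaders_a = load_torchvision_data('MNIST', valid_size=0, maxsize=2000)[0]
loaders_b = load_torchvision_data('MNIST', valid_size=0, maxsize=2000)[0]

dist = DatasetDistance(
    loaders_a['train'], loaders_b['train'],
    inner_ot_method='exact',   # label-to-label distances via (entropic) OT
    debiased_loss=True,        # debias the outer OT problem
    inner_ot_debiased=True,    # the flag from this thread: debias inner OT too
    p=2, entreg=1e-1,
    device='cpu',
)

d = dist.distance(maxsamples=1000)
print(d)  # without inner_ot_debiased=True this can come out > 0; with it, ~0
```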