Open leo-du opened 4 years ago
I have the same issue. I can't reproduce the synthetic experiments when I follow the steps and parameters mentioned in the paper. I don't know whether there are other hyperparameters to set.
Hi, and thanks for your interest! Sorry, I missed that issue. I haven't gotten around to cleaning up and providing the synthetic experiments yet. However, this is the code we used for applying Sinkhorn to the initial and refined assignment matrix:
import torch
import torch.nn.functional as F

def sinkhorn(x, x_mask=None, num_steps=0):
    # Mask invalid entries before the softmax so they receive zero probability.
    if x_mask is not None:
        x = x.masked_fill(~x_mask, float('-inf'))
    x = torch.softmax(x, dim=-1)
    if x_mask is not None:
        x = x.masked_fill(~x_mask, 0)
    for _ in range(num_steps):
        x = F.normalize(x, p=1, dim=-2)  # normalize columns
        x = F.normalize(x, p=1, dim=-1)  # normalize rows
        if x_mask is not None:
            x = x.masked_fill(~x_mask, 0)
    return x
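For readers without a PyTorch setup: the snippet above is just alternating L1-normalization of columns and rows. Here is a dependency-free sketch of that same iteration on a small dense matrix (the function name and test matrix are mine, not from the repository), showing that the result approaches a doubly stochastic matrix:

```python
def sinkhorn_plain(x, num_steps=100):
    # x: n x n list-of-lists of strictly positive scores.
    x = [row[:] for row in x]  # copy so the input is not mutated
    n = len(x)
    for _ in range(num_steps):
        # Normalize columns (dim=-2 in the torch version).
        col_sums = [sum(x[i][j] for i in range(n)) for j in range(n)]
        for i in range(n):
            for j in range(n):
                x[i][j] /= col_sums[j]
        # Normalize rows (dim=-1 in the torch version).
        for i in range(n):
            s = sum(x[i])
            x[i] = [v / s for v in x[i]]
    return x

scores = [[0.9, 0.1, 0.2],
          [0.3, 0.8, 0.1],
          [0.2, 0.3, 0.7]]
result = sinkhorn_plain(scores, num_steps=100)
row_sums = [sum(row) for row in result]
col_sums = [sum(result[i][j] for i in range(3)) for j in range(3)]
print(row_sums)  # each row sum is approximately 1.0
print(col_sums)  # each column sum is approximately 1.0
```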
I will let you know once synthetic experiments are included in the repository.
Hi, I have a follow-up question: I saw that your sinkhorn function has a num_steps parameter. Does that mean you run the Sinkhorn iteration for a fixed number of steps, or do you run it until convergence? Thanks!
We run Sinkhorn for a large, fixed number of iterations.
Can you give me an order of magnitude, if you still have it? The exact number would be even better. Thanks!
We ran Sinkhorn for both 100 and 1000 iterations, and both performed equally well.
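That the iteration count barely matters past a point is easy to check without torch: after each column-then-row pass the rows sum to 1 exactly, so convergence can be measured by how far the column sums are from 1. A small sketch (the helper name and test matrix are mine, for illustration only):

```python
def sinkhorn_deviation(x, num_steps):
    # Run num_steps rounds of column-then-row L1-normalization on a copy
    # of x, then return the worst-case deviation of a column sum from 1.
    x = [row[:] for row in x]
    n = len(x)
    for _ in range(num_steps):
        cs = [sum(x[i][j] for i in range(n)) for j in range(n)]
        for i in range(n):
            for j in range(n):
                x[i][j] /= cs[j]
        for i in range(n):
            s = sum(x[i])
            x[i] = [v / s for v in x[i]]
    cs = [sum(x[i][j] for i in range(n)) for j in range(n)]
    return max(abs(c - 1.0) for c in cs)

m = [[0.9, 0.1, 0.2],
     [0.3, 0.8, 0.1],
     [0.2, 0.3, 0.7]]
print(sinkhorn_deviation(m, 1))    # visible deviation after one step
print(sinkhorn_deviation(m, 100))  # essentially zero after 100 steps
```

For strictly positive matrices, Sinkhorn converges geometrically, which is why 100 and 1000 iterations end up indistinguishable in practice.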
Hi,
Thanks for the amazing paper and code! This is not really an issue, but I wonder if the authors could share instructions or code on how to reproduce the synthetic experiments of Section 4.1 (with Sinkhorn iterations). Any help or pointers are appreciated! Thanks!