Maghoumi / pytorch-softdtw-cuda

Fast CUDA implementation of (differentiable) soft dynamic time warping for PyTorch
MIT License

Incorrect batch size for Euclidean distance when using normalization #30

Closed denisbeslic closed 1 year ago

denisbeslic commented 1 year ago

Dear author,

Using your SoftDTW implementation with normalization enabled (i.e. the soft-DTW divergence) raises an exception due to an incorrect batch shape:

 File ".../softDTWLoss.py", line 109, in jacobean_product_squared_euclidean
    return 2 * (ones.matmul(Bt) * X - Y.matmul(Bt))
RuntimeError: The size of tensor a (128) must match the size of tensor b (384) at non-singleton dimension 0

I suspect there is a small mistake in the implementation:

if self.normalize:
    # Stack everything up and run
    x = torch.cat([X, X, Y])
    y = torch.cat([Y, X, Y])
    D = self.dist_func(x, y)
    out = func_dtw(X, Y, D, self.gamma, self.bandwidth)
    out_xy, out_xx, out_yy = torch.split(out, X.shape[0])
    return out_xy - 1 / 2 * (out_xx + out_yy)

I think line 275 needs to be changed to out = func_dtw(x, y, D, self.gamma, self.bandwidth), so that the forward pass runs on the stacked tensors x and y (batch size 3B), which is what D was computed from. As written, it passes the original X and Y (batch size B), so the batch dimensions of the inputs (128) and of D (384) no longer match.

Can you check if this is correct?
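To illustrate the shape logic behind the proposed fix, here is a minimal NumPy sketch of the divergence computation. This is not the repo's CUDA path: soft_dtw below is a naive dynamic-programming stand-in for func_dtw, and the function and variable names are my own. The point is only that the forward pass must consume the stacked tensors (batch 3B) that D was built from, and that the three results are then split back into batches of size B.

```python
import numpy as np

def softmin(a, b, c, gamma):
    # Smoothed minimum: -gamma * logsumexp(-values / gamma)
    vals = np.array([a, b, c]) / -gamma
    m = vals.max()
    return -gamma * (m + np.log(np.exp(vals - m).sum()))

def soft_dtw(D, gamma=1.0):
    # Naive batched soft-DTW forward pass.
    # D: (B, N, M) pairwise squared distances per batch element.
    B, N, M = D.shape
    R = np.full((B, N + 1, M + 1), np.inf)
    R[:, 0, 0] = 0.0
    for b in range(B):
        for i in range(1, N + 1):
            for j in range(1, M + 1):
                R[b, i, j] = D[b, i - 1, j - 1] + softmin(
                    R[b, i - 1, j], R[b, i - 1, j - 1], R[b, i, j - 1], gamma)
    return R[:, N, M]

def sdtw_divergence(X, Y, gamma=1.0):
    # Stack as in the normalize branch: (X,Y), (X,X), (Y,Y) pairs.
    x = np.concatenate([X, X, Y])  # batch size 3B
    y = np.concatenate([Y, X, Y])  # batch size 3B
    D = ((x[:, :, None, :] - y[:, None, :, :]) ** 2).sum(-1)  # (3B, N, M)
    # Crucial step: run the forward pass on the stacked x/y,
    # not on the original X/Y, so batch sizes agree with D.
    out = soft_dtw(D, gamma)
    B = X.shape[0]
    out_xy, out_xx, out_yy = out[:B], out[B:2 * B], out[2 * B:]
    return out_xy - 0.5 * (out_xx + out_yy)
```

With this stacking, the divergence of a batch with itself comes out as zero, since out_xy, out_xx and out_yy all coincide.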