Dear author,

using your SoftDTW implementation with normalization mode (i.e., the SoftDTW divergence) throws an exception due to an incorrect batch shape:
File ".../softDTWLoss.py", line 109, in jacobean_product_squared_euclidean
return 2 * (ones.matmul(Bt) * X - Y.matmul(Bt))
RuntimeError: The size of tensor a (128) must match the size of tensor b (384) at non-singleton dimension 0
I suspect there is a small mistake in the implementation: x and y are stacked to batch size 3·B (here 3 · 128 = 384), so D also has batch size 384, but the unstacked X and Y (batch size 128) are still passed to func_dtw, which later mixes the two shapes in the Jacobean product:
if self.normalize:
    # Stack everything up and run
    x = torch.cat([X, X, Y])
    y = torch.cat([Y, X, Y])
    D = self.dist_func(x, y)
    out = func_dtw(X, Y, D, self.gamma, self.bandwidth)
    out_xy, out_xx, out_yy = torch.split(out, X.shape[0])
    return out_xy - 1 / 2 * (out_xx + out_yy)
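For context, the stacking is there so that a single batched call can evaluate all three terms of the SoftDTW divergence,

    dtw_div(X, Y) = dtw_gamma(X, Y) - 1/2 * (dtw_gamma(X, X) + dtw_gamma(Y, Y)),

which is exactly what the final return line reassembles from the three splits.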
I think line 275 needs to be changed to pass the stacked tensors instead:

out = func_dtw(x, y, D, self.gamma, self.bandwidth)
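In case it helps, here is a minimal sketch of the call that triggers the error for me. Note that the import path, class name, and constructor signature below are assumptions based on softDTWLoss.py and the attributes used in the snippet above, so they may not match your actual API exactly:

import torch
from softDTWLoss import SoftDTW  # assumed import, matching the file name in the traceback

B, T, d = 128, 32, 2  # batch size 128, as in the error message above
X = torch.rand(B, T, d, requires_grad=True)
Y = torch.rand(B, T, d)

# normalize=True selects the divergence branch quoted above (assumed keyword).
sdtw = SoftDTW(use_cuda=False, gamma=1.0, normalize=True)

loss = sdtw(X, Y).mean()
loss.backward()  # the 128-vs-384 mismatch surfaces once jacobean_product_squared_euclidean runs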
Can you check if this is correct?