mblondel / soft-dtw

Python implementation of soft-DTW.
BSD 2-Clause "Simplified" License

Loss with batching in DNNs? #17

Closed · ashutoshsaboo closed 5 years ago

ashutoshsaboo commented 5 years ago

Hi, I've been looking at this for quite some time now: is there a way to use this loss function with mini-batches in neural networks? I came across the PyTorch project as well (here), but I can't work out how to use it there either. Here's the specific issue I ran into, and I'd really appreciate any input: for usage in PyTorch, I found the methods dtw_value and dtw_grad (link), but they seem to require a theta array. Is that supposed to be the pairwise distance matrix between the time series in the minibatch, or what is it exactly?
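My best guess, from the docstring quoted further down, is that theta is the (m, n) matrix of costs between every timestep of one series and every timestep of the other, for a single pair of series, rather than anything computed over the minibatch. Here's a minimal sketch of that reading (the minibatch_loss helper is purely hypothetical, and I'm assuming dtw_value(theta) returns a scalar, as the docs suggest):

    import numpy as np
    from scipy.spatial.distance import cdist

    def cost_matrix(x, y):
        # x: (m, d) series, y: (n, d) series -> (m, n) cost matrix.
        # Squared Euclidean is the usual ground cost for soft-DTW.
        return cdist(x, y, metric="sqeuclidean")

    def minibatch_loss(dtw_value, X_batch, Y_batch):
        # Hypothetical helper: soft-DTW scores one pair of series at
        # a time, so a minibatch loss is just the mean over pairs.
        return np.mean([dtw_value(cost_matrix(x, y))
                        for x, y in zip(X_batch, Y_batch)])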

If it's the former, I think the dtaidistance package could compute it: the distance_matrix_fast method here. But that returns an (n, n) square ndarray, which doesn't match the shape given in the docs of the method mentioned above:

:param theta: numpy.ndarray, shape = (m, n)
        Distance matrix for DTW
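
In case it clarifies the shapes: distance_matrix_fast computes one DTW distance per pair of series in a collection, hence the (n, n) output, whereas the docstring above wants per-timestep costs for a single pair. In PyTorch I'd build that matrix by broadcasting, something like this (soft_dtw is a placeholder for whichever implementation ends up being used):

    import torch

    def cost_matrix(x, y):
        # x: (m, d) tensor, y: (n, d) tensor -> (m, n) squared
        # Euclidean cost matrix, computed by broadcasting so
        # gradients can flow back into x and y.
        return ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)

    # Placeholder usage, assuming soft_dtw maps an (m, n) theta to a
    # scalar loss:
    # loss = torch.stack([soft_dtw(cost_matrix(x, y))
    #                     for x, y in zip(X_batch, Y_batch)]).mean()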

If it's something else, could you suggest how to get the distance matrix? Any suggestions would be really helpful. Thank you! 😄 @mblondel

mblondel commented 5 years ago

I replied here: https://github.com/arthurmensch/didyprog/issues/5

ashutoshsaboo commented 5 years ago

Hi @mblondel, sorry, I'm still confused; I've commented on that issue again. It would be great if you could help! 😄