mblondel / soft-dtw

Python implementation of soft-DTW.
BSD 2-Clause "Simplified" License
534 stars, 97 forks

Doesn't this loss function have the issue that the beginning time steps will get a much larger gradient than the final ones? #25

Open RuABraun opened 3 years ago

RuABraun commented 3 years ago

I want to confirm that the issue I'm experiencing is fundamental to the loss and not caused by my implementation (which is a slight modification of this one).

It seems to me that because the final loss is a soft minimum over all alignment paths, changing the (0, 0) entry of the cost matrix causes a much larger change in the loss than changing a later entry, since (0, 0) feeds into every other entry of the accumulated cost matrix. Some simple test cases seem to confirm this. Can someone else confirm?
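For example, a minimal finite-difference check along these lines (the soft-min recursion follows Cuturi & Blondel, 2017; `gamma`, the matrix size, and the probed entries are arbitrary choices, not from this repo's code):

```python
import numpy as np

def softmin(a, b, c, gamma):
    # Soft minimum used in the soft-DTW recursion (Cuturi & Blondel, 2017).
    z = -np.array([a, b, c]) / gamma
    m = z.max()
    return -gamma * (m + np.log(np.exp(z - m).sum()))

def soft_dtw(D, gamma=1.0):
    # Forward recursion: R[i, j] = D[i, j] + softmin(R[i-1, j], R[i, j-1], R[i-1, j-1]).
    n, m = D.shape
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            R[i, j] = D[i - 1, j - 1] + softmin(
                R[i - 1, j], R[i, j - 1], R[i - 1, j - 1], gamma
            )
    return R[n, m]

rng = np.random.default_rng(0)
D = rng.random((10, 10))
eps, base = 1e-5, soft_dtw(D)
for i, j in [(0, 0), (5, 5), (9, 9)]:
    Dp = D.copy()
    Dp[i, j] += eps
    # Finite-difference sensitivity of the loss to this single cost entry.
    print((i, j), (soft_dtw(Dp) - base) / eps)
```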

v-nhandt21 commented 3 years ago

> I want to confirm that the issue I'm experiencing is fundamental to the loss and not caused by my implementation (which is a slight modification of this one).
>
> It seems to me that because the final loss is a soft minimum over all alignment paths, changing the (0, 0) entry of the cost matrix causes a much larger change in the loss than changing a later entry, since (0, 0) feeds into every other entry of the accumulated cost matrix. Some simple test cases seem to confirm this. Can someone else confirm?

Have you found a robust and correct soft-DTW implementation? I have tried this one, but the GPU runs out of memory and I can only set the batch size to 1.
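One workaround I am considering is gradient accumulation over micro-batches of size 1, which trades speed for memory. A rough PyTorch sketch (`soft_dtw_loss` here is a dummy stand-in for whatever differentiable soft-DTW loss module you actually use, and the model and shapes are placeholders):

```python
import torch

def soft_dtw_loss(x, y):
    # Dummy stand-in so the snippet runs; NOT soft-DTW. Replace with your loss.
    return ((x - y) ** 2).sum(dim=(1, 2))

model = torch.nn.Linear(16, 16)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

accum_steps = 8  # effective batch = accum_steps micro-batches of size 1
optimizer.zero_grad()
for step in range(accum_steps):
    x = torch.randn(1, 100, 16)  # micro-batch: one sequence pair
    y = torch.randn(1, 100, 16)
    loss = soft_dtw_loss(model(x), y).mean() / accum_steps
    loss.backward()  # gradients accumulate in .grad across micro-batches
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```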