Hi, during gradient computation you have to backpropagate through the entire computation chain. You therefore backpropagate through the filter taps to finally reach the two parameters of each filter. These two parameters accumulate the gradients coming from all the taps of the filter. In other words, a small change in either parameter changes every tap of the filter, so every tap contributes to that parameter's gradient.
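To make this concrete, here is a minimal, self-contained sketch of the idea (not the actual SincNet code from this repository; the class and parameter names are made up for illustration). The filters are rebuilt from the two learnable parameters on every forward pass, so calling `backward()` through `F.conv1d` continues through the filter taps and accumulates each tap's gradient into those two parameters:

```python
import torch
import torch.nn.functional as F

class ToySincConv(torch.nn.Module):
    def __init__(self, out_channels=4, kernel_size=101, sample_rate=16000):
        super().__init__()
        # Two learnable parameters per filter: low cutoff and bandwidth (Hz).
        self.low_hz = torch.nn.Parameter(
            torch.linspace(50.0, 1000.0, out_channels).unsqueeze(1))
        self.band_hz = torch.nn.Parameter(torch.full((out_channels, 1), 200.0))
        # Fixed time axis used to build the sinc taps (not learnable).
        n = torch.arange(kernel_size) - kernel_size // 2
        self.register_buffer("t", n.float() / sample_rate)

    def forward(self, x):
        low = torch.abs(self.low_hz)
        high = low + torch.abs(self.band_hz)
        # Band-pass filter = difference of two low-pass sinc filters.
        # Every tap is a differentiable function of `low` and `high`,
        # so autograd flows from each tap back into the two parameters.
        filters = (2 * high * torch.sinc(2 * high * self.t)
                   - 2 * low * torch.sinc(2 * low * self.t))
        return F.conv1d(x, filters.unsqueeze(1))

conv = ToySincConv()
x = torch.randn(1, 1, 16000)
conv(x).sum().backward()

# Only the two parameter tensors receive gradients; `filters` is an
# intermediate tensor rebuilt on every forward pass, not a Parameter.
print(conv.low_hz.grad.shape)   # torch.Size([4, 1])
print(conv.band_hz.grad.shape)  # torch.Size([4, 1])
```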
On Thu, 7 May 2020 at 23:40, piplyman wrote:

> SincConv has two parameters that need to be updated, but during the backward pass, F.conv1d(...) updates all the values of self.filters. Can I understand it this way?