paul-krug / pytorch-tcn

(Realtime) Temporal Convolutions in PyTorch
MIT License

Different convolutions within same TCN layer #14

Open FabianB98 opened 5 months ago

FabianB98 commented 5 months ago

Hi,

I'm currently trying to implement the network architecture described in the paper "User-Driven Fine-Tuning for Beat Tracking" by Pinto et al., 2021. In this architecture, the authors propose a TCN in which each layer uses two separate dilated convolutions, the second of which has a dilation rate twice that of the first. In figure 2 of that paper, they depict their TCN layout as follows:

[Screenshot of figure 2 from Pinto et al., 2021: a TCN layer with two parallel dilated convolutions]

As you can see, there are two dilated convolutions per TCN layer: "Dilated Convolution 1" with a dilation rate of dr1, and "Dilated Convolution 2" with a dilation rate of dr2 = 2 * dr1. The outputs of these two convolutions are concatenated before the activation function, dropout, and a 1x1 convolution (which keeps the dimensionality constant across TCN layers) are applied.
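For reference, here is a minimal PyTorch sketch of such a block as I understand it from the figure. This is not code from the paper or from pytorch-tcn; the module name, the choice of ELU, the dropout rate, and the residual connection at the end are my own assumptions for illustration.

```python
import torch
import torch.nn as nn


class DualDilationBlock(nn.Module):
    """Sketch of a TCN layer with two parallel causal dilated convolutions.

    Illustrative only: names and hyperparameters are assumptions, not taken
    from the paper or from pytorch-tcn.
    """

    def __init__(self, channels: int, kernel_size: int, dilation: int, dropout: float = 0.1):
        super().__init__()
        # "Dilated Convolution 1" with dilation dr1 and
        # "Dilated Convolution 2" with dilation dr2 = 2 * dr1.
        self.conv1 = nn.Conv1d(channels, channels, kernel_size,
                               dilation=dilation,
                               padding=(kernel_size - 1) * dilation)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size,
                               dilation=2 * dilation,
                               padding=(kernel_size - 1) * 2 * dilation)
        self.activation = nn.ELU()  # activation choice is an assumption
        self.dropout = nn.Dropout(dropout)
        # 1x1 convolution maps the concatenated channels back to `channels`.
        self.pointwise = nn.Conv1d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Make the convolutions causal by trimming the right-hand padding
        # so the output length matches the input length.
        y1 = self.conv1(x)[..., : x.shape[-1]]
        y2 = self.conv2(x)[..., : x.shape[-1]]
        # Concatenate, then activation, dropout, and the 1x1 convolution.
        y = torch.cat([y1, y2], dim=1)
        y = self.dropout(self.activation(y))
        y = self.pointwise(y)
        return x + y  # residual connection (assumed from the figure)
```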

From what I could find so far, this package appears to support only a single dilation rate per TCN layer, which leads me to believe that this architecture cannot be implemented with it directly. Is my understanding correct? Or am I missing something (potentially obvious), and it is in fact possible to implement the proposed architecture with this package?

paul-krug commented 5 months ago

The architecture of the residual block is indeed different from the one implemented in this package. However, you could fork the repo and modify the temporal block to include two parallel convolutions with different dilation rates. That should be relatively straightforward.

FabianB98 commented 5 months ago

Thank you for your response. I'll adjust the temporal block in a fork later this week or next week, when I find the time. Would you mind if I tried to incorporate these changes in a non-API-breaking way, so that we could merge them in a PR?

FabianB98 commented 4 months ago

It took a bit longer than anticipated, but I think my changes are now ready to be reviewed in a PR (see #15). I wanted to be sure the changes work by training a network with the modified package (and along the way I found and tweaked a few things). Who could have thought that training a neural network on a regular consumer GPU might take a while :D