Hello,
The dilated residual layer in Fig. 2 includes a 1x1 convolution after the ReLU activation. However, I cannot find an explanation of the role of this 1x1 convolution. Is this 1x1 conv used to introduce more parameters and improve the expressiveness of the TCN?
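For reference, here is how I currently read the dilated residual layer from Fig. 2 — a rough PyTorch sketch of my understanding, not the released code (the class name and parameters are my own):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedResidualLayer(nn.Module):
    """My reading of the dilated residual layer in Fig. 2:
    dilated 3x1 conv -> ReLU -> 1x1 conv -> residual connection."""
    def __init__(self, dilation, in_channels, out_channels):
        super().__init__()
        # Dilated temporal convolution; padding keeps the sequence length unchanged.
        self.conv_dilated = nn.Conv1d(in_channels, out_channels, kernel_size=3,
                                      padding=dilation, dilation=dilation)
        # The 1x1 convolution in question, applied after the ReLU.
        self.conv_1x1 = nn.Conv1d(out_channels, out_channels, kernel_size=1)

    def forward(self, x):
        out = F.relu(self.conv_dilated(x))
        out = self.conv_1x1(out)
        return x + out  # residual connection

# Quick shape check: (batch, channels, time)
layer = DilatedResidualLayer(dilation=2, in_channels=64, out_channels=64)
print(layer(torch.randn(1, 64, 100)).shape)  # torch.Size([1, 64, 100])
```

Please correct me if this does not match the intended design.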