MarkusThill / bioma-tcn-ae

Minimal Working Example of a (baseline) Temporal Convolutional Autoencoder (TCN-AE) for Anomaly Detection in Time Series

Question about 2.6.3 Utilizing hidden representations for the anomaly detection task, 2.6.4 Feature map reduction #5

Open xignos3108 opened 1 month ago

xignos3108 commented 1 month ago

Hi, I am studying your research and have a few questions regarding your code.

In Section 2.6.3, "Utilizing hidden representations for the anomaly detection task," you take the blue bar from Fig. 3 and apply a 1 x 1 convolution layer to reduce the channel size from 16 (as shown in the figure) to 1.

However, from the provided explanation, it seems the 1 x 1 convolution layer is not included in the training loop. I am curious if using an untrained convolution layer to reduce the feature dimension is feasible or if it might negatively impact performance.
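To make sure I understand correctly: a 1 x 1 convolution with untrained (randomly initialized) weights would just be a fixed random projection across channels, something like this rough numpy sketch (shapes and values are hypothetical, not taken from your code):

```python
import numpy as np

# Hypothetical sketch: a 1 x 1 convolution acts pointwise in time, so it is
# just a linear map over the channel axis. With untrained weights, reducing
# 16 channels to 1 amounts to a random projection of the hidden representation.
rng = np.random.default_rng(0)
feature_map = rng.standard_normal((100, 16))  # (time steps, channels), made up
w = rng.standard_normal((16, 1))              # untrained 1 x 1 conv kernel
b = np.zeros(1)                               # untrained bias
reduced = feature_map @ w + b                 # shape (100, 1)
print(reduced.shape)
```

Is this essentially what happens in your setup, and does the random projection not lose information that the anomaly detector needs?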

Additionally, in Section 2.6.4, "Feature map reduction," you place 1 x 1 convolutions after each dilated convolutional layer to reduce the feature map dimension. Based on Issue #3, my understanding is that these layers are optionally included in the training loop. Typically, adding additional layers (such as 1 x 1 convolutions) would increase training time and the number of parameters.

Could you please explain how adding these additional layers (1 x 1 convolutions) helps in reducing trainable parameters and training time?
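My current guess is that the 1 x 1 convolution itself adds a few parameters but shrinks the channel count seen by the following (wider) dilated convolution, so the total goes down. Here is the back-of-the-envelope arithmetic I have in mind (all numbers are hypothetical, not from the paper):

```python
def conv1d_params(kernel_size, c_in, c_out):
    # Weights plus biases of a 1-D convolution layer.
    return kernel_size * c_in * c_out + c_out

k, c = 8, 64      # hypothetical kernel size and feature-map count
reduce_to = 8     # hypothetical channel count after the 1 x 1 reduction

# Without reduction: the next dilated conv sees all 64 channels.
without_reduction = conv1d_params(k, c, c)

# With reduction: a cheap 1 x 1 conv (64 -> 8) followed by the
# dilated conv operating on only 8 input channels.
with_reduction = conv1d_params(1, c, reduce_to) + conv1d_params(k, reduce_to, c)

print(without_reduction, with_reduction)  # the reduced variant is far smaller
```

Is this the mechanism you had in mind, or does the saving come from somewhere else?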

Could you also provide the code for this part?

Thank you for your help :)