Closed: amaletzk closed this issue 1 year ago
Hi amaletzk,
Thanks for using our repo! You are right that for the paper we used weight normalization (https://pytorch.org/docs/stable/generated/torch.nn.utils.weight_norm.html), but it did not play well with PyTorch Lightning's saving and loading of models, so we removed it here. We also found it had no influence on performance.

On your second point, it does indeed look like ordinary convolutions were used in the decoder. The difference between transposed and ordinary convolutions is not that large in our architecture, but I will look into whether it actually makes a difference. For now I'll update the code to fix the mistake!
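For readers following along, here is a minimal, hypothetical sketch of why weight normalization interacts awkwardly with checkpointing. It uses a plain `nn.Conv1d` with made-up hyperparameters, not the repo's `CausalConvolutionBlock`; the point is only how `torch.nn.utils.weight_norm` changes the parameter set stored in the state dict:

```python
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm, remove_weight_norm

# Placeholder layer, not the repo's CausalConvolutionBlock.
conv = nn.Conv1d(in_channels=12, out_channels=64, kernel_size=3, padding=2, dilation=1)

# Weight normalization reparametrizes `weight` into a direction tensor
# `weight_v` and a magnitude tensor `weight_g`.
conv = weight_norm(conv)
print([name for name, _ in conv.named_parameters()])
# e.g. ['bias', 'weight_g', 'weight_v'] instead of ['weight', 'bias']

# Because the state dict now holds weight_g/weight_v rather than weight,
# checkpoints saved with and without the hook are not interchangeable,
# which is the kind of save/load friction described above. Removing the
# hook restores a plain `weight` parameter before saving:
remove_weight_norm(conv)
torch.save(conv.state_dict(), "conv.pt")
```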
Dear authors,
thanks for publishing your great repo!
I tried to train my own VAE using your code, and although it works in principle, I ran into a few questions. Comparing the code to your European Heart Journal publication (https://doi.org/10.1093/ehjdh/ztac038), in particular the supplemental material, I noticed a couple of differences:
1. `ecgxai.network.causalcnn.modules.CausalConvolutionBlock` does not apply the weight normalization described in the paper.
2. `ecgxai.network.causalcnn.decoder.CausalCNNVDecoder` has ordinary convolutions instead of transposed convolutions. To my understanding, this is because here the `forward` parameter passed to `CausalConvolutionBlock()` actually sets the value of `final`. Changing this by explicitly setting `forward=forward` leads to an error when starting training: "RuntimeError: Trying to create tensor with negative dimension -512: [128, 64, 1, -512]". I can provide more details if needed.

I'd be grateful if you could shed some light on this.