ghost opened this issue 3 years ago
@capavrulus I am strictly following the paper's details here, although you can change the dropout, since it is not mentioned explicitly in the paper.
I have the same issue: after each layer of the Postnet the sequence length decreases, which leads to y_g_hat having a different size and mismatching the ground-truth tensors.
To handle this problem, I changed the padding to 'same' in torch 1.9.
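For anyone hitting the same mismatch, here is a minimal sketch of the difference; the even kernel size of 32 is taken from a later comment in this thread, and the 80 mel channels and 512-channel width are only illustrative:

```python
import torch
import torch.nn as nn

# With an even kernel size, symmetric padding of (kernel_size - 1) // 2
# drops one frame per conv layer; padding='same' (PyTorch >= 1.9, stride 1)
# keeps the output length equal to the input length.
x = torch.randn(1, 80, 100)  # (batch, mel channels, frames)

conv_old = nn.Conv1d(80, 512, kernel_size=32, padding=(32 - 1) // 2)
conv_same = nn.Conv1d(80, 512, kernel_size=32, padding='same')

print(conv_old(x).shape)   # torch.Size([1, 512, 99])  -> one frame shorter
print(conv_same(x).shape)  # torch.Size([1, 512, 100]) -> length preserved
```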
Hello @v-nhandt21, did you change the padding in every layer of the Postnet model from padding=(n_filts - 1) // 2 to padding='same'? Thank you.
Hi, have you tried this successfully? I also encountered the same problem: the dimensions do not match.
@SupreethRao99 @velonica0 I cannot remember exactly what I did; I have since cleaned up my code.
But you can check the padding in this ConvNorm class: https://www.tutorialexample.com/keeping-the-shape-of-input-and-output-same-in-pytorch-conv1d-pytorch-tutorial/
We could also try this to keep the same shape: https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html
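As a rough sketch of what such a ConvNorm-style wrapper can look like (names and defaults here are hypothetical, loosely modeled on the Tacotron2-style class the link above describes), computing the padding from the kernel size and dilation keeps the length for odd kernel sizes:

```python
import torch
import torch.nn as nn

class ConvNorm(nn.Module):
    """Conv1d wrapper whose symmetric padding is derived from the kernel
    size and dilation, so odd kernel sizes preserve the sequence length."""
    def __init__(self, in_channels, out_channels, kernel_size=5, dilation=1):
        super().__init__()
        padding = (kernel_size - 1) // 2 * dilation
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size,
                              padding=padding, dilation=dilation)

    def forward(self, x):
        return self.conv(x)

x = torch.randn(1, 80, 100)
print(ConvNorm(80, 512, kernel_size=5)(x).shape)  # torch.Size([1, 512, 100])
```

Note that this only preserves the length exactly for odd kernel sizes; an even kernel still loses one frame per layer with integer symmetric padding.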
Yes, I think I was able to get past the issue, but the model's performance was horrible, to say the least. Even after training on the full dataset with multiple GPUs for the full 1 million training steps, the performance didn't improve, which is why I moved on.
Hi, thanks. I resorted to using padding='same' to overcome the issue in PyTorch 1.12.
@v-nhandt21 @SupreethRao99 Thank you for your help. I am now training on my own Raman spectrum data for one million steps, and Gen Loss Total is 4.7. How can I reduce the loss? Also, is there any paper or code that uses a GAN for one-dimensional data denoising or restoration? I would like to learn from it; thank you very much.
I noticed that the Postnet filter size is 32, which makes the output have a different shape than the input. Also, the dropout rate is so high that it is not learning anything meaningful. Is it supposed to be like this?
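For what it's worth, the shrinkage compounds with the number of layers, and on PyTorch versions without padding='same' one can pad asymmetrically by hand. This is only a sketch: the kernel size of 32 comes from the comment above, while the class name and channel counts are made up for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SameLengthConv1d(nn.Module):
    """Illustrative wrapper: an even kernel size needs asymmetric padding
    (e.g. 15 left, 16 right for kernel_size=32) to keep the length."""
    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        total = kernel_size - 1                      # total padding for stride 1
        self.pad = (total // 2, total - total // 2)  # (15, 16) for kernel_size=32
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size, padding=0)

    def forward(self, x):
        return self.conv(F.pad(x, self.pad))

x = torch.randn(1, 80, 200)
print(SameLengthConv1d(80, 80, kernel_size=32)(x).shape)  # torch.Size([1, 80, 200])
```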