jaywalnut310 / glow-tts

A Generative Flow for Text-to-Speech via Monotonic Alignment Search
MIT License
660 stars 151 forks

Why don't you use PyTorch's native layer norm? #12

Closed hadaev8 closed 4 years ago

jaywalnut310 commented 4 years ago

There was no critical reason not to use the native implementation of layer norm. You can replace my implementation with the native one; you may need to transpose the last two dimensions to do so.

When I decided to implement it myself, I found that the native layer norm operates on the last dimensions, whereas I wanted it to operate on the channel dimension only (given the shape [batch_size, channel, time_step]).
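To illustrate the transpose workaround (a minimal sketch; shapes are illustrative, not from the repo): nn.LayerNorm normalizes over the trailing dimension(s), so normalizing over the channel axis of a [batch_size, channel, time_step] tensor means moving channels to the end and back.

```python
import torch
import torch.nn as nn

B, C, T = 2, 4, 8  # illustrative sizes
x = torch.randn(B, C, T)

# nn.LayerNorm normalizes over the LAST dimension(s); to normalize
# over the channel axis only, transpose channels to the end and back.
ln = nn.LayerNorm(C)
y = ln(x.transpose(1, 2)).transpose(1, 2)

# Each (batch, time) slice now has ~zero mean and unit variance
# across the channel dimension.
print(y.shape)  # torch.Size([2, 4, 8])
```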

Closing the issue.

hadaev8 commented 4 years ago

You can use it like this without transposing: nn.GroupNorm(1, channels)
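A quick check of this suggestion (a sketch with illustrative shapes): nn.GroupNorm with a single group needs no transpose because it normalizes each sample over all channels and remaining dims jointly, which makes it equivalent to nn.LayerNorm over the full trailing [C, T] shape.

```python
import torch
import torch.nn as nn

B, C, T = 2, 4, 8  # illustrative sizes
x = torch.randn(B, C, T)

# One group => statistics over all channels and time steps per sample,
# no transpose needed.
gn = nn.GroupNorm(1, C)

# With GroupNorm's default affine params (weight=1, bias=0), this equals
# LayerNorm over the full trailing [C, T] shape:
ln = nn.LayerNorm([C, T], elementwise_affine=False)
print(torch.allclose(gn(x), ln(x), atol=1e-5))  # True
```

One caveat worth noting: because a single-group GroupNorm pools statistics over time as well as channels, it is not numerically identical to a per-time-step channel-only norm; it is a drop-in only if that pooling is acceptable for the model.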

hadaev8 commented 4 years ago

@jaywalnut310 Why don't you use PyTorch's vanilla gradient clipping?

jaywalnut310 commented 4 years ago

@hadaev8
I didn't know GroupNorm could replace my implementation of LayerNorm, thank you :) Actually, the gradient clipping is the same as PyTorch's vanilla clip_grad_value_. When I implemented it, I wanted to clip gradients by value and, at the same time, get the gradient norm. As executing two clipping methods (clip_grad_value_ for gradient clipping and clip_grad_norm_ for getting the gradient norm) is time-consuming, I implemented the clipping method myself.
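A sketch of that combined pass (the function name and structure are illustrative, not the repo's exact code): accumulate the pre-clipping global gradient norm and clip each gradient element-wise in a single loop over the parameters, instead of calling clip_grad_value_ and clip_grad_norm_ separately.

```python
import torch

def clip_value_and_get_norm(parameters, clip_value):
    # Accumulate the squared global grad norm, then clip by value,
    # all in one pass over the parameters.
    total_norm_sq = 0.0
    for p in parameters:
        if p.grad is None:
            continue
        total_norm_sq += p.grad.detach().norm(2).item() ** 2
        p.grad.detach().clamp_(min=-clip_value, max=clip_value)
    return total_norm_sq ** 0.5

# Usage sketch: after loss.backward(),
#   grad_norm = clip_value_and_get_norm(model.parameters(), 5.0)
```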

hadaev8 commented 4 years ago

@jaywalnut310 I tried to start training (private Russian dataset, text tokenization changed), but in fp16 the model crashes at some point: the apex gradient scaling goes to zero. Any ideas why this happens?

Also, does audio noise really make training better? Given that it is divided by the max wav value, the noise amplitude is at the fp16 precision border.
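The scale in question can be checked directly (a numeric sketch, assuming max_wav_value = 32768 for 16-bit audio): one quantization step divided by full scale is 2**-15, which is below the smallest normal fp16 magnitude of 2**-14, i.e. it lands in fp16's subnormal range.

```python
import torch

max_wav_value = 32768.0  # assumed 16-bit full scale
step = 1.0 / max_wav_value  # one quantization step after scaling = 2**-15

# Smallest *normal* fp16 magnitude is 2**-14 ~ 6.10e-5, so the dither
# amplitude falls into fp16's subnormal range:
print(step < torch.finfo(torch.float16).tiny)  # True
```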

jaywalnut310 commented 4 years ago

@hadaev8 It seems you encountered the problem of a numerically unstable loss. What I can tell you is that you can switch the add_noise and fp16_run configs on/off: fp32 training could increase numerical stability, and the audio noise is not strictly necessary either; I add it just to dequantize the 16-bit-quantized wav.

If the unstable training you see is similar to https://github.com/jaywalnut310/glow-tts/issues/15, could you discuss it in that issue?

hadaev8 commented 4 years ago

@jaywalnut310 What's the benefit of dequantization? This is a sample spectrogram: https://i.imgur.com/WL40jpX.png

And this is the difference between the two spectrograms: https://i.imgur.com/WITFuT1.png

So basically it only affects the silent, no-sound parts.

Also, shouldn't it be torch.rand_like(audio) - 0.5?
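For reference (a sketch; variable names are illustrative): torch.rand_like draws from the uniform [0, 1) distribution, so the noise as written carries a DC offset of half a quantization step, while subtracting 0.5 makes the dither zero-mean, as the question suggests.

```python
import torch

max_wav_value = 32768.0  # assumed 16-bit full scale
audio = torch.randint(-32768, 32768, (16000,)).float()

# torch.rand_like(audio) is uniform on [0, 1): its mean is +0.5,
# i.e. a half-step DC offset added to every sample.
biased = (audio + torch.rand_like(audio)) / max_wav_value

# Subtracting 0.5 centres the dither on zero, removing that offset.
zero_mean = (audio + torch.rand_like(audio) - 0.5) / max_wav_value
```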