ZhikangNiu / encodec-pytorch

Unofficial PyTorch implementation of "High Fidelity Neural Audio Compression" (EnCodec)
MIT License

AMP loss -> NaN #8

Closed ZhikangNiu closed 8 months ago

ZhikangNiu commented 1 year ago

AMP training isn't stable. The loss becomes NaN.
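
For context, the failure shows up in a standard mixed-precision training step. A minimal sketch of such a loop, assuming the usual `torch.cuda.amp` autocast/GradScaler pattern (illustrative names, not the repo's exact code):

```python
import torch

scaler = torch.cuda.amp.GradScaler()

def train_step(model, batch, optimizer):
    optimizer.zero_grad()
    # The forward pass runs in fp16 under autocast; very small constants
    # (such as a 1e-5 smoothing epsilon) can lose precision or flush to zero
    # in this reduced-precision range.
    with torch.cuda.amp.autocast():
        loss = model(batch)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()
```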

ZhikangNiu commented 8 months ago

I found that this bug is caused by `laplace_smoothing`. We can scale the epsilon to 1e-3 and reduce the learning rate; with that change, training works well with fp16!
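
For reference, `laplace_smoothing` in EnCodec-style VQ codebook updates is typically the one-liner sketched below (based on the common vector-quantization implementation; treat the exact signature as an assumption about this repo). With `epsilon = 1e-5`, the constant sits in fp16's subnormal range (normal fp16 numbers start around 6.1e-5), so under mixed precision it can flush to zero or lose precision and the normalization can blow up to inf/NaN; `epsilon = 1e-3` stays comfortably inside the normal fp16 range.

```python
import torch

def laplace_smoothing(x: torch.Tensor, n_categories: int, epsilon: float = 1e-5) -> torch.Tensor:
    # Smooth the per-code cluster-size counts so no probability is exactly zero.
    # In fp16, 1e-5 is a subnormal value and may effectively vanish, making the
    # normalization unstable; a larger epsilon such as 1e-3 avoids this.
    return (x + epsilon) / (x.sum() + n_categories * epsilon)
```

The reported fix amounts to calling this with a larger epsilon (e.g. `epsilon=1e-3`) wherever the codebook cluster sizes are smoothed, together with a lower learning rate.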

ZhikangNiu commented 8 months ago

[image attachment]

SaladDay commented 3 weeks ago

Hello, I have a question: why does a NaN produced in the VQ update affect the overall loss function?

ZhikangNiu commented 3 weeks ago

> Hello, I have a question: why does a NaN produced in the VQ update affect the overall loss function?

It will cause the training run to fail: the NaN values from the codebook update propagate through the quantizer output and the commitment loss, and since those terms are summed into the total loss, the overall loss becomes NaN as well.
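
To make this concrete, a small illustration of the propagation (hypothetical tensors, not the repo's code): once any codebook entry is NaN, every quantity computed from it, including the total loss and its gradients, is NaN too.

```python
import torch

# Pretend one codebook entry became NaN during the EMA/cluster-size update.
codebook = torch.randn(4, 8)
codebook[2] = float("nan")

indices = torch.tensor([0, 2, 3])          # lookup touches the NaN entry
quantized = codebook[indices]
commit_loss = (quantized - torch.randn_like(quantized)).pow(2).mean()
total_loss = 1.0 + commit_loss              # any sum involving NaN is NaN

print(commit_loss, total_loss)  # both print nan -> backward() fills grads with NaN
```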