r9y9 / deepvoice3_pytorch

PyTorch implementation of convolutional neural network-based text-to-speech synthesis models
https://r9y9.github.io/deepvoice3_pytorch/

Two problems about reducing memory usage during training #192

Open YihuaLiang95 opened 4 years ago

YihuaLiang95 commented 4 years ago

Hi, I found that the memory used by python train.py [options] keeps rising during training (roughly 1 GB per 100K steps), so I am trying to track down the cause. Below are two parts of the code that might explain the growth, but I am not sure about either of them.

  1. In train(), when computing the guided attention loss attn_loss, a weight matrix W is created at every step but never cleared between iterations (see the first sketch after this list).
  2. In eval(), I think using with torch.no_grad(): and model.eval() in synthesis.tts() is equivalent to building a second model, a kind of model_eval(), just for evaluation. If a new model really has to be built, maybe deleting it before the function returns could reduce memory usage? (see the second sketch after this list)
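
To illustrate point 1, here is a minimal sketch of caching W by its shape instead of rebuilding it every step. The helper names, the cache, and the g=0.2 width are my own placeholders, not the repo's actual code; the formula is the guided attention weight of Tachibana et al. (2017), W[n, t] = 1 - exp(-(n/N - t/T)^2 / (2 g^2)).

```python
import torch

# Hypothetical cache, keyed by shape and device, so the same W tensor is
# reused across training steps instead of being reallocated every time.
_W_cache = {}

def guided_attention_matrix(N, T, g=0.2, device="cpu"):
    key = (N, T, str(device))
    if key not in _W_cache:
        n = torch.arange(N, dtype=torch.float32, device=device).unsqueeze(1) / N
        t = torch.arange(T, dtype=torch.float32, device=device).unsqueeze(0) / T
        # W[n, t] = 1 - exp(-(n/N - t/T)^2 / (2 g^2))
        _W_cache[key] = 1.0 - torch.exp(-((n - t) ** 2) / (2.0 * g ** 2))
    return _W_cache[key]

def guided_attention_loss(attn, g=0.2):
    # attn: (B, N, T) decoder attention weights
    B, N, T = attn.shape
    W = guided_attention_matrix(N, T, g, attn.device)
    return (attn * W).mean()
```

That said, a W created fresh each step without requires_grad should be freed as soon as the step's references go out of scope, so if memory really grows by about 1 GB per 100K steps, the leak may instead come from keeping attn_loss alive across steps, e.g. accumulating running_loss += attn_loss instead of running_loss += attn_loss.item().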
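To illustrate point 2: as far as I understand, model.eval() only switches layers such as dropout and batch norm into inference mode, and torch.no_grad() only disables autograd graph recording; neither builds a second copy of the model, so there should be no model_eval() to delete. A minimal sketch (the evaluate wrapper is my own illustration, not the repo's synthesis.tts()):

```python
import torch

@torch.no_grad()  # no autograd graph is recorded inside this function
def evaluate(model, inputs):
    # Hypothetical wrapper: flips the SAME model (same parameters, no
    # copy) into inference mode, runs it, then restores training mode.
    was_training = model.training
    model.eval()
    outputs = model(inputs)
    if was_training:
        model.train()
    return outputs
```

If GPU memory still looks high after evaluation, that is usually PyTorch's caching allocator holding on to freed blocks; torch.cuda.empty_cache() releases them back to the driver but does not change how much memory the process actually needs.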

If you have time, could you please give me some advice? Thanks.