Hi, I found that the memory used by `python train.py [options]` keeps rising during training (about 1 GB per 100K steps).
So I am trying to track down the cause. Here are two parts of the code that may be responsible for the growth, but I am not sure about either of them.
In `train()`, when the guided attention loss `attn_loss` is calculated, a matrix `W` is created at every step but is not cleared after each iteration.
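To make the concern concrete, here is a minimal sketch of reusing one guided-attention matrix per shape instead of rebuilding it each step. The function names (`guided_attention_matrix`, `cached_guided_attention`) and the numpy formulation are my own assumptions for illustration, not the repo's actual API; the formula follows the usual guided attention definition W[n, t] = 1 - exp(-((n/N - t/T)^2) / (2 g^2)).

```python
# Hypothetical sketch: cache the guided-attention weight matrix so that
# each training step reuses one array per (N, T, g) instead of allocating
# a fresh W. Names here are assumptions, not the repo's real functions.
import numpy as np

def guided_attention_matrix(N, T, g=0.2):
    """W[n, t] = 1 - exp(-((n/N - t/T)^2) / (2 g^2)); penalizes
    attention weights that fall far from the diagonal."""
    n = np.arange(N, dtype=np.float64).reshape(-1, 1) / N
    t = np.arange(T, dtype=np.float64).reshape(1, -1) / T
    return 1.0 - np.exp(-((n - t) ** 2) / (2.0 * g * g))

_W_CACHE = {}

def cached_guided_attention(N, T, g=0.2):
    """Return a cached matrix for this (N, T, g), building it only once."""
    key = (N, T, g)
    if key not in _W_CACHE:
        _W_CACHE[key] = guided_attention_matrix(N, T, g)
    return _W_CACHE[key]
```

If the real leak is the graph rather than the array, the usual fix is making sure `W` is created outside autograd (e.g. as a plain numpy array or a detached tensor), since a fresh numpy array each step is freed by the garbage collector anyway.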
In `eval()`, I think using `with torch.no_grad():` and `model.eval()` in `synthesis.tts()` is equivalent to building a new model (call it `model_eval`) for evaluation. If a new model must be built, maybe deleting it before exiting the function could reduce the memory used?
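For reference, my understanding is that `model.eval()` in PyTorch does not construct a second model: it flips the `training` flag on the same module object and returns that same object, while `torch.no_grad()` only stops autograd from recording the graph inside its scope. A minimal mock (a hypothetical stand-in for `torch.nn.Module`, not the repo's model) illustrating that semantics:

```python
# Hypothetical MockModule showing what Module.eval()/train() actually do:
# toggle a flag on the SAME object and return it, with no new allocation.
class MockModule:
    def __init__(self):
        self.training = True  # torch modules start in training mode

    def eval(self):
        # torch's Module.eval() is essentially self.train(False)
        return self.train(False)

    def train(self, mode=True):
        self.training = mode
        return self  # returns the same module, not a copy
```

So if this reading is right, there would be no separate `model_eval` object to delete, and the question becomes whether something else inside the evaluation path holds references across calls.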
If you have time, could you please give me some advice?
Thanks.