fgnt / padertorch

A collection of common functionality to simplify the design, training and evaluation of machine learning models based on pytorch with an emphasis on speech processing.
MIT License

Move backwards step into model #64

Open jensheit opened 4 years ago

jensheit commented 4 years ago

In the case of multiple chained models, for example source separation + speech recognition, it might be necessary to do intermediate backward steps to reduce the GPU memory required during training. The user could be enabled to use multiple backward steps by moving the backward step and the train_step into the model.
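To illustrate the memory argument, plain PyTorch already allows splitting a chained model's backward into two passes by detaching at the stage boundary. The names below (`separator`, `recognizer`, the toy losses) are invented for this sketch and are not part of padertorch:

```python
import torch

# Toy two-stage chain standing in for source separation -> recognition.
separator = torch.nn.Linear(8, 8)
recognizer = torch.nn.Linear(8, 2)

x = torch.randn(4, 8)

sep_out = separator(x)
# Cut the autograd graph at the stage boundary; the gradient is
# bridged across the cut manually below.
sep_in = sep_out.detach().requires_grad_(True)
loss = recognizer(sep_in).pow(2).mean()

loss.backward()                # frees the recognizer's intermediate buffers
sep_out.backward(sep_in.grad)  # then backpropagates through the separator
```

The two backward calls together produce the same parameter gradients as a single end-to-end backward, but each stage's graph can be freed as soon as its own backward pass has run.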

However, we have to consider the implications for the Hook post_step, which is at the moment called after train_step but before the backward step.

Another open question is how to handle the timer information.

boeddeker commented 4 years ago

However, we have to consider the implications for the Hook post_step, which is at the moment called after train_step but before the backward step.

This is currently done to decrease the memory consumption (i.e. after the post step we can delete the input and the review).


Another point to consider is that the multi-GPU source code would have to be changed. I don't know whether calling backward in a thread is allowed and recommended in pytorch.


I would say we plan to implement it when we see a demand for it.


A possible workaround (for those who need it now):

The code wouldn't be pretty, but it should do the task.
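The snippet itself is not included in the thread, but the rough shape of such a workaround is: perform the backward calls inside the model's forward and return an already-detached loss, so the surrounding training loop only has to call `optimizer.step()`. Everything below is a hypothetical sketch in plain PyTorch, not the padertorch Trainer interface; with padertorch one would additionally have to keep the Trainer from calling backward on the detached loss itself:

```python
import torch

class ChainedModel(torch.nn.Module):
    """Hypothetical sketch: the model performs its own intermediate
    backward steps, so each stage's graph is freed as early as possible."""

    def __init__(self):
        super().__init__()
        self.separator = torch.nn.Linear(8, 8)
        self.recognizer = torch.nn.Linear(8, 2)

    def forward(self, x):
        sep_out = self.separator(x)
        # Detach at the stage boundary and bridge the gradient manually.
        sep_in = sep_out.detach().requires_grad_(True)
        loss = self.recognizer(sep_in).pow(2).mean()
        loss.backward()                # frees the recognizer graph
        sep_out.backward(sep_in.grad)  # frees the separator graph
        return loss.detach()           # caller must NOT call backward again


model = ChainedModel()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

loss = model(torch.randn(4, 8))
optimizer.step()       # gradients were already accumulated inside forward
optimizer.zero_grad()
```

As the issue notes, this clashes with hooks and timers that assume the trainer owns the backward step, which is exactly why the clean solution would be to move the backward step into the model's contract.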