Closed ghost closed 7 years ago
Generally speaking, increasing the batch size to the largest your GPU supports will noticeably speed up training. You can also increase the width of the target field, which results in fewer redundant computations. If hardware is more constrained, you can train a smaller model, which takes significantly less time to train — for example, a 3-stack network with dilations = 7 and a target field of 801.
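As a rough illustration of why a smaller model trains faster (the function name, the kernel size of 3, and the dilation pattern 1, 2, 4, ..., 2**d are assumptions for this sketch, not taken from the repo's actual configuration), the receptive field of a WaveNet-style network grows with both the number of stacks and the maximum dilation exponent:

```python
# Hypothetical sketch: how the number of stacks and dilations determine
# the receptive field of a stacked dilated-convolution network.
# Assumes kernel size 3 and dilations 1, 2, 4, ..., 2**max_dilation_exp
# per stack; the repo's actual defaults may differ.
def receptive_field(num_stacks, max_dilation_exp, kernel_size=3):
    # Each stack contributes (kernel_size - 1) * dilation extra context
    # per layer; summing the geometric dilation series gives the total.
    dilation_sum = sum(2 ** d for d in range(max_dilation_exp + 1))
    return num_stacks * (kernel_size - 1) * dilation_sum + 1

# A 3-stack network with dilations up to 2**7:
print(receptive_field(3, 7))
```

A wider target field then amortizes this receptive-field cost over more output samples per forward pass, which is why widening it cuts redundant computation.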
Thank you, I will try it out.
How do you edit the number of stacks, the dilations, and the target field?
I like this framework. I tested both the training and denoising phases, and they work. I was wondering about ways to speed up the training phase: aside from using a GPU, what other tricks can I use, and what are the ideal parameters? Thanks a lot for your time and research!