Hi authors,
When I train with the source code on a single TitanX GPU (so without distributed training), I estimate that training for 300 epochs will take at least 30.5 days. Do you have any suggestions for making it faster? I would really appreciate it if you could respond.
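For reference, here is a minimal sketch of the kind of back-of-envelope extrapolation behind that 30.5-day figure; the per-iteration time and iterations-per-epoch values below are hypothetical placeholders I chose to match the total, not measurements from the repo:

```python
# Rough single-GPU training-time estimate (all numbers below are assumptions).
seconds_per_iter = 0.75      # assumed time per training iteration on one TitanX
iters_per_epoch = 11_719     # assumed dataset_size / batch_size
epochs = 300

total_seconds = seconds_per_iter * iters_per_epoch * epochs
total_days = total_seconds / 86_400
print(f"Estimated training time: {total_days:.1f} days")  # ~30.5 days with these numbers
```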
Thanks~~