tgc1997 / RMN

IJCAI2020: Learning to Discretely Compose Reasoning Module Networks for Video Captioning

Question about training time, thanks #3

Closed WangLanxiao closed 4 years ago

WangLanxiao commented 4 years ago

I used 8 GPUs with batch_size=32, and training 3 epochs took 11 hours. How long did it take you to train 20 epochs? Thanks for your work!

tgc1997 commented 4 years ago

For MSR-VTT (batch_size=48), it takes about 21 hours per 10 epochs with 8 GTX 1080 Ti GPUs. We trained the model on a GPU cluster, so it should be faster on a dedicated machine. Most of the training time is spent on data I/O, so if your machine has enough memory, you can preload the data into memory, which may speed up training. And if you find other ways to speed up data I/O in your follow-up study, you are welcome to make suggestions.
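
To illustrate the preloading idea, here is a minimal sketch of a PyTorch dataset that reads all features into RAM up front so `__getitem__` never touches disk. The file layout (one `.npy` file per video) and the `VideoFeatureDataset` class are assumptions for illustration, not RMN's actual data loader.

```python
import os
import numpy as np
import torch
from torch.utils.data import Dataset

class VideoFeatureDataset(Dataset):
    """Hypothetical dataset that optionally caches all video features in memory."""

    def __init__(self, feature_dir, video_ids, preload=True):
        self.feature_dir = feature_dir
        self.video_ids = video_ids
        self.preload = preload
        self.cache = {}
        if preload:
            # Read every feature file once at construction time,
            # trading RAM for per-batch disk I/O during training.
            for vid in video_ids:
                path = os.path.join(feature_dir, f"{vid}.npy")
                self.cache[vid] = np.load(path)

    def __len__(self):
        return len(self.video_ids)

    def __getitem__(self, idx):
        vid = self.video_ids[idx]
        if self.preload:
            feats = self.cache[vid]
        else:
            feats = np.load(os.path.join(self.feature_dir, f"{vid}.npy"))
        return torch.from_numpy(feats).float()
```

Whether this helps depends on the feature size fitting in memory; otherwise an OS-level page cache or faster local storage gives a similar effect.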