zhaone opened this issue 3 years ago
@zhaone Hi, have you found the reason? I just tried to train the network with the default settings, and I also found training to be roughly twice as slow as described in the paper: it took 14 hours for 20 epochs (the paper reports 11.5 hours for 36 epochs).
Here's my environment: 4 Titan RTX GPUs, batch size 128 (4 x 32), distributed training using Horovod.
Btw, one more thing I noticed: my log shows one epoch takes over 2440 s, while it is ~900 s in the provided log file, and in #2 they report ~1200 s (4 x RTX 2080 Ti). The evaluation results are similar, though.
Here's my training log:
Epoch 20.000, lr 0.00100, time 2440.21
loss 0.5037 0.2001 0.3036, ade1 1.6102, fde1 3.5928, ade 0.7662, fde 1.1754
Provided log file:
Epoch 20.000, lr 0.00100, time 872.52
loss 0.5018 0.2001 0.3016, ade1 1.5967, fde1 3.5560, ade 0.7638, fde 1.1651
No, I have not solved this problem yet, but your speed is not as ridiculously slow as mine (which is about 3 times slower than yours). Have you checked where the speed bottleneck is, for example I/O?
You can run `watch nvidia-smi` or `watch gpustat` to see the GPU utilization while the code is running; the utilization is usually above 80%. Use `htop` to check the CPU utilization and make sure you have sufficient CPU resources.

@MasterIzumi I have the same question. And when I run `free -h`, I see that the memory is exhausted. Since I have 128 GB of memory with 4 Titan XP GPUs, I think the code may be using too much memory?
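To put numbers on the I/O-vs-compute question, here is a minimal, hypothetical profiling sketch (not code from this repo; it assumes a plain `(input, target)` batch format and a single-GPU loop, and `profile_epoch` is just a placeholder name) that splits one epoch's wall time into DataLoader wait and GPU compute:

```python
import time
import torch

def profile_epoch(model, loader, loss_fn, optimizer, device="cuda"):
    """Split one epoch's wall time into DataLoader wait vs. GPU compute."""
    data_time, compute_time = 0.0, 0.0
    end = time.time()
    for x, y in loader:
        t0 = time.time()
        data_time += t0 - end                  # time spent waiting on the DataLoader
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        torch.cuda.synchronize()               # flush async CUDA work before timing
        end = time.time()
        compute_time += end - t0               # H2D copy + forward + backward + step
    total = data_time + compute_time
    print(f"data: {data_time:.1f}s ({100 * data_time / total:.0f}%), "
          f"compute: {compute_time:.1f}s ({100 * compute_time / total:.0f}%)")
```

If the data share dominates, the usual fixes are more DataLoader workers, `pin_memory=True`, or faster preprocessing, rather than anything in the model itself.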
Hi, I recently wanted to reproduce your results. I can get the metrics you described in the paper, but the training takes much longer (almost 3 days) than described in the paper (less than 12 hours).
Environment:
I changed `horovod` to PyTorch `DDP`, since the `horovod` framework is really hard to set up (even with the official `horovod` docker I still got some errors I couldn't resolve).
Did I do something wrong? I'm sure that I use `DDP` correctly, and I'm also sure that the bottleneck of the training speed is the optimization (not I/O or something else). Has anyone else run into the same problem?
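For comparison, here is a minimal sketch of the usual `torchrun` + `DistributedDataParallel` setup, with a dummy dataset and model standing in for the repo's real ones (this is not the project's actual training script). The pieces that most often cost speed when porting from Horovod are the `DistributedSampler`, `set_epoch`, and using the per-GPU rather than the global batch size:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def main():
    # launched with: torchrun --nproc_per_node=4 train.py
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # dummy data standing in for the real dataset
    dataset = TensorDataset(torch.randn(1024, 16), torch.randn(1024, 2))
    sampler = DistributedSampler(dataset)               # shards data across ranks
    loader = DataLoader(dataset, batch_size=32,         # 32 per GPU -> 128 total on 4 GPUs
                        sampler=sampler, num_workers=4, pin_memory=True)

    model = torch.nn.Linear(16, 2).cuda(local_rank)     # dummy model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    for epoch in range(2):
        sampler.set_epoch(epoch)                        # reshuffle differently each epoch
        for x, y in loader:
            x = x.cuda(local_rank, non_blocking=True)
            y = y.cuda(local_rank, non_blocking=True)
            optimizer.zero_grad()
            loss = torch.nn.functional.mse_loss(model(x), y)
            loss.backward()                             # DDP all-reduces gradients here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with `torchrun --nproc_per_node=4 train.py`, each rank gets `batch_size=32`, i.e. 128 in total across 4 GPUs, matching the batch size discussed above.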