Closed · b1ueshad0w closed this issue 6 years ago
Referring to the readme file in the tools directory:
Average batch time (avgBatch): the average training time for one batch of data. Note that if there is more than one GPU, the batch size is n*args.batchSize, since the input argument args.batchSize is per GPU core. If a new tool doesn't measure batch time directly, you need to convert its metrics into seconds/batch here.
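A minimal sketch of that conversion, assuming the tool reports throughput in images/sec (the function and parameter names here are illustrative, not from the repository):

```python
def seconds_per_batch(images_per_sec, batch_size_per_gpu, num_gpus):
    """Convert an images/sec throughput figure into seconds per batch.

    The effective batch is num_gpus * batch_size_per_gpu, because the
    batch-size argument is per GPU core (as the readme note describes).
    """
    effective_batch = num_gpus * batch_size_per_gpu
    return effective_batch / images_per_sec

# Example: 512 images/sec at batch size 64 on 2 GPUs
# -> one effective batch is 128 images
print(seconds_per_batch(512, 64, 2))  # 0.25
```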
@heehyuncho95 Thanks for your explanation!
I ran the AlexNet + TensorFlow benchmark on both a single GPU and multiple GPUs with the same arguments:
average_batch_time of the single-GPU case: 0.0214789186205
average_batch_time of the multi-GPU case: 0.0523074957789

The single GPU took less than half the time per batch compared to two GPUs. This seems impossible; there must be something wrong. Can anyone help?
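One way to sanity-check these numbers: if the readme note applies, the two-GPU run processes twice as many images per batch, so the per-image times may be the fairer comparison. A sketch using the figures above (the 2x batch-scaling assumption is mine; the per-GPU batch size cancels out of the comparison):

```python
# Batch times from the benchmark run (seconds per batch)
single_gpu_batch_time = 0.0214789186205  # 1 GPU,  effective batch = B
multi_gpu_batch_time = 0.0523074957789   # 2 GPUs, effective batch = 2*B (assumed)

# Seconds per image, up to the common factor 1/B, which cancels
single_per_image = single_gpu_batch_time / 1
multi_per_image = multi_gpu_batch_time / 2

print(single_per_image)  # ~0.0215 s per image (times B)
print(multi_per_image)   # ~0.0262 s per image (times B)
```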