u39kun / deep-learning-benchmark

Deep Learning Benchmark for comparing the performance of DL frameworks, GPUs, and single vs half precision

Minibatch size when going to mixed precision #1

Open dimitry12 opened 6 years ago

dimitry12 commented 6 years ago

Thank you for the excellent data points!

Can you estimate the potential increase in minibatch size when going to mixed precision?

Nvidia claims memory usage should go down, but isn't specific about how much.

In my experiments with a Titan V (using TensorFlow and a home-grown implementation of the Transformer model), I can only increase the batch size by about 10%, which is much less than I expected.
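For context, here is a rough sketch of the kind of probe I mean. It is written against the current tf.keras mixed-precision API rather than my actual code, and the model, sizes, and doubling search are just placeholders:

```python
# Illustrative sketch only: enable mixed precision in TensorFlow and probe
# the largest batch size that fits on the GPU. The model below is a
# placeholder, not the Transformer used in my experiments.
import tensorflow as tf

tf.keras.mixed_precision.set_global_policy("mixed_float16")

def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Dense(4096, activation="relu", input_shape=(1024,)),
        tf.keras.layers.Dense(4096, activation="relu"),
        # Keep the output layer in float32 for numerical stability.
        tf.keras.layers.Dense(1000, activation="softmax", dtype="float32"),
    ])

def fits_in_memory(batch_size):
    """Return True if one training step runs without exhausting GPU memory."""
    try:
        model = build_model()
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
        x = tf.random.normal((batch_size, 1024))
        y = tf.random.uniform((batch_size,), maxval=1000, dtype=tf.int32)
        model.train_on_batch(x, y)
        return True
    except tf.errors.ResourceExhaustedError:
        return False

# Simple doubling search for the largest workable batch size.
batch = 64
while fits_in_memory(batch * 2):
    batch *= 2
print("Largest batch size that fit:", batch)
```

Running the same probe with the float32 policy and comparing the two results is how I arrived at the ~10% figure above.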

Thanks!