NervanaSystems / deepspeech

DeepSpeech neon implementation
Apache License 2.0

How much faster is the GPU implementation? #45

Closed · beatthem closed this issue 7 years ago

beatthem commented 7 years ago

I wanted to ask: how much faster is training the model on a GPU? For example, a Core i5 (4 cores, 7200 bogomips) vs. a GT 1050 or a GTX 1080? Also, is most of the training time spent computing the cost function?

Neuroschemata commented 7 years ago

The cost function accounts for less than 10% of the total computation; the bulk of it is in the recurrent layers. Our model was trained on GPU. We did not benchmark training on CPU, so we cannot tell you exactly how much faster GPU is relative to CPU. However, we can tell you that the iteration time on CPU is much slower than on GPU.
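If you want to check this breakdown on your own hardware, a minimal sketch like the one below times each stage of a training iteration separately. This is not part of the neon codebase; `run_recurrent_layers` and `compute_ctc_cost` are hypothetical stand-ins for the model's recurrent forward/backward pass and the cost computation.

```python
import time


def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start


def profile_iteration(run_recurrent_layers, compute_ctc_cost, batch):
    # Time the recurrent layers (reported above as the bulk of the work).
    activations, t_rnn = timed(run_recurrent_layers, batch)
    # Time the cost function (reported above as <10% of total compute).
    _, t_cost = timed(compute_ctc_cost, activations)

    total = t_rnn + t_cost
    print(f"recurrent layers: {t_rnn:.3f}s ({100 * t_rnn / total:.1f}%)")
    print(f"cost function:    {t_cost:.3f}s ({100 * t_cost / total:.1f}%)")
```

Running the same measurement on CPU and GPU builds would also give you the per-iteration speedup for your specific hardware, since no published CPU benchmark exists for this model.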