Bartzi / see

Code for the AAAI 2018 publication "SEE: Towards Semi-Supervised End-to-End Scene Text Recognition"
GNU General Public License v3.0

Is a GPU necessary? #34

Closed eler closed 6 years ago

eler commented 6 years ago

It says “The training can be run on GPU or CPU”, but also that “the code does currently not work on CPU”. I'm confused about whether a GPU is necessary. (I don't have any Nvidia GPU.)

Bartzi commented 6 years ago

The training can indeed be run on a CPU. But I'm not 100% sure whether it works out of the box, as there might be some code that does not work when run only on a CPU.

But I do not recommend running on a CPU, as this will take a very, very long time.
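For reference, this is roughly how device placement works in Chainer (which SEE is built on); a minimal hedged sketch, not the actual SEE training code, with `model` standing in for the real network:

```python
# Minimal Chainer sketch of CPU vs. GPU placement (illustrative, not SEE code).
# In Chainer, device id -1 means CPU; ids >= 0 refer to CUDA devices.
import chainer
import chainer.links as L

model = L.Linear(10, 2)  # stand-in for the actual SEE network

gpu_id = 0 if chainer.cuda.available else -1
if gpu_id >= 0:
    chainer.cuda.get_device_from_id(gpu_id).use()
    model.to_gpu(gpu_id)   # parameters become CuPy arrays on the GPU
else:
    model.to_cpu()         # parameters stay as NumPy arrays; much slower
```

Note that the GPU path additionally requires CuPy (and a CUDA-capable Nvidia GPU) to be installed.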

eler commented 6 years ago

I got it! Thanks for your advice.

FelixSchwarz commented 6 years ago

Sorry to "hijack" this ticket but I have a follow-up question: The SEE paper states that you used a single machine with 4 TITAN X cards. How long (roughly) did you need to train for your SVHN/FSNS results? Do you remember how much RAM (CPU/GPU) you needed during the training runs?

I'm especially curious because Ray Smith (2016/2017) used "40 parallel training workers" (though admittedly there is no reference to specific hardware afaik).

Bartzi commented 6 years ago

Although we had a system with 4 GPUs available, we mostly used only two at the same time. With two GPUs, training on FSNS took around 23 hours for one run, and we did around 2-3 runs to get decent results. During training we used a batch size of 20 per GPU, leading to an overall batch size of 40. This filled each GPU quite nicely, with around 11 GB of RAM usage.
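To make the batch size arithmetic concrete, below is a hedged sketch of Chainer-style data-parallel training across two GPUs with 20 samples per GPU (40 overall); the model, dataset, and optimizer are toy placeholders, not the actual SEE training setup:

```python
# Illustrative Chainer data-parallel sketch: per-GPU batch of 20 on two GPUs
# gives an effective batch size of 40. All names here are placeholders.
import numpy as np
import chainer
import chainer.links as L
from chainer import iterators, optimizers, training

# toy stand-ins for the real network and dataset
model = L.Classifier(L.Linear(32, 10))
data = chainer.datasets.TupleDataset(
    np.random.rand(400, 32).astype(np.float32),
    np.random.randint(0, 10, size=400).astype(np.int32),
)

optimizer = optimizers.Adam()
optimizer.setup(model)

batch_per_gpu = 20                    # per-GPU batch size
devices = {'main': 0, 'second': 1}    # two GPUs -> effective batch of 40
train_iter = iterators.SerialIterator(data, batch_per_gpu * len(devices))

# ParallelUpdater splits each batch of 40 across the two listed devices
updater = training.updaters.ParallelUpdater(train_iter, optimizer, devices=devices)
trainer = training.Trainer(updater, (1, 'epoch'), out='result')
trainer.run()
```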

For our SVHN trainings we needed less time, as the network was not as complicated and the input was smaller. This also meant that we could use a larger batch size for those experiments.

I hope that gives you an impression :smile: