According to this article, it is possible to significantly improve LSTM performance using CUDA GPUs. Since Tesseract 4.x uses a new LSTM-based core, is it true that it should perform better with CUDA-powered GPUs?
If so, it would be helpful to have a description (or at least a note) about this in the documentation.