PaddlePaddle / models

Officially maintained and supported by PaddlePaddle, covering CV, NLP, Speech, Recommendation, Time Series, large models, and more.

PaddleOCR 2.0.2 is slower than 1.1 using CUDA11.2 #5210

Open indrasweb opened 3 years ago

indrasweb commented 3 years ago

I finally managed to get the latest version running without segfaults by doing the following:

2) Update to CUDA 11.2: https://developer.nvidia.com/cuda-downloads
3) sudo docker run --name ppocr --gpus all -v $PWD:/paddle --shm-size=32G --network=host -it paddlepaddle/paddle:2.0.0rc1-gpu-cuda11.0-cudnn8 /bin/bash
4) python3.8 -m pip install paddleocr (no need to install paddlepaddle-gpu because the Docker image already includes it)
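Before timing anything, it may help to confirm that the GPU build shipped in the container is actually in use. A minimal sanity check, assuming the paddlepaddle-gpu build bundled in the image, run inside the container:

# quick sanity check that PaddlePaddle sees the GPU (run inside the container)
import paddle
print(paddle.__version__)              # should report the 2.0.x build from the image
print(paddle.is_compiled_with_cuda())  # True for the GPU wheel
paddle.utils.run_check()               # runs a small program on the device and reports the result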

However, inference speed is about 3x slower than using paddleocr==1.1 + paddlepaddle-gpu 1.8.5.post107
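For reference, a rough way to compare the two versions is to time repeated calls on the same image after a warm-up run. A minimal sketch against the 2.x PaddleOCR API, assuming a local test image ./test.jpg (a placeholder path):

# rough per-image inference timing for paddleocr 2.x (./test.jpg is a placeholder image)
import time
from paddleocr import PaddleOCR

ocr = PaddleOCR(use_angle_cls=True, lang='en', use_gpu=True)
ocr.ocr('./test.jpg')  # warm-up so model loading is not included in the timing

start = time.time()
for _ in range(20):
    ocr.ocr('./test.jpg')
print('avg seconds per image:', (time.time() - start) / 20)

Running the same loop against paddleocr==1.1 on the old environment would give the baseline behind the ~3x figure.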

Also, how can I install this outside of Docker? It seems that the version of paddlepaddle-gpu installed in the paddle:2.0.0rc1-gpu-cuda11.0-cudnn8 image is not available on PyPI. I've lost the output now, but I think it was a version ending in post110.

Xreki commented 3 years ago

There are some performance issues on CUDA 11.0. We are fixing them.

LDOUBLEV commented 3 years ago

I guess your environment is cuDNN 8 + CUDA 11.2. We found a performance degradation when running GPU inference with cuDNN 8. It is recommended to use cuDNN 7 + CUDA 10.
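One way to check which CUDA and cuDNN versions the installed paddle wheel was actually built against (a small sketch, assuming a paddlepaddle 2.x install):

# print the CUDA / cuDNN versions the paddle wheel was compiled with
import paddle
print(paddle.version.cuda())   # e.g. '10.2' or '11.0'
print(paddle.version.cudnn())  # e.g. '7.6.5' or '8.0.4'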

Also, how could I install outside of docker?

# reference https://www.paddlepaddle.org.cn/
python -m pip install paddlepaddle-gpu==2.0.0.post101 -f https://paddlepaddle.org.cn/whl/stable.html

BTW, the paddle 2.0rc1 version does not support CUDA 11.2. If you have more questions about PaddleOCR, you can open an issue in the PaddleOCR repository.