Harryoung / Test


Use paddleocr in GPU #29

Open Harryoung opened 9 months ago

Harryoung commented 9 months ago

We provide an issue bot to help answer questions. Do you want it to answer? (Please write yes/no): yes

Please note: the issue bot is based on large language models, and its answers are for reference only. Currently the issue bot cannot parse images in issues.

Please describe your issue here: How do I use the GPU in PaddleOCR? I have set the flag use_gpu = True in PaddleOCR, but it is still using the CPU. How can I fix this?

llm-issue-bot[bot] commented 9 months ago

Issue Response

To use the GPU in PaddleOCR, pass --use_gpu true on the command line (the default is false). Make sure you have a compatible GPU device and the necessary dependencies installed for GPU acceleration. Here is an example command that enables GPU usage:

paddleocr --image_dir ./imgs_en/img_12.jpg --use_angle_cls true --lang en --use_gpu true

If you have set the use_gpu flag to True in PaddleOCR but it is still using the CPU, make sure your GPU is properly configured and that the installed PaddlePaddle build actually supports CUDA.
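A common cause of this symptom is that the CPU-only `paddlepaddle` wheel is installed rather than `paddlepaddle-gpu`, in which case `use_gpu=True` is silently ignored. The sketch below (assuming paddleocr 2.x, where `PaddleOCR` still accepts `use_gpu`; the image path is illustrative) checks whether the installed Paddle build was compiled with CUDA before requesting the GPU:

```python
# Sketch: verify the installed PaddlePaddle wheel supports CUDA before
# asking PaddleOCR for a GPU, and fall back to CPU otherwise.

def select_device(use_gpu: bool, cuda_available: bool) -> str:
    """Pick the device string: GPU only if requested AND available."""
    return "gpu:0" if (use_gpu and cuda_available) else "cpu"

try:
    import paddle
    from paddleocr import PaddleOCR

    cuda_ok = paddle.is_compiled_with_cuda()
    device = select_device(use_gpu=True, cuda_available=cuda_ok)
    print(f"CUDA build: {cuda_ok} -> running on {device}")

    # If cuda_ok is False here, the fix is to install the GPU wheel,
    # e.g. `pip install paddlepaddle-gpu` matching your CUDA version.
    ocr = PaddleOCR(use_angle_cls=True, lang="en", use_gpu=(device != "cpu"))
    result = ocr.ocr("./imgs_en/img_12.jpg", cls=True)
except ImportError:
    print("paddlepaddle / paddleocr not installed in this environment")
```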

Relevant Sources

Harryoung commented 9 months ago

@llm-issue-bot How can I check my GPU memory usage?

llm-issue-bot[bot] commented 9 months ago

Thanks for mentioning me! The answer to your question is:

To check GPU memory usage, you can use NVIDIA's command-line tool nvidia-smi, which reports detailed information about each GPU, including memory usage. Simply run the following command in a terminal:

nvidia-smi

This command lists every GPU in the system along with its memory usage, so you can see both per-GPU and total memory consumption.
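For use in scripts, nvidia-smi also has a machine-readable query mode. The Python sketch below wraps it (the `--query-gpu` and `--format=csv` flags are standard nvidia-smi options; the parsing assumes its default CSV layout):

```python
import subprocess

def query_gpu_memory() -> str:
    """Run nvidia-smi in CSV query mode; returns text like:
    'memory.used [MiB], memory.total [MiB]\\n1234 MiB, 24576 MiB'"""
    return subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total", "--format=csv"],
        text=True,
    )

def parse_used_mib(csv_text: str) -> list:
    """Extract the used-memory column (MiB) for each GPU."""
    rows = csv_text.strip().splitlines()[1:]  # skip the CSV header row
    return [int(r.split(",")[0].split()[0]) for r in rows]

if __name__ == "__main__":
    try:
        print("Used MiB per GPU:", parse_used_mib(query_gpu_memory()))
    except (FileNotFoundError, subprocess.CalledProcessError):
        print("nvidia-smi unavailable; is the NVIDIA driver installed?")
```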

Relevant Sources:

https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.7/test_tipc/docs/benchmark_train.md

How to use

My answer may not be accurate. I suggest you refer to the relevant sources to make a comprehensive judgment. To ask me more questions, add a comment using the following command: @llm-issue-bot \<QUESTION>

Harryoung commented 9 months ago

@llm-issue-bot My GPU has little memory; how can I save memory when training my model?

llm-issue-bot[bot] commented 9 months ago

Thanks for mentioning me! The answer to your question is:

To save memory when training a model on a GPU with limited memory, the main lever is reducing the batch size. When you do, also scale the learning rate accordingly: a common rule is linear scaling, i.e. multiply the learning rate by the same factor applied to the batch size (the same rule used when scaling up to multiple GPUs, just in reverse). These adjustments reduce memory usage while preserving training stability.
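As a concrete illustration of the linear-scaling rule, here is a small sketch; the base values (learning rate 0.001 at batch size 128) are made-up assumptions for the example, not values from any PaddleOCR config:

```python
# Linear-scaling rule: if the batch size shrinks by a factor k to fit
# a small GPU, shrink the learning rate by the same factor k.

def scaled_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Rescale the learning rate linearly with the batch size."""
    return base_lr * new_batch / base_batch

# e.g. dropping from batch size 128 to 32 quarters the learning rate
lr = scaled_lr(base_lr=0.001, base_batch=128, new_batch=32)
print(f"adjusted learning rate: {lr}")
```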

Relevant Sources:

https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.7/ppstructure/layout/README.md, https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.7/doc/doc_en/distributed_training_en.md, https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.7/doc/doc_en/finetune_en.md

How to use

My answer may not be accurate. I suggest you refer to the relevant sources to make a comprehensive judgment. To ask me more questions, add a comment using the following command: @llm-issue-bot \<QUESTION>