RapidAI / RapidOCR

📄 Awesome OCR toolkits for multiple programming languages, based on ONNXRuntime, OpenVINO and PaddlePaddle.
https://rapidai.github.io/RapidOCRDocs
Apache License 2.0

RapidOCR Error - Leaked Semaphore Objects & OOM Killer #231

Open BennisonDevadoss opened 1 month ago

BennisonDevadoss commented 1 month ago

Problem Description:

While processing a large number of images (approximately 1000) using RapidOCR, I encountered the following errors midway through the process:

  1. Leaked Semaphore Objects: "There appear to be 1 leaked semaphore object(s) to clean up at shutdown."
  2. Process Killed by OOM Killer: "The process of this unit has been killed by the OOM killer."

System Information:

Reproducible Code:

from typing import Sequence, Union, Iterable
import numpy as np

def extract_from_images_with_rapidocr(
    images: Sequence[Union[Iterable[np.ndarray], bytes]],
) -> str:
    try:
        from rapidocr_onnxruntime import RapidOCR
    except ImportError:
        raise ImportError(
            "`rapidocr-onnxruntime` package not found, please install it with "
            "`pip install rapidocr-onnxruntime`"
        )
    ocr = RapidOCR()
    text = ""
    for img in images:
        result, _ = ocr(img)
        if result:
            # each result entry is (box, text, score); keep only the text
            lines = [entry[1] for entry in result]
            text += "\n".join(lines)
    return text

Research & Findings:

These errors seem to be related to memory leaks during batch image processing. I am uncertain about how to resolve these issues within RapidOCR, especially when handling large numbers of images.

Additional Questions:

  1. Are there any memory management techniques or best practices for handling large image batches in RapidOCR?
  2. How can I optimize memory usage to prevent OOM killer termination?
  3. Is there a way to monitor memory consumption or manage semaphore objects during the process?
  4. Would changing the version of RapidOCR (upgrading/downgrading) help resolve this memory-related issue?

Any guidance or solutions would be greatly appreciated!
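On question 3, one stdlib-only way to watch memory between images is `resource.getrusage` (Unix only; `psutil` would be a cross-platform alternative). A minimal sketch, not part of RapidOCR:

```python
import resource

def peak_rss_mb() -> float:
    """Peak resident set size of this process, in MiB.

    Note: ru_maxrss is reported in kilobytes on Linux but in bytes on
    macOS; this sketch assumes Linux.
    """
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

# Usage idea: log peak_rss_mb() every N images in the OCR loop to see
# whether memory grows steadily (a leak) or spikes on specific images.
```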

SWHL commented 1 month ago

My guess is that some of the 1000 images are very large, so the memory requested while recognizing them exceeds the limit. For now, I recommend checking the images you send for recognition for particularly large ones, such as 4000x7000, and resizing them in advance before passing them to OCR.

Later, I will add this logic in the code to control the memory from exceeding the limit.
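To illustrate the pre-resize advice above, here is a minimal sketch of a size cap that preserves aspect ratio. The helper name and the 2000px limit are assumptions, not RapidOCR API; the capped size would then be applied with something like `cv2.resize`:

```python
def capped_size(width: int, height: int, max_side: int = 2000) -> tuple:
    """Hypothetical helper: shrink oversized pages (e.g. 4000x7000)
    so the longest side is at most max_side, preserving aspect ratio."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height  # already within the limit
    scale = max_side / longest
    # round, but never collapse a dimension to 0
    return max(1, round(width * scale)), max(1, round(height * scale))
```

For example, a 4000x7000 scan would be capped to 1143x2000 before OCR.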

BennisonDevadoss commented 1 month ago

@SWHL, Thank you for the response! I have a couple of follow-up questions based on your suggestions:

  1. What would be the recommended target resolution for images to prevent memory overload during OCR processing? Is there an optimal balance between image size and OCR accuracy?
  2. Could you share more details about the memory control logic you plan to add? Will this logic automatically resize or manage large images, and will it be included in a future release of RapidOCR?
SWHL commented 1 month ago

Both points are already under development; please refer to the develop branch. They will be included in the next release soon.

SWHL commented 1 month ago

You can try again with rapidocr_onnxruntime==1.3.25.

BennisonDevadoss commented 2 weeks ago

@SWHL, Thanks for the update. I tried version 1.3.25, but it does not work for me; I am still facing the same issue.

SWHL commented 2 weeks ago

Can you confirm whether any specific images among the 1000 consistently trigger the OOM issue? If it can be stably reproduced, please provide that image.

BennisonDevadoss commented 1 day ago

@SWHL, I believe this issue might be related to image dimensions. In my experience, the OOM killer was triggered by an image that was 1px wide and 602px tall. To clarify, I'd like to understand what the minimum and maximum supported width and height are.

Additionally, I have a suggestion for improving the plugin: it would be helpful to implement an internal image size check. If an image’s dimensions are outside the required range, the plugin could resize it. If resizing isn’t possible, the image could be skipped during the OCR process.

This approach could be particularly beneficial when the plugin is integrated with others, such as LangChain. For instance, LangChain’s PDF loader (when extract_image is set to true) uses RapidOCR internally for OCR. Since we cannot predict the dimensions of images embedded in a PDF, having a dimension check before processing each image would make the workflow more robust.
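The suggested pre-check could look something like the sketch below. The function name and the bounds are assumptions for illustration (the actual safe limits would need to come from the RapidOCR maintainers):

```python
MIN_SIDE = 3     # assumption: sides smaller than this carry no legible text
MAX_SIDE = 4000  # assumption: cap suggested by the maintainer's 4000x7000 example

def ocr_safe(width: int, height: int) -> bool:
    """Hypothetical pre-check: reject degenerate images such as a
    1x602 strip before they reach the OCR pipeline."""
    return (MIN_SIDE <= width <= MAX_SIDE
            and MIN_SIDE <= height <= MAX_SIDE)

# Usage idea: in a PDF-extraction loop, skip (or resize) any embedded
# image for which ocr_safe(w, h) is False instead of calling ocr(img).
```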

Additionally, I have attached the sample 1px-wide image here: page_313_image_1141

SWHL commented 1 day ago

Thanks for the suggestion. There is definitely something wrong with the image resizing here. The current image processing mainly goes through the following functions: https://github.com/RapidAI/RapidOCR/blob/62bc4871a0c5ff096fdf600cce1ee53b672e4d83/python/rapidocr_onnxruntime/main.py#L129-L140

The original image is 1px wide and 602px tall. After preprocessing, the image shape is height=18048px, width=32px.

Enter the following function: https://github.com/RapidAI/RapidOCR/blob/62bc4871a0c5ff096fdf600cce1ee53b672e4d83/python/rapidocr_onnxruntime/main.py#L142-L159

Before it enters the text detection model, the image is 32px wide and 18048px tall, which triggers the OOM problem.
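A back-of-envelope sketch of why such a strip explodes: any aspect-ratio-preserving resize that brings the 1px side up toward the model's expected short side (~32px) multiplies the long side by the same factor. This toy function is not RapidOCR's actual preprocessing (which reports 32x18048, slightly less than the naive result below due to its own resize logic), but it shows the order of magnitude:

```python
def scaled_shape(width: int, height: int, min_side: int = 32) -> tuple:
    """Illustrative only: scale the short side of a degenerate image up
    to min_side, inflating the long side by the same factor."""
    scale = min_side / min(width, height)
    return round(width * scale), round(height * scale)

# A 1x602 strip scaled so its short side is 32px becomes ~32x19264 --
# an enormous tensor from a tiny, text-free input.
```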

I'm thinking about how to avoid this problem, or how to filter out this kind of image before it is sent to OCR. Suggestions are welcome.