argman / EAST

A tensorflow implementation of EAST text detector
GNU General Public License v3.0

Can I detect the rotation of the text? If so, how do I do that? #368

Open joaossmacedo opened 3 years ago

joaossmacedo commented 3 years ago

Useful:

HimanchalChandra commented 3 years ago

@joaossmacedo Can you please share your GitHub repo for the multiple-orientations model? I need it for my project.

joaossmacedo commented 3 years ago

I've found a solution that works, but it's sub-optimal.

First of all, there is a limitation: it only works if the angle is 0º, 90º, 180º or 270º. Secondly, it increases the processing time.

The idea

  1. Detect boxes;
  2. Crop the image according to a box;
  3. Check if height > width; if it is, rotate the crop 90º to make the text horizontal;
  4. Run the cropped image through the recognition model;
  5. If the score is low, rotate the image 180º;
  6. Run through the recognition model;
  7. Compare the results and use the better one.

The code

import cv2

# detect(), crop_image() and recognize() are helpers from our own project;
# detect() returns the boxes found by the EAST detection model.
boxes = detect(detection_model, img, 0.7)

# Convert once to grayscale before cropping.
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for box in boxes:
    cropped_image = crop_image(img, box)

    if cropped_image.size == 0:
        continue

    # Taller than wide: assume vertical text and make it horizontal.
    if cropped_image.shape[0] > cropped_image.shape[1]:
        cropped_image = cv2.rotate(cropped_image, cv2.ROTATE_90_CLOCKWISE)

    predict, probability = recognize(recognition_model, cropped_image)

    # A low score suggests the text may be upside down; try 180º.
    if probability < 0.8:
        cropped_image = cv2.rotate(cropped_image, cv2.ROTATE_180)

        new_predict, new_probability = recognize(recognition_model, cropped_image)

        if new_probability > probability:
            predict = new_predict
            probability = new_probability

Alternative idea

One idea that we explored but didn't end up using was to run the crop through Tesseract to get its angle. We decided not to use this method because we would need to add Tesseract to our project and, in our case, it was faster to recognize than to detect.
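
For anyone who wants to try that route, a minimal sketch using pytesseract's orientation and script detection (OSD); this assumes Tesseract itself is installed, and note that OSD tends to need a reasonable amount of text in the crop to work reliably:

import re
import pytesseract

def tesseract_angle(cropped_image):
    # image_to_osd() reports, among other things, the rotation
    # (0/90/180/270) needed to make the text upright.
    osd = pytesseract.image_to_osd(cropped_image)
    return int(re.search(r"Rotate: (\d+)", osd).group(1))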

mohammedayub44 commented 3 years ago

@joaossmacedo does the EAST model detect text regions of any orientation out of the box, or did you have to make some changes to do it?

Currently, I have trained it on some synthetic images for about 15k steps and it doesn't seem to detect in all orientations. I started from the ResNet checkpoint. I don't think the amount of data is a problem, since the training set is about 800,000 samples. Do I just keep training for more steps?

joaossmacedo commented 3 years ago

In the project I used EAST on, the data was also synthetic, but it only had text at 0º, 90º, 180º and 270º. The model was able to detect text at all of those orientations.

I didn't start from any previous checkpoint, so I can't comment on that specifically. However, I believe it should be able to detect text in all orientations, as evidenced by the images in the README.

I'm sorry I couldn't be more helpful.

mohammedayub44 commented 3 years ago

@joaossmacedo Thanks, that makes sense. I'll poke around a bit more.

mohammedayub44 commented 3 years ago

Here are a couple of results using the eval.py script on my trained model. It's a very basic run using all defaults (no changes made). The bounding boxes look like they are not rotated. My guess is that eval.py does not make use of the angle information to rotate and plot the bounding boxes?
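
One way to check that guess would be to bypass eval.py's plotting and draw the raw detections yourself; if the boxes come back as 4-point quads, cv2.polylines will show any rotation directly (boxes and img here are hypothetical, as in the earlier snippet):

import cv2
import numpy as np

# Draw each detected quad as-is; rotated text should produce visibly
# tilted quadrilaterals if the angle information is being used.
for box in boxes:
    pts = np.asarray(box, dtype=np.int32).reshape(-1, 1, 2)
    cv2.polylines(img, [pts], isClosed=True, color=(0, 255, 0), thickness=2)
cv2.imwrite('debug_boxes.jpg', img)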

The pics are from the ICDAR15 test set:

[two result images attached]

SabraHashemi commented 3 years ago

Hi, I also checked rotation; it doesn't have rotation correction in text detection. I think that to add this option you would need a box for every character. Also, for some text on a red background it didn't work well.