
[RFC0014] Automated Image-cropping Pipeline #30

Open Zakongjampa opened 2 years ago

Zakongjampa commented 2 years ago
Click here for Docs

Table of Contents

- [Housekeeping](#housekeeping)
- [Named Concepts](#named-concepts)
- [Summary](#summary)
- [Reference-Level Explanation](#reference-level-explanation)
- [Alternatives](#alternatives)
  * [Rationale](#rationale)
- [Drawbacks](#drawbacks)
- [Useful References](#useful-references)
- [Unresolved questions](#unresolved-questions)
- [Parts of the system affected](#parts-of-the-system-affected)
- [Future possibilities](#future-possibilities)
- [Infrastructure](#infrastructure)
- [Testing](#testing)
- [Documentation](#documentation)
- [Version History](#version-history)
- [Recordings](#recordings)
- [Work Phases](#work-phases)

Housekeeping

RFC0014 Automated Image-cropping Pipeline

Named Concepts

- prodigy: prodi.gy
- Image: Images will be accessed from the AWS S3 server with the bucket name `image-processing.bdrc.io`. The system or model should work on any type of image, regardless of whether it is in pecha format or modern publication format.
- Pecha page: Refers to the traditional Tibetan book format in landscape orientation. In the context of this project, several Pecha page sides are captured in a single image.

Summary

BDRC has many images that contain several Pecha pages. We need to automate the image-cropping process with a custom computer vision model. This project will use Prodigy as a human-in-the-loop pipeline to create an initial training dataset, train a model, and iteratively improve it.

Reference-Level Explanation

**System Diagram:**

![Pasted image 20221114095616](https://user-images.githubusercontent.com/25195134/201636372-212d5731-1155-4826-8984-27c4baabf073.png)

```shell
prodigy image.manual images_dataset ./images --label PECHA
```

In this command line, we use *the command-line interface* with the built-in *image.manual recipe* on *images_dataset* and manually draw the boundary of each image in the dataset.

## Preparing the training dataset

Here we manually draw boundaries on each image for the training dataset. We can make this go faster by deploying Prodigy to the web using AWS, so that more people can take part in drawing the boundary around the PECHA at the same time.

## Check whether we have enough training data to train the model

```shell
prodigy train-curve --ner ds_GOLD
```

This prints the accuracy figures and the accuracy improvements gained with more data. The recipe takes pretty much the same arguments as `train`.

## Train the model

*You can use the training recipe to train within Prodigy, or outside it using spaCy or other NLP packages.*

```shell
prodigy train --ner ds_GOLD ./tmp_model --eval-split 0.25
```

- *--ner* -> tells Prodigy that you are doing NER
- *ds_GOLD* -> name of the local dataset that holds your manual annotations
- *./tmp_model* -> path where Prodigy will create your model
- *--eval-split* -> the train/test split ratio you want Prodigy to use when splitting the annotations in your dataset

## Human in the Loop

Once we have a basic model, we can exponentially speed up the cropping process by letting the model try to do the rest of the image cropping.

```shell
prodigy ner.teach corrected_db ./tmp_model ./local.jsonl --label PECHA
```

The model will take over, crop the rest of the dataset, and binarize the decision process into an ACCEPT or REJECT for you. If we notice that the model is not doing the cropping job well, the training dataset needs further correction with the ner.correct recipe.

```shell
prodigy ner.correct corrected_db ./tmp_model ./local.jsonl --label PECHA
```

## Prodigy output

Through Prodigy, we get the coordinates of the image border.

![Pasted image 20221114103852](https://user-images.githubusercontent.com/25195134/201636788-04526d07-9ca0-491d-bb74-7155d228e120.png)

## Image Cropping

The Python Imaging Library (PIL) provides the Python interpreter with image-editing capabilities. This library can crop the images based on the coordinates we get from Prodigy.
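As an illustration of these last two steps, here is a minimal sketch; it assumes the annotations were exported to JSONL (for example with Prodigy's db-out recipe) in Prodigy's image span format, where each task carries an "image" path and "spans" whose "points" are the [x, y] vertices of the drawn boundary. The file names (`annotations.jsonl`, `./cropped`) are placeholders.

```python
# Minimal sketch: crop images with PIL from Prodigy-style annotations.
# Assumption: each JSONL line looks like
#   {"image": "page001.jpg", "spans": [{"points": [[x, y], ...], ...}]}
import json
from pathlib import Path

from PIL import Image


def crop_annotations(jsonl_path: str, out_dir: str) -> None:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for line in Path(jsonl_path).read_text().splitlines():
        task = json.loads(line)
        image_path = Path(task["image"])
        with Image.open(image_path) as img:
            for i, span in enumerate(task.get("spans", [])):
                xs = [p[0] for p in span["points"]]
                ys = [p[1] for p in span["points"]]
                # PIL crops with a (left, upper, right, lower) box, so take
                # the bounding box of the drawn polygon or rectangle.
                box = (int(min(xs)), int(min(ys)), int(max(xs)), int(max(ys)))
                img.crop(box).save(out / f"{image_path.stem}_{i}{image_path.suffix}")


crop_annotations("annotations.jsonl", "./cropped")
```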

Alternatives

Manually cropping each image to the borders, but BDRC doesn't have the manpower to do this work manually.

Rationale

- Why was the currently proposed design selected over the alternatives?
  - First, manually cropping each image is a tedious job, and we don't have the manpower to do it.
  - Doing it without deploying to AWS would delay the completion date.
- What would be the impact of going with one of the alternative approaches?
  - Based on my understanding, this would be a better solution.
- Is the evaluation tentative, or is it recommended to use more time to evaluate different approaches?
  - Yes.

Drawbacks

We need AWS to host the images and a domain to make the annotation interface available to other people on the internet so they can draw the boundaries.

Useful References

- Prodigy: [Prodi.gy](https://prodi.gy/docs/computer-vision)
- [Using Prodigy for NLP text annotation](https://medium.com/mlearning-ai/using-prodigy-for-nlp-text-annotation-revolution-ai-for-spacy-e5561d93a361)
- [spaCy v3.4 documentation](https://spacy.io/usage/v3-4)

Unresolved Questions

- What is there that is unresolved (and will be resolved as part of fulfilling this request)? Prodigy is mostly used for Named Entity Recognition (NER), hence most of the documentation and online articles cover that use case. When I went through the documentation, it did not give the same specific instructions for images. Hence, there is no way of confirming that it will work the same for images as well.
- Are there other requests with the same or similar problems to solve? Not to my knowledge.

Parts of the System Affected

- Which parts of the current system are affected by this request?: None
- What other open requests are closely related to this request?: None
- Does this request depend on the fulfillment of any other request?: No
- Does any other request depend on the fulfillment of this request?: No

Future possibilities

- We can run Prodigy and the system built around it, so we don't have to crop images ourselves.
- The model will crop the images and save them in the same format in a specified location.

Infrastructure

**Front end**

- No need to do anything, because Prodigy has a web interface for drawing a rectangle or polygon shape onto the image.

**Backend**

- Make a big enough training dataset so that the model will be able to learn from it.
- Train the model and get the JSONL file from the database using the db-out recipe.
- Based on the coordinates, crop the image, and rename it based on the name of the image file.
- Save the cropped image in a directory of the S3 bucket (see the sketch after this list).
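A minimal boto3 sketch of the last backend step, assuming the cropped files already sit in a local directory and AWS credentials are configured in the environment; the `cropped/` key prefix and local directory are placeholders, while the bucket name is the one given under Named Concepts:

```python
# Minimal sketch: upload cropped images to the S3 bucket.
from pathlib import Path

import boto3

s3 = boto3.client("s3")
BUCKET = "image-processing.bdrc.io"


def upload_cropped(local_dir: str, prefix: str = "cropped/") -> None:
    for path in Path(local_dir).glob("*.jpg"):
        # Keep the original file name so cropped images stay traceable
        # to their source images.
        s3.upload_file(str(path), BUCKET, prefix + path.name)


upload_cropped("./cropped")
```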

Testing

We will measure the performance of the model by training it and testing it on the remaining PECHA images. We will check the accuracy by using the teach and correct recipes.

Documentation

Version History

Recordings

Work Phases

Non-Coding


Implementation


pipeline for processing and getting the images ready for the prodigy server

@ta4tsering

@ta4tsering

@Zakongjampa alternative method to stream s3 images into the prodigy using JSONL
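This could hypothetically be done by listing the bucket and writing presigned URLs into a JSONL stream that Prodigy's image recipes can read (e.g. with `--loader jsonl`); the key prefix and file names below are placeholders, and this is only a sketch of one possible approach:

```python
# Hypothetical sketch: build a JSONL stream of presigned S3 image URLs
# so the Prodigy web app can load the images directly from S3.
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "image-processing.bdrc.io"


def write_image_stream(prefix: str, out_path: str) -> None:
    paginator = s3.get_paginator("list_objects_v2")
    with open(out_path, "w") as f:
        for page in paginator.paginate(Bucket=BUCKET, Prefix=prefix):
            for obj in page.get("Contents", []):
                url = s3.generate_presigned_url(
                    "get_object",
                    Params={"Bucket": BUCKET, "Key": obj["Key"]},
                    ExpiresIn=3600,
                )
                f.write(json.dumps({"image": url}) + "\n")


write_image_stream("images/", "stream.jsonl")
```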

Training prodigy image-cropping model

@Zakongjampa

@ta4tsering

@Zakongjampa

@ta4tsering

Tests

@ta4tsering

ngawangtrinley commented 1 year ago

Optimize images for Full HD on low bandwidth (see the sketch after this list):

  1. Check the orientation
  2. If portrait: resize so height <= 1,080 px
  3. If landscape: resize so width <= 1,920 px
  4. Replace the decimal point in floats with an underscore in the resized file names, e.g. 1.5 --> 1_5, I1CZ17610227.tif --> I1CZ17610227f1_5.jpg
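A minimal PIL sketch of these four steps; the output directory is a placeholder, and it assumes the float in the resized file name is the downscale factor (e.g. `f1_5` for an image reduced to 1/1.5 of its original size):

```python
# Minimal sketch: downscale images for Full HD delivery and rename the output.
from pathlib import Path

from PIL import Image

MAX_H, MAX_W = 1080, 1920


def optimize_for_full_hd(src: str, out_dir: str) -> Path:
    path = Path(src)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with Image.open(path) as img:
        w, h = img.size
        # Steps 1-3: portrait images are capped at 1,080 px high,
        # landscape images at 1,920 px wide.
        scale = MAX_H / h if h >= w else MAX_W / w
        if scale < 1:
            img = img.resize((round(w * scale), round(h * scale)))
        # Step 4: write the downscale factor with "_" instead of ".",
        # e.g. 1.5 -> 1_5 (the factor's meaning is an assumption here).
        factor = str(round(1 / scale, 1) if scale < 1 else 1).replace(".", "_")
        out_file = out / f"{path.stem}f{factor}.jpg"
        img.convert("RGB").save(out_file, "JPEG")
    return out_file


optimize_for_full_hd("I1CZ17610227.tif", "./optimized")
```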
eroux commented 1 year ago

I think the initial concepts are a bit off:

ta4tsering commented 1 year ago

Okay, got it. The images we are dealing with are from the S3 server with the bucket name image-processing.bdrc.io, not served by BDRC through IIIF. And the system or model should work on any type of image, regardless of whether it is in pecha format or modern publication format.

eroux commented 1 year ago

yes, that's my point, that's why I thought the sentence

> Image: Refers to photographed or scanned images, which are served by BDRC using the IIIF protocol

(towards the beginning) should be replaced