yashasvini121 / predictive-calc

An interactive web application developed with Streamlit, designed for making predictions using various machine learning models. The app dynamically generates forms and pages from JSON configuration files. ⭐ If you found this helpful, consider starring the repo!
https://predictive-calc.streamlit.app/
MIT License

Addition of Image Translation Model Feature #114

Open UTkARsh-RaJ01 opened 1 day ago

UTkARsh-RaJ01 commented 1 day ago

🔍 Problem Description: The problem we aim to address is the lack of an easy way to translate text that is embedded within images. Users often encounter images containing text in foreign languages, and manually extracting and translating that text is tedious and time-consuming. An automated solution that extracts text from images, cleans up the text areas, and translates the text into the desired language will greatly improve user experience and accessibility. It simplifies the process of understanding content in other languages and helps break down language barriers.

🧠 Model Description: To implement the image translation feature, the following steps and tools will be utilized:

Text Segmentation: We will first perform text segmentation on the image to isolate the text from the background using image processing techniques. This helps in accurately identifying the text regions and improving the quality of extraction.
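A rough sketch of what this step could look like, assuming OpenCV is used for the segmentation (the function name, kernel sizes, and thresholds below are only illustrative):

```python
# Illustrative sketch: isolating candidate text regions with OpenCV before OCR.
import cv2
import numpy as np

def find_text_regions(image_bgr: np.ndarray, min_area: int = 100) -> list[tuple[int, int, int, int]]:
    """Return candidate text bounding boxes as (x, y, w, h) tuples."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Adaptive thresholding copes with uneven lighting better than a global threshold.
    binary = cv2.adaptiveThreshold(
        gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 31, 15
    )
    # Dilate horizontally so characters on the same line merge into one region.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))
    dilated = cv2.dilate(binary, kernel, iterations=2)
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]
    # Drop tiny regions that are unlikely to be text.
    return [b for b in boxes if b[2] * b[3] >= min_area]
```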

Text Extraction (Tesseract): We will use Tesseract, an open-source OCR (Optical Character Recognition) engine, to extract the text from the segmented areas. Tesseract is highly accurate and widely used for text extraction tasks, making it ideal for this purpose.
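A minimal sketch of the extraction step, assuming pytesseract is used as the Python wrapper for Tesseract and the bounding boxes come from the segmentation sketch above (helper names are illustrative):

```python
# Illustrative sketch: running Tesseract on each segmented region.
# pytesseract wraps the Tesseract binary, which must be installed separately.
import pytesseract

def extract_text(image_bgr, boxes, lang: str = "eng") -> list[str]:
    """Run OCR on each (x, y, w, h) box and return the recognized strings."""
    texts = []
    for x, y, w, h in boxes:
        crop = image_bgr[y : y + h, x : x + w]
        # --psm 6 treats the crop as a single uniform block of text.
        text = pytesseract.image_to_string(crop, lang=lang, config="--psm 6")
        texts.append(text.strip())
    return texts
```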

Image Inpainting (LaMa Cleaner): Once the text is extracted, we will use LaMa Cleaner, a powerful inpainting tool, to clean up the image where the text was. This ensures that the image remains visually appealing after the text is removed or translated.
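A small sketch of the inpainting step. LaMa Cleaner is the intended tool, but since it normally runs as a separate service, this stand-in uses OpenCV's built-in Telea inpainting on a mask built from the detected boxes, purely to illustrate the flow:

```python
# Illustrative stand-in: erasing the original text regions before compositing the translation.
# In the actual feature this step would be handled by LaMa Cleaner.
import cv2
import numpy as np

def remove_text(image_bgr, boxes, radius: int = 3):
    """Inpaint the regions covered by the detected text bounding boxes."""
    mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)
    for x, y, w, h in boxes:
        mask[y : y + h, x : x + w] = 255
    return cv2.inpaint(image_bgr, mask, radius, cv2.INPAINT_TELEA)
```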

Translation (Hugging Face Language Translation Model): For the translation of the extracted text, we will use a Hugging Face language translation model, which is pre-trained on multiple languages and provides highly accurate translations. This will enable seamless translation of the extracted text into the user’s preferred language.
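A sketch of the translation step using the transformers pipeline API; the Helsinki-NLP/opus-mt checkpoint named here is only an example, not a decided model choice:

```python
# Illustrative sketch: translating the extracted strings with a Hugging Face model.
from transformers import pipeline

def translate_texts(texts, model_name: str = "Helsinki-NLP/opus-mt-fr-en") -> list[str]:
    """Translate each extracted string with the given translation checkpoint."""
    translator = pipeline("translation", model=model_name)
    results = translator(texts)
    return [r["translation_text"] for r in results]
```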

This combination of text segmentation, extraction, inpainting, and translation makes the model robust and efficient for image translation.
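Putting the four steps together, the overall flow could look roughly like this (reusing the illustrative helpers from the sketches above):

```python
# Illustrative end-to-end flow: segment, extract, inpaint, translate.
def translate_image(image_bgr, model_name: str = "Helsinki-NLP/opus-mt-fr-en"):
    boxes = find_text_regions(image_bgr)
    texts = extract_text(image_bgr, boxes)
    cleaned = remove_text(image_bgr, boxes)
    translations = translate_texts(texts, model_name=model_name)
    # Return the cleaned image plus (original, translated) pairs.
    return cleaned, list(zip(texts, translations))
```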

⏲️ Estimated Time for Completion: 5 days

🎯 Expected Outcome: Once implemented, this feature will allow users to upload images containing text in any language, automatically extract that text, clean up the image, and provide a translated version of the text in a selected language. The expected outcomes include:

- A fully functional, integrated image translation tool that enhances accessibility.
- A seamless user experience for understanding content in different languages.
- Improved engagement and satisfaction for users interacting with multilingual content.

Additionally, the feature will enhance the website’s utility for users needing quick, accurate translations of image-based text.
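For context, the user-facing side in Streamlit could look roughly like the sketch below. The actual integration would go through the repo's JSON-driven page configuration; the widgets and the helper call here are only illustrative:

```python
# Illustrative Streamlit page: upload an image, run the pipeline, show results.
import cv2
import numpy as np
import streamlit as st
from PIL import Image

st.title("Image Translation")
uploaded = st.file_uploader("Upload an image containing text", type=["png", "jpg", "jpeg"])

if uploaded is not None:
    # Convert the upload to a BGR array so the OpenCV-based helpers can use it.
    image_bgr = cv2.cvtColor(np.array(Image.open(uploaded).convert("RGB")), cv2.COLOR_RGB2BGR)
    cleaned, pairs = translate_image(image_bgr)  # illustrative helper from the sketch above
    st.image(cv2.cvtColor(cleaned, cv2.COLOR_BGR2RGB), caption="Image with text removed")
    for original, translated in pairs:
        st.write(f"{original} → {translated}")
```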

📄 Additional Context:

To be mentioned while taking the issue: GSSOC, Hacktober

Note:

Teja-m9 commented 4 hours ago

Hey @UTkARsh-RaJ01, please assign this to me.

UTkARsh-RaJ01 commented 4 hours ago

@yashasvini121 please assign this task to me under GSSoC.