An interactive web application developed with Streamlit, designed for making predictions using various machine learning models. The app dynamically generates forms and pages from JSON configuration files. ⭐ If you found this helpful, consider starring the repo!
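As a rough illustration of that config-driven approach, the sketch below shows how a page might be rendered from a JSON description. The schema and field names here are hypothetical assumptions for illustration, not the repo's actual format:

```python
import json
import streamlit as st

# Hypothetical page config -- the real JSON schema lives in the repo's
# configuration files; this layout is assumed purely for illustration.
config = json.loads("""
{
  "title": "Image Translator",
  "fields": [
    {"name": "target_language", "type": "select", "options": ["en", "fr", "de"]},
    {"name": "image", "type": "file"}
  ]
}
""")

st.title(config["title"])
inputs = {}
for field in config["fields"]:
    # Map each declared field type to the matching Streamlit widget.
    if field["type"] == "select":
        inputs[field["name"]] = st.selectbox(field["name"], field["options"])
    elif field["type"] == "file":
        inputs[field["name"]] = st.file_uploader(field["name"])
```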
🔍 Problem Description:
Text embedded within images is hard to translate: users often encounter images containing text in a foreign language, and manually extracting and translating that text is tedious and time-consuming. An automated solution that extracts text from images, cleans up the text regions, and translates the text into a desired language will greatly improve user experience and accessibility by breaking down language barriers.
🧠 Model Description:
To implement the image translation feature, the following steps and tools will be utilized:
Text Segmentation: We will first perform text segmentation on the image to isolate the text from the background using image processing techniques. This helps in accurately identifying the text regions and improving the quality of extraction.
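A minimal sketch of this step using classical OpenCV morphology is shown below; the threshold mode, kernel size, and area filter are illustrative assumptions, and a production version might swap in a dedicated text detector such as EAST or CRAFT:

```python
import cv2
import numpy as np

def segment_text_regions(image_path):
    """Return bounding boxes of likely text regions plus a binary mask."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Invert-threshold so text strokes become white foreground.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Dilate horizontally so characters on one line merge into a single blob.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))
    merged = cv2.dilate(binary, kernel, iterations=2)
    contours, _ = cv2.findContours(merged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes, mask = [], np.zeros(gray.shape, dtype=np.uint8)
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h > 100:  # drop tiny specks unlikely to be text
            boxes.append((x, y, w, h))
            mask[y:y + h, x:x + w] = 255
    return image, boxes, mask
```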
Text Extraction (Tesseract): We will use Tesseract, an open-source OCR (Optical Character Recognition) engine, to extract the text from the segmented areas. Tesseract is highly accurate and widely used for text extraction tasks, making it ideal for this purpose.
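Assuming the bounding boxes come from the segmentation sketch above, extraction via the pytesseract wrapper could look like this (requires the Tesseract binary and the relevant language data to be installed):

```python
import cv2
import pytesseract

def extract_text(image, boxes, lang="eng"):
    """OCR each segmented region with Tesseract via pytesseract."""
    snippets = []
    for x, y, w, h in boxes:
        region = image[y:y + h, x:x + w]
        gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)
        # --psm 6 assumes a uniform block of text inside each crop.
        text = pytesseract.image_to_string(gray, lang=lang, config="--psm 6")
        if text.strip():
            snippets.append(text.strip())
    return snippets
```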
Image Inpainting (LaMa Cleaner): Once the text is extracted, we will use LaMa Cleaner, a powerful inpainting tool, to clean up the image where the text was. This ensures that the image remains visually appealing after the text is removed or translated.
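LaMa Cleaner is typically launched as a standalone service (with something like `lama-cleaner --model=lama --device=cpu --port=8080`) rather than imported as a library, so this self-contained sketch substitutes OpenCV's classical `cv2.inpaint` to demonstrate the same image-plus-mask step; swapping in LaMa later would follow the same interface:

```python
import cv2

def remove_text(image, mask):
    """Fill the masked text regions from surrounding pixels.

    Stand-in for LaMa Cleaner: cv2.inpaint (Telea's method) performs
    classical inpainting; LaMa handles large regions better, but both
    take an image plus a binary mask of the areas to reconstruct.
    """
    return cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
```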
Translation (Hugging Face Language Translation Model): For the translation of the extracted text, we will use a Hugging Face language translation model, which is pre-trained on multiple languages and provides highly accurate translations. This will enable seamless translation of the extracted text into the user’s preferred language.
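With the `transformers` pipeline API, the translation step is only a few lines; the Helsinki-NLP/opus-mt-fr-en checkpoint below (French to English) is just one example pair:

```python
from transformers import pipeline

# Helsinki-NLP publishes opus-mt-* checkpoints for many language pairs;
# French -> English is chosen here purely as an example.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")

def translate_snippets(snippets):
    """Translate each OCR snippet; each result carries 'translation_text'."""
    return [translator(s)[0]["translation_text"] for s in snippets]
```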
This combination of text segmentation, extraction, inpainting, and translation makes the pipeline robust and efficient for image translation.
⏲️ Estimated Time for Completion:
5 days
🎯 Expected Outcome:
Once implemented, this feature will allow users to upload images containing text in any supported language, automatically extract that text, clean up the image, and provide a translated version of the text in a selected language. The expected outcomes include:
A fully functional, integrated image translation tool that enhances accessibility.
Seamless user experience in understanding content from different languages.
Improved engagement and satisfaction for users interacting with multilingual content.
Additionally, the feature will enhance the website’s utility for users needing quick, accurate translations from image-based text.
📄 Additional Context:
To be Mentioned while taking the issue:
GSSOC, Hacktober
Note:
Please review the project documentation and ensure your code aligns with the project structure.
Please ensure that either the predict.py file includes a properly implemented model_details() function or the notebook contains this function to print a detailed model report (a minimal sketch follows after these notes). The model will not be accepted without this function in place, as it is essential for generating the necessary model details.
Prefer using a new branch to resolve the issue, as it helps keep the main branch stable and makes it easier to manage and review your changes.
Strictly use the pull request template provided in the repository to create a pull request.
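Since the exact report format is project-specific, here is a hedged sketch of what a `model_details()` function in predict.py could look like; everything printed below is an illustrative placeholder, not the project's required wording:

```python
def model_details():
    """Print a detailed report of the model powering this feature.

    The contents here are placeholders; fill in the real components,
    inputs, and evaluation notes for your submission.
    """
    print("Model: image-translation pipeline")
    print("Components: text segmentation, Tesseract OCR, "
          "image inpainting, Hugging Face translation model")
    print("Input: image containing text; Output: cleaned image + translated text")
```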