daeus opened this issue 5 years ago
Vision Statement: We are working with Software Engineers and Data Scientists to build an open source auto-captioning tool for blind people so that they can "see" the images on the internet. It also gives developers an easier way to integrate auto-captioning as part of web accessibility.
Soapie is a simple image auto-captioning model and tool for developers to generate captions for blind people as part of an accessibility goal. Although the model isn't new in the academic field, it still isn't widely available as a daily tool that most people can use. We want to build a very simple one as a starting point.
To achieve this goal, we have defined the following milestones.
Great initiative! I'm really interested to understand more about how this will be achieved.
Very interested in this! I would love a hint on how you plan to do the auto-captioning, and whether there is a way to create some context-aware captioning. By this I mean that, rather than analyzing the image and adding tags like "dog and man wearing blue", it could link the caption with the surrounding text. So if the article is talking about "John loves dogs", then the caption would read "John is wearing blue and walking his dog Mozzy".
Other than that, very nice vision statement!
Great project and great roadmap, I will be following this closely!
Open Canvas: https://docs.google.com/presentation/d/1te3tF-5N03oAU44CFYu1yTQvfXLomi1mVIOG4SZXgU8/edit?usp=sharing
Your project is incredibly interesting! Is the idea that web page developers will then be able to incorporate this code into their pages?
Thanks @victordiaz! Context-aware captioning is exactly what I think we need. However, it seems that with the currently available technology it's still far from perfect. I am guessing it will also involve some NLU for understanding the context. It would be great if you have any directions you can point us to.
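To make the idea concrete, here is a toy sketch of what "linking" a generic caption to the article text could look like. Everything here is an illustrative assumption, not a planned design: a real system would need proper NLU and coreference resolution rather than the naive string matching below.

```javascript
// Toy sketch: enrich a generic image caption with a proper noun pulled
// from the surrounding article text. Purely illustrative — a real
// implementation would use an NLU model, not regex substitution.
function contextualizeCaption(genericCaption, articleText) {
  // Naively collect capitalized words as candidate proper nouns,
  // filtering out common sentence-initial words and pronouns.
  const stopWords = ["The", "A", "An", "He", "She", "It", "They"];
  const names = (articleText.match(/\b[A-Z][a-z]+\b/g) || [])
    .filter((w) => !stopWords.includes(w));
  if (names.length === 0) return genericCaption;
  // Swap the generic subject "a man" for the first detected name.
  return genericCaption.replace(/\ba man\b/, names[0]);
}

const caption = contextualizeCaption(
  "a man wearing blue walking a dog",
  "John loves dogs. He walks every morning."
);
// caption is now "John wearing blue walking a dog"
```

The gap between this toy and the example in the comment above (knowing the dog's name is Mozzy, choosing the right entity when several appear) is exactly where the hard NLU work would be.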
Thanks @jmtaylor86! Yes, hopefully. We are still designing the best way to implement this so that it will be user-friendly for web developers. If you have any ideas, please feel free to let us know.
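One hypothetical integration style, sketched under heavy assumptions: a small client-side helper that finds images missing alt text and fills them in from a captioning function (for example, a call to a hosted model). The function and record shapes below are invented for illustration and are not a real Soapie API.

```javascript
// Hypothetical sketch of how a web developer might integrate
// auto-captioning. All names here are illustrative assumptions.

// An image needs a caption if it has no alt text or an empty one.
function needsCaption(img) {
  return img.alt === undefined || img.alt === null || img.alt.trim() === "";
}

// Given image records and an async captioning function, return new
// records with the missing alt text filled in. Images that already
// have alt text are left untouched.
async function fillCaptions(images, captionFn) {
  return Promise.all(
    images.map(async (img) =>
      needsCaption(img) ? { ...img, alt: await captionFn(img.src) } : img
    )
  );
}
```

In a real page, the records would come from `document.querySelectorAll("img")` and `captionFn` might POST each image URL to a captioning service; keeping the caption source as a plain function makes the helper easy to test and lets developers swap in any backend.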
Project Lead: @cheukting @daeus
Mentor: @petergrabitz
Welcome to OL7, Cohort D! This issue will be used to track your project and progress during the program. Please use this checklist over the next few weeks as you start Open Leadership Training :tada:.
Before Week 1 (Jan 30): Your first mentorship call
Before Week 2 (Feb 6): First Cohort Call (Open by Design)
Before Week 3 (Feb 13): Mentorship call
Before Week 4 (Feb 20): Cohort Call (Build for Understanding)
Week 5 and beyond
#mozsprint
This issue is here to help you keep track of work as you start Open Leaders. Please refer to the OL7 Syllabus for more detailed weekly notes and assignments past week 4.