Call-for-Code / DroneAid

Aerial scout for first responders. DroneAid uses machine learning to detect calls for help on the ground placed by those in need.
Apache License 2.0

Create a model to detect hand-drawn "SOS" #7

Open krook opened 5 years ago

krook commented 5 years ago

Is your feature request related to a problem? Please describe. We currently assume that the person in need will have a kit with the printed symbols available. We should improve the system to show how a person could recreate the symbols by hand, and in turn make the recognition more tolerant of those hand-drawn variants.

krook commented 5 years ago

Very timely: https://www.cnn.com/2019/10/17/world/missing-australian-woman-sos-rescued-trnd/

anushkrishnav commented 3 years ago

I can work on this

krook commented 3 years ago

Thanks @anushkrishnav

sarrah-basta commented 2 years ago

Is this issue still open? I can see it is open and unassigned. My proposal for resolving it: since we already have models for the symbols in the kit, and a hand-written SOS is not one of them, it could be a good idea to use a pretrained model for handwritten letter recognition and integrate it with our existing models.

krook commented 2 years ago

Hi @sarrah-basta. This hasn't been worked on yet. Please feel free to take a shot. Thank you! And I agree, if we can reuse a model that would be ideal. Maybe the Model Asset Exchange has something to build upon.
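A minimal sketch of what reusing a Model Asset Exchange model could look like. MAX models conventionally expose a `POST /model/predict` endpoint on a local container; the host/port, the `image` form-field name, and the shape of the JSON response shown here are assumptions, not confirmed parts of this project.

```python
# Hypothetical sketch: querying a locally deployed MAX-OCR container.
# The /model/predict path follows the MAX convention, but the URL,
# field name, and response shape below are assumptions.
import json
import urllib.request
import uuid


def build_multipart(field_name, filename, payload, content_type="image/jpeg"):
    """Encode a single file upload as a multipart/form-data body (stdlib only)."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field_name}"; filename="{filename}"\r\n'
        f"Content-Type: {content_type}\r\n\r\n"
    ).encode() + payload + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"


def detect_text(image_bytes, url="http://localhost:5000/model/predict"):
    """Send a drone frame to the (assumed) MAX-OCR endpoint and return its JSON."""
    body, content_type = build_multipart("image", "frame.jpg", image_bytes)
    req = urllib.request.Request(url, data=body, headers={"Content-Type": content_type})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # e.g. {"status": "ok", "text": [["SOS"]]}
```

The multipart helper is separated out so the request construction can be exercised without a running container.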

bhavyagoel commented 2 years ago

Hi @krook, hope you are doing well! Since the issue is open and unassigned, I can work on this. I would propose simply recognizing the letters written on the ground, since someone in need might write other information as well, for example "INJURED". Based on the recognized text, we can then classify the message. For this we can use a pretrained model, and as you mentioned, the Model Asset Exchange seems like a good choice. We could also use MediaPipe to detect whether a person is present, and in what state.

sarrah-basta commented 2 years ago

@krook @bhavyagoel yes, as you both mentioned, we can use Optical Character Recognition from the Model Asset Exchange to detect the text. It can then be classified either manually or with a classification model such as a Naive Bayes classifier. Adding the Face Detection model from MediaPipe, as mentioned, would also be a huge plus.
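The classification step discussed above can be sketched very simply before reaching for a trained classifier. This is a hypothetical keyword-matching baseline: the category names and keyword sets are illustrative assumptions, not part of the DroneAid codebase.

```python
# Hypothetical baseline: map raw OCR output to an alert category by keyword.
# Categories and keywords are illustrative; a Naive Bayes classifier could
# replace this lookup once labeled examples exist.
CATEGORIES = {
    "SOS": {"SOS", "S.O.S", "HELP"},
    "MEDICAL": {"INJURED", "HURT", "MEDIC", "DOCTOR"},
    "SUPPLIES": {"FOOD", "WATER"},
}


def classify_message(ocr_text):
    """Return the first category whose keywords overlap the OCR tokens."""
    tokens = {t.strip(".,!?").upper() for t in ocr_text.split()}
    for category, keywords in CATEGORIES.items():
        if tokens & keywords:
            return category
    return "UNKNOWN"
```

A lookup like this is robust to OCR noise only at the whole-word level; fuzzy matching (e.g. edit distance) would likely be needed for shaky hand-drawn letters.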

krook commented 2 years ago

Thanks for your interest, everyone. Since @sarrah-basta replied first to the latest request, why don't you take the first pass at it? If you need any feedback or review of the proposed approach, you can tag @bhavyagoel and me. Sound like a plan?

sarrah-basta commented 2 years ago

Yep, sure. I'll start by finalising the models among the ones we proposed and look at our codebase to understand how they can be integrated. Thanks!
