open-source-ideas / ideas

💡 Looking for inspiration for your next open source project? Or perhaps you've got a brilliant idea you can't wait to share with others? Open Source Ideas is a community built specifically for this! 👋

An app to help blind people navigate while walking. #188

Open sunn-e opened 5 years ago

sunn-e commented 5 years ago

Project description

I was thinking of an Android/iOS app that can help people with visual impairments navigate through crowds better. We can use the phone's camera to see from which side more people are coming on a footpath and give voice instructions to the user to change direction.
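The core decision ("which side are more people coming from?") can be prototyped independently of any particular detector. Here is a minimal Python sketch, assuming some upstream model already yields person bounding boxes; all names and thresholds are hypothetical:

```python
def suggest_direction(person_boxes, frame_width):
    """Given person bounding boxes as (x_min, x_max) pixel spans,
    count how many people are on each half of the frame and suggest
    which way the user should drift to avoid the denser side."""
    center = frame_width / 2
    left = sum(1 for x_min, x_max in person_boxes if (x_min + x_max) / 2 < center)
    right = len(person_boxes) - left
    if left == right:
        return "keep straight"
    return "move right" if left > right else "move left"

# Two people on the left half of a 640-px-wide frame, one on the right:
print(suggest_direction([(10, 60), (100, 150), (500, 560)], 640))  # move right
```

A real app would feed this from a detector's output each frame and pass the string to text-to-speech.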

Relevant Technology

I'm open to any technology that will make it happen as soon as possible. We can use machine learning solutions to analyze from which side people are coming and going. I know this would be limited to the field of view of a smartphone camera, but we can later take it further with 360° cameras, which will enable better navigation as the tech gets cheaper.


Complexity and required time

There are many TensorFlow APIs to assist us with the object detection task. I think it can take anywhere from a week to a year.

Update 0: We have started working. Currently two apps are in development, with many more to come. We are looking for volunteers who want to do good for society. Please visit https://github.com/aamba to know more.

sunn-e commented 5 years ago

Feel free to ask any question.

ghost commented 5 years ago

I have trouble understanding how the app should be used. Will the user point the smartphone camera in front of their face, and a voice will then tell them to move to the left to avoid collisions with other people?

sunn-e commented 5 years ago

Yes, exactly. The phone will be in front of them, either in hand or on some kind of accessory. For output, voice or haptic feedback patterns could help too.

ghost commented 5 years ago

Since the user is blind or has a severe visual impairment, would settings (like output through voice or haptic feedback) and switching to camera mode be controlled via voice?

sunn-e commented 5 years ago

That will not be our primary focus. Starting the app is a very easy thing. I have a relative who can use his old Samsung Android phone very well despite being completely blind. He uses "TalkBack" mode under the accessibility settings. I found this cool article: https://www.insider.com/how-blind-people-use-smartphones-2017-2

ghost commented 5 years ago

Nice, that sounds great. I have thought about the way the app communicates collisions. I think it should work like a car assistant (beep or vibrate when you are about to hit someone), because currently I can't imagine how the voice would work if a person walks diagonally; as far as I know, TensorFlow doesn't support guessing the direction a person is walking. Also, there are some problems with voice as feedback: what if the user can't move left or right without colliding with other people? And how often would the voice give a new command?

anshulxyz commented 5 years ago

This is a very interesting project.

> how often would the voice give a new command?

I think a regular interval of approximately 5 seconds, saying "keep walking straight", would be nicer than total silence until something occurs, since it will tell the person that:

  1. He is on the right path
  2. The path in front is clear
  3. And the software is working
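The interval-based reassurance described above can be sketched as a tiny decision function. This is an illustrative Python sketch; the function name and the 5-second constant are assumptions, not anything the project has defined:

```python
ANNOUNCE_INTERVAL_S = 5.0  # seconds of silence before a reassurance message

def next_announcement(obstacle_ahead, seconds_since_last):
    """Decide what the voice should say right now.
    Obstacles are announced immediately; otherwise a periodic
    'keep walking straight' reassures the user that the path is
    clear and the app is still running."""
    if obstacle_ahead:
        return "obstacle ahead"
    if seconds_since_last >= ANNOUNCE_INTERVAL_S:
        return "keep walking straight"
    return None  # stay silent between intervals
```

The caller would invoke this once per processed frame and speak any non-None result.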

sunn-e commented 5 years ago

You are both correct; you understood what I'm trying to do. Should we begin? At least we can start with a basic TensorFlow API Android app. As time passes, we can come up with some novel approaches or train on our own dataset. I'm currently working on two research papers based on object detection, which is why I thought my resources might be useful. I have all major cloud platforms to test any models. Interested?

anshulxyz commented 5 years ago

I am interested. I'm currently working on some projects at work; one of them involves object detection in images, and we've been using YOLOv3. I would like to learn about depth detection: how far away the person in front of us is, whether they are within colliding distance, or at what speed they are coming towards us.
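Without a depth sensor, one crude single-camera approximation is the pinhole model: a person's apparent height in pixels shrinks in proportion to distance. A hedged Python sketch, assuming a calibrated focal length and an average person height (both values here are made-up calibration constants):

```python
def estimate_distance_m(box_height_px, focal_length_px=1000.0, person_height_m=1.7):
    """Rough monocular distance via the pinhole camera model:
    distance = focal_length * real_height / apparent_height.
    focal_length_px and person_height_m are assumed calibration values."""
    return focal_length_px * person_height_m / box_height_px

def is_colliding_distance(box_height_px, threshold_m=2.0):
    """Flag people whose estimated distance is under the alert threshold."""
    return estimate_distance_m(box_height_px) < threshold_m

# A person whose bounding box is 850 px tall is roughly 2 m away:
print(round(estimate_distance_m(850), 2))  # 2.0
```

Speed could then be approximated by differencing this estimate across frames, though real-world accuracy would need per-device calibration.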

ghost commented 5 years ago

I'm interested as well but quite busy till the end of next week.

sunn-e commented 5 years ago

> we've been using YOLOv3.

Great, I have worked with YOLO too. We can consider it. Some research work is needed though; Google Scholar may help.

sunn-e commented 5 years ago

> I'm interested as well but quite busy till the end of next week

No issues. We can start whenever you get time.

KOLANICH commented 5 years ago

It seems to be of comparable complexity to a self-driving car.

sunn-e commented 5 years ago

@KOLANICH We will work in increments.

anshulxyz commented 5 years ago

So, how do we get started?

jsbroks commented 5 years ago

I have a background in object detection and can help out if needed.

sunn-e commented 5 years ago

Thanks @FredrikAugust for assigning labels. That should summarize what we should be working on.

sunn-e commented 5 years ago

> I have a background in object detection and can help out if needed.

Great. What have you used for object detection?

sunn-e commented 5 years ago

> So, how do we get started?

I am creating a GitHub repo for the time being. I will send an invitation to join the organization to everyone who has contributed. I am thinking of creating a Slack or Gitter to keep up with each other.

sunn-e commented 5 years ago

Please join the chat room. :) https://gitter.im/The-Amba-Project/community?utm_source=share-link&utm_medium=link&utm_campaign=share-link

24hari1998 commented 5 years ago

Hey guys, I am also interested in the project. I have some experience in machine learning, but I can learn anything on the fly if needed. I would like to contribute, as I am planning to do a similar project for my college as well.

manginav commented 4 years ago

Hey, I have experience with backend APIs. If you need help, please add me to the project.

sunn-e commented 4 years ago

I looked into this, and it really is at the level of an autonomous vehicle. I think I underestimated the problem.

wmertens commented 4 years ago

Some blind people have learned to navigate with echolocation: they click their tongue and listen for the echoes. Maybe a simple app could generate a continuous series of clicks and allow changing the clicks (volume, frequency) with your thumb on the screen.

Wout.
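Such a click generator is mostly signal arithmetic. A rough Python sketch that produces raw audio samples; the parameter names and defaults are illustrative, and a real app would stream these samples to the platform's audio API:

```python
import math

def click_train(rate_hz=4.0, tone_hz=3000.0, volume=0.5,
                click_ms=5, sample_rate=44100, duration_s=1.0):
    """Generate evenly spaced short sine-burst 'clicks' as float
    samples in [-1, 1]; rate, pitch and volume are the knobs the
    on-screen thumb slider would control."""
    total = int(sample_rate * duration_s)
    period = int(sample_rate / rate_hz)            # samples between click onsets
    click_len = int(sample_rate * click_ms / 1000)  # samples per click burst
    samples = []
    for n in range(total):
        phase = n % period
        if phase < click_len:
            samples.append(volume * math.sin(2 * math.pi * tone_hz * phase / sample_rate))
        else:
            samples.append(0.0)   # silence between clicks
    return samples

clicks = click_train()
print(len(clicks))  # 44100 samples = 1 s of audio at 44.1 kHz
```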

Reopened #188 https://github.com/open-source-ideas/open-source-ideas/issues/188.


sunn-e commented 4 years ago

@wmertens I recently saw a Samsung ad where there was an app called "Good Vibes". That was a good idea too. I wonder if there is any open source alternative.

pearl2201 commented 4 years ago

> I would like to learn about depth detection: how far away the person in front of us is, whether they are within colliding distance, or at what speed they are coming towards us.

I don't think we need machine learning for depth detection. You can use an HC-SR04 ultrasonic sensor to check if something is in front. I am thinking of a blind staff (Arduino + HC-SR04 sensor + buzzer) that detects obstacles or stairs and alerts with a "beep" sound.
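For reference, the HC-SR04 arithmetic is simple: the sensor reports an echo pulse whose width is the round-trip time of sound, so halving it and multiplying by the speed of sound gives the distance. Here is a Python sketch of the same logic the Arduino sketch would use; the 50 cm beep threshold is an illustrative choice:

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # at roughly 20 °C

def echo_to_distance_cm(echo_pulse_us):
    """Convert an HC-SR04 echo pulse width (microseconds) to distance.
    The pulse spans the round trip to the obstacle and back, so halve it."""
    return echo_pulse_us * SPEED_OF_SOUND_CM_PER_US / 2

def should_beep(echo_pulse_us, threshold_cm=50):
    """Sound the buzzer when an obstacle is closer than the threshold."""
    return echo_to_distance_cm(echo_pulse_us) < threshold_cm

# A 2915 µs echo corresponds to roughly 50 cm:
print(round(echo_to_distance_cm(2915)))  # 50
```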

sunn-e commented 4 years ago

@pearl2201 that sounds like a good project

anshulxyz commented 4 years ago

Google Maps now helps visually impaired people cross the street and stay on course

https://www.theverge.com/2019/10/10/20908882/google-maps-better-walking-directions-help-people-visual-impairments

sunn-e commented 4 years ago

https://github.com/aamba

jshankarc commented 4 years ago

Sounds interesting. Moreover, I have already started working on this project, and all the comments helped me get a new perspective on how to solve the issue. To start, we can record a video in a park on a track (similar to a self-driving car), along with the angle of body rotation (e.g. record video with an accelerometer fitted to a cycle). Then use a smartphone camera to view the road and guide the blind user with haptic vibration.

sunn-e commented 4 years ago

> @wmertens I recently saw a Samsung ad where there was an app called "Good Vibes". That was a good idea too. I wonder if there is any open source alternative.

Can anybody find or build something like this?

Sh1-Zu3 commented 6 months ago

Hi, I know I'm too late, but is the project still working? Can I use it for my project?