w201rdada / portfolio-pdurkin84

portfolio-pdurkin84 created by GitHub Classroom

Matt's Feedback on Big Idea #2 #6

Closed thielen24 closed 6 years ago

thielen24 commented 6 years ago

Here's what I understand to be your idea in a nutshell: train an algorithm on an individual's voice, associated vocalizing indicators (e.g. muscle movements), and vocabulary usage after the individual has been diagnosed with a degenerative neurological disease, but before irreversible symptoms take hold. Then use this training data to assist the individual with speech as their disease progresses.

I feel like this idea is ripe to be implemented. As you mentioned, the technology and techniques already exist. Commercial off-the-shelf products such as contact microphones and video cameras could conceivably be integrated to achieve the solution you're looking for. Phones already process voice via machine learning locally, so supplying enough portable power for adequate, long-lasting performance shouldn't be unsolvable, especially since most sufferers would be confined to a wheelchair by the time they finally need your product (batteries and processing hardware could be integrated into wheelchairs).

You lose me at the conclusion of your "middle." How would forgotten words be assisted? I could imagine a recommendation engine based on the training data usefully predicting the next words (mobile keyboards are already pretty good at this), but if the user can't begin to vocalize a word because they can't remember it, how would the algorithm confirm the prediction? I'm not an expert on the target diseases, but I think an augmented reality approach (think Google Glass) with eye tracking could offer a seamless solution until BCI becomes effective.
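
To make the prediction idea concrete, here's a minimal sketch of a personalized next-word predictor trained only on the user's own past utterances. It's a toy bigram model in Python with made-up example sentences; a real system would use a far richer model, but the principle is the same:

```python
from collections import Counter, defaultdict

class NextWordPredictor:
    """Toy bigram model trained on a user's own past utterances.
    Illustrative only -- a real system would use a far richer model."""

    def __init__(self):
        self.bigrams = defaultdict(Counter)

    def train(self, utterances):
        for sentence in utterances:
            words = sentence.lower().split()
            for prev, nxt in zip(words, words[1:]):
                self.bigrams[prev][nxt] += 1

    def predict(self, prev_word, k=3):
        # Most frequent continuations of this word in the user's own speech.
        return [w for w, _ in self.bigrams[prev_word.lower()].most_common(k)]

predictor = NextWordPredictor()
predictor.train([
    "I would like a cup of tea",
    "I would like to go outside",
])
print(predictor.predict("would"))   # -> ['like']
```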

Overall, I really like your proposal. It could have a hugely positive effect by simply integrating current technology. Furthermore, the target market is clearly defined and the product would likely be at least partially covered by health insurance. Nice job!

pdurkin84 commented 6 years ago

Thanks for such a positive response, Matt. I will look at the section you mention and make it clearer.

As you pointed out, it may be tricky to predict words that the user cannot vocalize or even remember. My hope is that, because the machine learning algorithm will have data specific to this person, it might be able to pick out words the user would have used previously, possibly within the context of the current conversation. You are correct, though, that it may not be possible: if the person has forgotten a word, how can they confirm that the predicted word is right, since they no longer know it? I was thinking along the lines of SwiftKey as used by Stephen Hawking: it knows his vocabulary and can predict his usual words to speed up his speech.
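
One way around the confirmation problem might be recognition rather than recall: the system proposes candidates from the personal model, and the user accepts or rejects them through whatever input they can still control (gaze dwell, a switch, etc.). A rough sketch of that loop, with `select` as a purely hypothetical stand-in for that confirmation input:

```python
def confirm_by_selection(candidates, select):
    """Offer predicted words one at a time; the user accepts or rejects each
    through whatever input they can still control (gaze dwell, switch, ...).
    `select` is a hypothetical stand-in for that input."""
    for word in candidates:
        if select(word):
            return word
    return None  # nothing accepted; fall back to broader suggestions

# Example: the user recognises "garden" when shown it,
# even if they could not have produced the word unprompted.
chosen = confirm_by_selection(
    ["garden", "kitchen", "market"],
    select=lambda word: word == "garden",
)
print(chosen)  # -> garden
```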

I really like the rest of your analysis. I have been thinking about the same things: running all the data through the phone, adapting off-the-shelf technologies to make them unobtrusive, and even wearable computers. Wearable computers might have the added benefit (in the case of PD) of encouraging the user to exercise more as they move around, since exercise has significant benefits for sufferers.