Interactive Sign Language Learning Tool #49

Open · CalebAlbers opened this issue 8 years ago

CalebAlbers commented 8 years ago

Problem

Learning sign language can be difficult. Sure, there are lots of resources out there, be it YouTube videos, books, classes, et cetera, but almost all of them (personal instruction excluded) are one-way streets. They demonstrate, in two dimensions, the finger positions and movements you need. That's helpful, but it offers no feedback; a fundamental element of interactivity is missing.

Proposed Solution

By utilizing 3D printable hands (most notably the Parloma hand) along with hand-sensing 3D vision systems (in my initial suggestion, a LeapMotion controller), one could both demonstrate a sign visually and track a learner's attempt at reproducing that sign.

By combining this method of demonstration with hand tracking, a feedback system could be built. First, the learner is shown a simple sign along with its written/auditory equivalent. Then the learner makes an attempt, and the system grades its accuracy. Depending on the reproduction accuracy, the demonstration hand could either repeat the movement normally or exaggerate the parts of the movement the learner needs to home in on.
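To make that loop concrete, here is a minimal sketch in Python. Every name in it is hypothetical: demonstrate, capture_attempt, and compare_poses are stand-ins for the robotic-hand driver, the LeapMotion capture, and a pose-comparison metric that would all still need to be written.

import random

ACCURACY_TARGET = 0.9  # assumed pass threshold, not from the proposal

def demonstrate(sign, exaggeration=0.0):
    # Stand-in for driving the 3D-printed hand through the sign.
    print("robot hand performs '%s' (exaggeration=%.2f)" % (sign, exaggeration))

def capture_attempt():
    # Stand-in for grabbing a burst of LeapMotion frames.
    return "learner frames"

def compare_poses(sign, attempt):
    # Stand-in for a real pose-distance metric; returns 0.0-1.0.
    return random.random()

def train(sign):
    demonstrate(sign)
    while True:
        accuracy = compare_poses(sign, capture_attempt())
        if accuracy >= ACCURACY_TARGET:
            return accuracy
        # Replay, exaggerating in proportion to how far off the learner was.
        demonstrate(sign, exaggeration=1.0 - accuracy)

train("hello")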

This continuous feedback could foster very effective learning. With just a hand and wrist, only basic signs could be taught. However, getting the basics down and perfected is a major stepping stone in learning a new language (be it visual, written, or spoken).

Given the affordability of 3D printing, an entire solution could be built for around $200 in materials (software development not factored in). That price point would make it appealing both to individuals hoping to learn sign language and to clinical settings.

Potential Problems

Creating a learning system is hard! Software like Rosetta Stone has been in development for years and still costs hundreds, if not thousands, of dollars. Research on interactive learning is plentiful; however, taking the teaching methods that research suggests and actually implementing them in a practical, easy-to-use system is time-consuming, to say the least. That said, a proof of concept could potentially be made quickly by building on projects already in development (such as the Parloma hand and the libraries people have built for ASL tracking via the LeapMotion).

3D printed hands are great, there is no doubt. That said, a cheap rapid-prototyping print costs far less than something with the durability required for clinical or consumer settings. I can print a hand in PLA on my printer in a matter of hours (I've printed about five of them so far; go ahead, ask me anything :) ), but PLA is not strong enough for a viable product. Nylon or another high-impact material would be preferable. On top of that, higher-quality printing and/or injection molding would be needed to achieve a production-ready finish. Both of these raise the price.

On top of this, the LeapMotion is a great place to start, but it lacks some of the abilities that higher-cost alternatives offer. Namely, it is very sensitive to ambient IR light, and its hand and finger tracking doesn't always report high confidence (its rating of how sure it is that the learner's hand is truly making the movement the LeapMotion is picking up).
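As a rough illustration of working around the confidence problem, here is a sketch that simply skips low-confidence frames. It assumes the Leap Motion v2 Python bindings (Leap.Controller, Frame.hands, Hand.confidence), and the 0.7 cutoff is a guess that would need tuning:

import Leap  # Leap Motion v2 SDK bindings

MIN_CONFIDENCE = 0.7  # assumed cutoff; ambient IR pushes confidence down

def next_trusted_hand(controller):
    # Return the first hand the controller is reasonably sure about, or
    # None so the caller skips the frame rather than mis-grading the learner.
    for hand in controller.frame().hands:
        if hand.confidence >= MIN_CONFIDENCE:
            return hand
    return None

controller = Leap.Controller()
hand = next_trusted_hand(controller)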

Suggested Reading

Translating sign language in real time
A Cognitive Approach to Language Learning

s1037989 commented 8 years ago

If I understand your problem statement correctly, it's not that 3D images or even a physical object will by itself help with learning the language; it's that the computer can see how you're doing, and can even converse with you.

Perhaps to put it simply: we can dictate to our computers by voice, and we can interactively study spoken languages, because we can speak into a mic whose input a computer can process and check for errors or correctness.

But this doesn't exist [currently/readily] for sign language. So the problem is probably bigger than just helping someone learn the language; it comes down to the comparison with spoken languages. ASL is a language like any other, yet it is currently one that isn't readily available to the computer. It's unidirectional: theoretically, Google Translate could translate text to ASL (perhaps it already does) through computer-generated animated GIFs or the like, but you couldn't translate ASL to English, because that 3D ASL capture capability doesn't exist.
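A tiny sketch of that asymmetry, with entirely made-up names: text-to-ASL can be a lookup into pre-rendered animations, while ASL-to-English stalls on the missing 3D capture step.

# Stand-in data: a dictionary of pre-rendered sign animations.
ASL_ANIMATIONS = {"hello": "hello.gif", "thank you": "thank_you.gif"}

def text_to_asl(phrase):
    # The feasible direction: serve a canned animation for the phrase.
    return ASL_ANIMATIONS.get(phrase.lower())

def asl_to_english(hand_frames):
    # The missing direction: no off-the-shelf 3D sign recognizer exists.
    raise NotImplementedError("needs 3D hand capture + sign recognition")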

Very interesting stuff! Killer idea!!

I'm extremely intrigued by your proposal. I wonder if Eric would be interested in this for our client / his institution Center for Hearing and Speech -- not sure if this would have any impact on their mission. Regardless, what an interesting idea.

Ok, so the hardware side of it sounds "simple". What about the software? The software sounds insanely complicated. I'd be interested in seeing a pseudo-code demonstration just to help communicate the concept of how this would work. Even pseudo code may be too complicated...

The beauty of pseudo code, of course, is that it's like a cartoon. A cartoon can do anything you dream, and so can your pseudo code. Imagine any function or routine you like; it just works. Give it an excellent name, and the reader must assume that the function exists and works flawlessly. Be prepared, of course, for a request for insight into a given routine.

This is valid, but insufficient pseudo code:

void main() {
  sign sl;                             /* whatever type a sign is */
  while ((sl = get_signlanguage())) {  /* imaginary: capture + recognize a sign */
    respond(sl);                       /* imaginary: grade it and react */
  }
}

You'd want to explain those functions, just a tad... :)
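For example, here is a slightly less cartoonish version in Python; every helper in it is still imaginary, just with more descriptive seams:

def read_hand_frames():
    return []  # stand-in: would pull a burst of LeapMotion frames

def classify_sign(frames):
    return "hello"  # stand-in: map 3D hand motion to a sign label

def get_signlanguage():
    frames = read_hand_frames()
    return classify_sign(frames) if frames else None  # no hand ends the session

def respond(sign):
    print("grading and replaying '%s'" % sign)  # stand-in for robot-hand feedback

while (sl := get_signlanguage()) is not None:
    respond(sl)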

Anyway, I digress. Awesome stuff, Caleb. With the potential for a connection to Center for Hearing and Speech, we might be able to pass this through KIL and get some resources allocated!!

ehumes commented 8 years ago

Fascinating idea!

lizmayfield13 commented 8 years ago

Going off the thought of the LeapMotion, it really seems like a way to build something like Dragon, but as a usable option for ASL.