The voice tells the user how to move their fingers to press a key. Currently it is just a hash: each key must be written out by hand. I'd like the keyboard model to have a method that can determine which finger is used to press each key (the same algorithm can be applied to any keyboard layout, since it's based on the positions of physical keys rather than letters). The voice prompt can then call this method to find out the hand, finger, and reach direction required to press a letter. So if the user is hesitating on the letter H, the voice calls the keyboard's method, and the keyboard model knows that on Dvorak, the H key is right hand, index finger, no reach. The I key is left hand, index finger, left reach. The return key is right hand, little finger, right reach. The voice can then use that data to fill in the template.
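A minimal sketch of what that position-based lookup could look like. All names here (`KeyboardModel`, `finger_for`) are hypothetical, and the reach convention used (direction relative to the finger's home column, horizontal only) is an assumption, not the final design:

```ruby
# Hypothetical layout-agnostic finger lookup: finger assignment is
# derived from the physical column a key sits in, not from the letter,
# so the same code works for Dvorak, QWERTY, etc.
class KeyboardModel
  # Standard touch-typing finger per physical column; each index
  # finger covers two columns.
  FINGERS = %i[little ring middle index index index index middle ring little].freeze

  # Home column for each column position; the inner columns 4 and 5
  # are reaches from the index fingers' home columns 3 and 6.
  HOME_COLUMN = [0, 1, 2, 3, 3, 6, 6, 7, 8, 9].freeze

  def initialize(rows)
    @rows = rows # e.g. ["aoeuidhtns"] for the Dvorak home row
  end

  # Returns { hand:, finger:, reach: } for a character, or nil if the
  # layout doesn't contain it.
  def finger_for(char)
    @rows.each do |row|
      col = row.index(char)
      next unless col

      col = [col, FINGERS.size - 1].min # overflow columns fall to the little finger
      hand = col < FINGERS.size / 2 ? :left : :right
      home = HOME_COLUMN[col]
      reach = if col == home then :none
              elsif col < home then :left
              else :right
              end
      return { hand: hand, finger: FINGERS[col], reach: reach }
    end
    nil
  end
end

dvorak = KeyboardModel.new(["aoeuidhtns"])
dvorak.finger_for("h") # => { hand: :right, finger: :index, reach: :none }
```

The voice template would then interpolate the returned hand, finger, and reach symbols. A real version would need the full physical rows (with their stagger offsets) and non-letter keys like return, but the column-based assignment stays the same.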