seanbudd opened this issue 2 years ago
A reasonable default for this might be in the range of 0.5-1 seconds. 0.5 s is the current delay for repetition of input gestures; 1 s is the proposed default delay for delayed phonetic character descriptions.
@seanbudd, I respectfully but strongly disagree with this approach.
IMO, there is no point in unifying parameters that have nothing to do with each other. Different delays need to be configured for different usages. Just to take a few (non-exhaustive) examples:
Here is an example of a user story where a single parameter would lead to conflicting interests: I have a dexterity issue and need to set the double keypress delay to 2 seconds. But I have no hearing or cognitive issue, so I use speech at a high rate; thus I need character descriptions to be reported after only 0.5 seconds.
I have a dexterity issue and need to set the double keypress delay to 2 seconds.
Delayed character descriptions are the same UX case as this. The description is meant not to be spoken when unnecessary - the pause lets the user decide whether they want to hear the description or continue to the next character. The delay before announcing a character description reflects how long a user idles - i.e. wants to wait and hear the description (LATER priority speech, like #12149).
If you have a dexterity issue, and it takes you 2 seconds to press keys, then delayed character descriptions will be announced before you have had a chance to navigate to the next character. This might cause an annoying interruption of speech when navigating character by character.
But I have no hearing or cognitive issue, so I use speech at a high rate;
I would say that hearing and cognitive issues are not handled by the delay, but rather by the speech rate. We do not delay other speech items by a significant independent delay, so a user with hearing or cognitive issues would have the same difficulty hearing any speech in NVDA. This case is handled by using a slower speech rate with longer natural pauses. The BreakCommand implementation of delayed character descriptions includes natural pauses.
We could consider "delayed" (BreakCommand/LATER priority speech) vs "immediate" (EndUtteranceCommand) to handle the case where people always want to hear the description and do not want to wait at all.
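To illustrate the "delayed" vs "immediate" distinction being discussed, here is a minimal sketch. The class names mirror NVDA's `speech.commands.BreakCommand` and `speech.commands.EndUtteranceCommand`, but these are simplified stand-ins, not NVDA's actual speech pipeline:

```python
# Illustrative stubs only: shaped like NVDA's speech command objects,
# but greatly simplified.

class BreakCommand:
    """A pause of `time` ms; speech queued after it can be cancelled
    if the user keeps navigating (the "delayed"/LATER behaviour)."""
    def __init__(self, time: int):
        self.time = time


class EndUtteranceCommand:
    """A natural utterance boundary: what follows is always spoken,
    separated only by a natural pause (the "immediate" behaviour)."""


def delayed_description(char: str, description: str, delay_ms: int = 1000):
    # Moving to the next character before the break elapses cancels the
    # description, so fast character-by-character navigation never hears it.
    return [char, BreakCommand(delay_ms), description]


def immediate_description(char: str, description: str):
    # The description is always spoken after the character.
    return [char, EndUtteranceCommand(), description]
```

The key difference is that the `BreakCommand` variant makes the description skippable by further input, while the `EndUtteranceCommand` variant always reaches the synthesizer.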
Braille flash messages (not listed in this issue) depend on how good you are at reading braille, and maybe also on the number of cells of your braille display. Why is the flash message duration not listed here?
This wasn't considered initially due to lack of awareness. There are probably many other use cases for this option that are not listed - commenting with them here would be helpful.
I would suggest this is not a good use case, for the reason you described: it depends on how long it takes a user to read a line of braille, not how long it takes for them to idle. Calculating a braille user's idle time would instead involve using that parameter in place of waiting for speech to finish.
If there are any other examples of conflicting needs - i.e. where a user may need to set these parameters individually - it would be great to raise them as well.
I understand your point and acknowledge that some similar factors may help determine the delays for double keypress and character description. But they are not all identical, IMO.
Also, if you still think that the same delay should be used for the two features, it would have been better to merge #13550 with a 0.5 s delay in order to align with the existing double keypress delay; this way, alpha testers would immediately test a similar delay. Unless, of course, you intend to raise the double keypress delay to 1 second soon, which would also be satisfactory with regard to the goal of reaching a common value.
For double keypress, the delay is defined by the following criteria:
For character description, but also for aria-live="polite", Windows notifications and object description after properties (#12149), we are talking about a delay after the end of the last speech, in order to separate the additional information about to be spoken from that last speech. This delay depends on the user's ability to separate two pieces of spoken information, and it may also depend on the speech rate; i.e. a user with a high speech rate may be happy with a 0.25 s delay, whereas a user with a low speech rate may find that a 0.25 s delay can still be confused with a simple comma.
Finally, some users may also ask for the character description to be reported immediately when navigating with arrows, either always or in a specific mode such as in JAWS. It's a bit off-topic for this issue, but it may have an impact when (re)defining the options controlling whether and how the character description is reported.
Also, if you still think that the same delay should be used for the two features, it would have been better to merge https://github.com/nvaccess/nvda/pull/13550 with a 0.5 s delay in order to align with the existing double keypress delay; this way, alpha testers would immediately test a similar delay. Unless, of course, you intend to raise the double keypress delay to 1 second soon, which would also be satisfactory with regard to the goal of reaching a common value.
As mentioned earlier:
A reasonable default for this might be in the range of 0.5-1 seconds. 0.5 s is the current delay for repetition of input gestures; 1 s is the proposed default delay for delayed phonetic character descriptions.
In order to get #13550 in quickly, the well-tested UX default of 1 second is used. Without data from users of that add-on we cannot be certain what values are popular, or what a sensible default should be instead. Getting the PR across with a safer default value is preferred to delaying the PR further. In the case of #13550, erring on the high end of the delay range is safer, as the main path of delayed descriptions is not reading them at all. In the case of input gesture repetition, a smaller delay window is safer, as the main path is executing the same gesture twice rather than performing the "double press" version of the gesture.
This delay depends on the user's ability to separate two pieces of spoken information
I disagree. The requirement to separate spoken information is handled by natural pauses scaling with speech rate. The very definition of aria-live="polite" is based on user idle time. The same goes for the concept of "polite speech" in general. A natural pause (e.g. EndUtteranceCommand) should still separate speech.
I don't understand the problem with the use case you've described - i.e. what is the issue with a pause roughly the length of a comma? I've rewritten your use case to reference dexterity as well. Let me know if I am misunderstanding the user story.
Finally, some users may also ask for the character description to be reported immediately when navigating with arrows, either always or in a specific mode such as in JAWS.
I agree that this use case isn't covered here. Reporting character descriptions immediately after the character can be done on a line/word/character basis through the various "spell" commands. If users request an option to always report character descriptions immediately, it can be added as an alternative to "delayed/polite".
@CyrilleB79 - The delay for key repetition does stand out as a unique use case, compared to the other four listed, which all refer to polite speech in some form. I can imagine the need to split it out as a separate setting if a required use case is found.
Hello, as a user I would like to express my opinion on the new delayed phonetic character descriptions function.
First, I am very happy with the original Enhanced Phonetic Reading add-on's default 1-second delay for descriptions, and would not wish for this to be lost. With this value I could check the spelling of a word by going letter by letter, and it would work like this: press right, hear a letter, and if understood straight away I could move on to the next without interference from an unnecessary description. If, however, the letter was not understood, I could simply wait a few moments to hear its description.
With a delay less than 1s, the descriptions might begin to interfere.
For example, if I am checking the spelling of "hello" then with a 1s delay I might hear: h, e, echo, l, l, o. In this example "e" was the only letter that I had difficulty with, and so paused to hear its phonetic description.
But with a delay of less than 1 s I am concerned that I might instead hear: h, hot, e, echo, l, lim, l, lim, o, osc, as the descriptions hotel, lima, oscar etc. arrive at a point where I have already heard the letter, processed it in my brain, understood it, and am now in the middle of moving on to the next character.
Whereas more than 1 s might feel draggy, especially if I wish to check several characters. Of course, other users may have their own sweet spot for when the descriptions arrive at just the right time.
The second thing I would like to say is that my own preference would be for this delay to be set independently from other delays, such as key repetition or hearing the description of a form control.
Basically, I have spent years being used to obtaining certain types of information quickly, and so might want to disable any such new delays by setting their delay to 0.
But then I wouldn't want this to similarly cut down the character description delay, or affect keystroke repetition rates etc. Just my thoughts.
@seanbudd, to consider people with learning difficulties, in addition to your proposal it would be good to also introduce a history dialog where you can see the last low-priority speech events from aria-live polite regions, notifications from other programs, and NVDA update requests. This list could be cleared automatically after restarting NVDA and could cache, say, only the last 20 low-priority speech events or so. When the 20-event limit is reached, the oldest event would be removed and the most recent one added. The idle time for these kinds of low-priority speech events should be adjustable in the object presentation settings.
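The bounded history described above maps naturally onto a fixed-size ring buffer. A minimal sketch (the class and method names are hypothetical, not NVDA code):

```python
from collections import deque

# Hypothetical sketch of the proposed 20-event history: when the cap is
# reached, appending a new event silently drops the oldest one, exactly
# the behaviour described in the comment above.
class LowPrioritySpeechHistory:
    def __init__(self, max_events: int = 20):
        self._events = deque(maxlen=max_events)

    def add(self, source: str, text: str) -> None:
        """Record one low-priority speech event (aria-live, notification...)."""
        self._events.append((source, text))

    def list_events(self):
        """Return events oldest first, as a history dialog would show them."""
        return list(self._events)
```

Since the deque lives in memory only, the list is naturally discarded when NVDA restarts, matching the "cleared after restart" behaviour requested.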
I am of the same opinion as Cyrille. I think the options for changing the delay for double key presses and phonetic character descriptions, as well as for reporting of command keys without being interrupted by other events, should be handled separately in the keyboard settings.
The braille use case could easily be handled in the braille settings with your proposal.
Splitting this proposal across the object presentation, keyboard and braille settings would follow the standard settings categories and would allow users to better understand what each setting actually impacts. It would simply be more transparent.
I don't see how this setting would positively impact #12149. In tables, users sometimes navigate very fast and actually expect to hear the description right after the cell content. If the cell content is very short, the user will press the next key very quickly and NVDA would never become idle unless the timing is adjusted to a very small value. In any case, for #12149 there should be an option to decide in which order things are reported, because the expected order is not the same for every use case.
Is your feature request related to a problem? Please describe.
Users interact with devices at different speeds, especially if they have mobility or cognitive disabilities. Many features are blocked on the concept of waiting a user-configured delay to determine whether Windows/NVDA is idle.
Use-cases:
#8908
#12464
#13550
#13509
#13967
Describe the solution you'd like
An option in the General/Keyboard preferences to determine how long to wait before NVDA is considered idle. Idle: no user input and all normal speech has finished for a (configurable) period of time.
A new speech priority "LATER" to announce speech when NVDA is idle.
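The two proposed pieces - an idle threshold and a "LATER" priority - can be sketched together as a queue that is flushed only once the idle delay has elapsed. This is a hypothetical illustration of the proposal, not NVDA's speech manager; the clock is injectable so the behaviour is testable:

```python
import time

# Hypothetical sketch of the proposal: "LATER" speech is queued and only
# released once no user input or normal speech has occurred for
# `idle_delay` seconds.
class LaterSpeechQueue:
    def __init__(self, idle_delay: float = 1.0, clock=time.monotonic):
        self.idle_delay = idle_delay
        self._clock = clock
        self._last_activity = clock()
        self._pending = []

    def on_activity(self) -> None:
        """Called on every keypress and every normal speech event."""
        self._last_activity = self._clock()

    def queue(self, text: str) -> None:
        """Queue speech at the proposed LATER priority."""
        self._pending.append(text)

    def poll(self):
        """Return queued LATER speech if NVDA has been idle long enough,
        otherwise an empty list (the speech stays queued)."""
        if self._clock() - self._last_activity < self.idle_delay:
            return []
        flushed, self._pending = self._pending, []
        return flushed
```

With this shape, fast navigation keeps resetting `_last_activity`, so delayed character descriptions, polite aria-live text, etc. are only ever spoken once the user genuinely pauses.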
Existing speech levels:
Describe alternatives you've considered
Handling this issue on a per-PR/issue basis, rather than with a generic fix.
Additional context