I wonder if instead of instrumenting the UtteranceQueue, we should instrument announcers, so that we can emit the text to the data stream?
I felt this way again while working on https://github.com/phetsims/utterance-queue/issues/18, and then again separately while working on https://github.com/phetsims/utterance-queue/issues/13. I think I'll experiment with that.
The announcers seem to be the important piece for knowing what was spoken, as an output, e.g., for the data stream.
The queue may be important for state, if we want to be able to restore the queue to its previous length and order. Right now I feel like this doesn't actually need state support, and is more transitory (like a button highlight). Maybe we won't feel that way for long, though.
@jessegreenberg, presumably we need to figure this out for Greenhouse, right? I'll add some priority to it and see if I can work on it.
I'm a little bit ambivalent on this again, because I feel like UtteranceQueue should be the central location for text to go through, even if it is being processed through announceImmediately. Why do we need to instrument each individual announcer, when we could fulfill our goals by having those same code paths use Utterance.getAlertText()? I would like to discuss this further.
We are going to flip-flop again! After talking with @jessegreenberg, we think that Announcer should extend PhetioObject, and we can instrument that way.
For context, the announcer is the best place for this because there is further logic in the announcer that may keep an announcement from being announced.
Next for this issue, I'm going to instrument voicingManager and ariaHerald (by having Announcer extend PhetioObject, with a default tandem of Tandem.OPTIONAL).
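Here is a minimal sketch of that direction, assuming phet-core's merge and the PhetioObject/Tandem APIs; the option handling and import paths are illustrative, not the final implementation:

```js
// Sketch only: Announcer opts in to PhET-iO by extending PhetioObject,
// defaulting to Tandem.OPTIONAL so uninstrumented usages keep working.
import merge from '../../phet-core/js/merge.js';
import PhetioObject from '../../tandem/js/PhetioObject.js';
import Tandem from '../../tandem/js/Tandem.js';

class Announcer extends PhetioObject {
  constructor( options ) {
    options = merge( {
      // No tandem required; instrumented subtypes (voicingManager, ariaHerald)
      // can pass one in.
      tandem: Tandem.OPTIONAL
    }, options );
    super( options );
  }
}

export default Announcer;
```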
Then we will discuss whether we still want UtteranceQueue to be instrumented from there. I don't really see any value added by instrumenting UtteranceQueue. @jessegreenberg and I spoke yesterday about how the queue is really just a tool for the announcer to use, and not the other way around. So it would make sense for the announcers to be the instrumented pieces.
I could see this working, but I feel like instead we should perhaps just create a single Emitter on Announcer that is instrumented for PhET-iO. I'll try that next and see how I like it.
I spoke with @samreid; we will proceed with this patch over the above:
There was one question in the above patch that we didn't get to. @samreid, is it alright to instrument the voicingUtteranceQueue as a singleton? This will then be added to every sim, even if it doesn't support voicing. Does that seem alright to you? I think it is alright, but I understand if we should discuss this further.
Not sure if this is any different from the most recent one:
> There was one question in the above patch that we didn't get to. @samreid, is it alright to instrument the voicingUtteranceQueue as a singleton? This will then be added to every sim, even if it doesn't support voicing. Does that seem alright to you?
Is there a way to instrument it as a singleton, but leave it uninstrumented for sims that don't support voicing? I'm concerned that, while adding a voicingUtteranceQueue to every sim is among our long-term goals, it seems odd to go through an intermediate phase where some sims have a queue that does nothing; it may seem broken.
@samreid, I think this warrants a sync discussion together. There is nuance and history that I would most prefer to share together. Would you send me a calendar invite?
@zepumph can you please summarize our meeting?
That is a great question! I don't think I can at this point. That said, I just ran into this over in https://github.com/phetsims/utterance-queue/issues/61. I think it would be good to recognize that the purpose of utterance-queue instrumentation was to know what was coming out of the sim from this module. I think that at this point Announcer is much better set up to support that. I recommend removing the UtteranceQueue instrumentation and instrumenting Announcer instead. Then we can instrument the announcementCompleteEmitter, and we will get data-stream output and the ability to add listeners.
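Roughly, that could look like the following sketch; the parameter name, the StringIO type, and the documentation string are my assumptions, not the final API:

```js
// Sketch only: an instrumented Emitter on Announcer, fired when an
// announcement completes, carrying the announced text to the data stream.
import Emitter from '../../axon/js/Emitter.js';
import merge from '../../phet-core/js/merge.js';
import PhetioObject from '../../tandem/js/PhetioObject.js';
import StringIO from '../../tandem/js/types/StringIO.js';
import Tandem from '../../tandem/js/Tandem.js';

class Announcer extends PhetioObject {
  constructor( options ) {
    options = merge( { tandem: Tandem.OPTIONAL }, options );
    super( options );

    // Emits when an announcement has completed, with the announced text.
    this.announcementCompleteEmitter = new Emitter( {
      parameters: [ { name: 'text', phetioType: StringIO } ],
      tandem: options.tandem.createTandem( 'announcementCompleteEmitter' ),
      phetioDocumentation: 'emits the text of each completed announcement'
    } );
  }
}
```

A client could then observe everything announced with a plain listener, e.g. voicingManager.announcementCompleteEmitter.addListener( text => console.log( text ) ).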
I don't think this will be challenging, and it will unblock this issue.
Ok. I was able to get Announcer.announcementCompleteEmitter instrumented, which to me feels much better than instrumenting UtteranceQueue.
This has been annoying me for some time, but I'm glad I waited because all of @jessegreenberg's awesome work with refactoring Announcer has made this tremendously nice and easy.
For review: naturalSelection.global.view.voicingManager.announcementCompleteEmitter. I asked on Slack who might be best to review this, but didn't get a response, so I'll start with @arouinfar for general Studio tree structure and data stream.
To test and explore: voicingManager.announcementCompleteEmitter, and the same with ariaLiveAnnouncer.announcementCompleteEmitter. Run with ?phetioConsoleLog=colorized&phetioEmitHighFrequencyEvents=false&voicingInitiallyEnabled (either standalone or Studio is fine) and search in the console for the output of announcementCompleteEmitter.
Blocking until it gets reviewed.
@zepumph I reviewed the things you listed in RaP and they seem to be as described. I really don't know what a client would want or expect here. One question I had was whether or not the announcementCompleteEmitter should be read-only. It seems like these things should only emit based on the sim state, not be forced by the client.
I don't have the appropriate permissions in this repo so I can't reassign @zepumph.
I marked these as phetioReadOnly:true, very good idea!
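For the record, that amounts to one more option on the Emitter construction sketched earlier (the exact shape here is still my assumption, not a quote of the commit):

```js
// Within the Announcer constructor from the sketch above:
this.announcementCompleteEmitter = new Emitter( {
  parameters: [ { name: 'text', phetioType: StringIO } ],
  tandem: options.tandem.createTandem( 'announcementCompleteEmitter' ),
  phetioDocumentation: 'emits the text of each completed announcement',
  phetioReadOnly: true // clients can add listeners, but cannot force an emit
} );
```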
> I really don't know what a client would want or expect here.
I think the primary goal for me is to have a way to tap into the inputs and outputs of the simulation. We have architecture set up to track all inputs (mouse/touch/keyboard) and many outputs (model changes, view changes, and visual changes, e.g., via a screenshot). This seemed like a vital output that was not being conveyed. I can think of any number of research questions revolving around "what did the sim present to the user?", for which this would be an important piece of the picture.
Following the work done in https://github.com/phetsims/utterance-queue/issues/14, I think this deserves its own side issue. I'll update the TODO to point to this issue.