Closed: JackMostow closed this issue 6 years ago.
Not sure I understand correctly how these new modes are supposed to work. It's useful for me to think in terms of differences vs. existing modes, so please confirm:
HIDE is like HEAR, it just doesn't show the sentence or mark spoken words.
REVEAL is like ECHO, except that it doesn't show the whole sentence; it only shows one word at a time, before the kid says it.
PARROT is like ECHO but reads the sentence before the kid instead of after the kid; that's already confirmed above.
Thanks for checking!
HIDE: yes.
REVEAL: no. It shows nothing at first, then adds (i.e. reveals) each word after the kid says it or taps.
PARROT: yes. Why did you add "that's already confirmed above"? I don't understand its intent.
The comment on PARROT was just pointing to the "(like ECHO but reads sentence before kid instead of after kid)" remark in your original post, so that was already confirmed; I just wanted to have them all in one place.
Ah! Thanks. – Jack
Jack - can we use any of these for listening comprehension? HIDE will hide the sentence, yes... but how do we listen for the answer?
I wish! Eventually I want to embed comprehension questions in stories as depicted in our June pitch to XPRIZE. HIDE could embed generic comprehension questions (e.g. “Who is this story about?”) if we had a way to wait for an answer before proceeding. We could embed a comprehension question at the end of the story, but it would be immediately followed by “What did you think of that story?” in the rating menu that you love so much ;-). – Jack
Actually, I just thought of a simple way to insert a comprehension question and pause for a fixed duration anywhere in a HIDE story. Just insert the question in the story text, followed by a pseudo-word PAUSE_FOR_ANSWER whose recorded “narration” is a [5] second silence.
We already have narrations of the generic questions in the QUESTIONS tab of Swahili translations, e.g. “Who does this talk about?” and “Where does this take place?” We’d just have to insert them (minus the associated multiple choices) in a copy of the story text, followed by PAUSE_FOR_ANSWER, and copy the narration into the audio assets folder for the story to make sure RoboTutor can find it.
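To make this concrete, a rough sketch of what such a page might look like in storydata.json (the page-object fields shown here are simplified assumptions for illustration, not the actual schema):

{
  "text": "Who is this story about? PAUSE_FOR_ANSWER"
}

with the question's narration and a 5-second silent PAUSE_FOR_ANSWER recording copied into the story's audio assets folder so RoboTutor can find them.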
I’d love to kid-test such questions next week to include in code drop 1. Who can do it? – Jack
I added this to the code-drop 2 stack. Judith confirmed that her undergrads are maxed out for code-drop 1.
-Sarah
Judith - RoboTutor doesn't need to listen for answers to generic questions because it wouldn't understand them anyway. Their purpose is to assist comprehension, not assess it. So a 5-second pause would be ok, other than interrupting kids who speak longer.
This took longer than expected, but I think it's fine now. Prompts need some more work. For PARROT I added a prompt when switching from reading to listening to the student; it seemed weird to just wait without saying anything. Now it's just the usual 'Please read aloud', maybe not the best. Also, in REVEAL it doesn't make much sense to say 'Please read aloud', as they won't see anything.
Octav – Thanks for pointing it out! “Repeat after me” might suffice for PARROT, but REVEAL requires a text-specific prompt provided in the data source. Are any of these options feasible and worthwhile for code drop 1? If not, let’s choose the best for code drop 2:
a. Story-specific: Specify an initial prompt at the start of storydata.json.
+: easiest to implement
-: narrow solution
b. Hierarchically nested: Allow, e.g., "mode":"REVEAL" at the start of any unit – story, page, sentence, or (if feasible) utterance. It would apply to the entire unit but could be overridden at subunits (sketched after this list).
+: modular – could insert a moded unit into another story without affecting the rest of it
c. Variable assignment: Allow, e.g. "set_mode":"REVEAL" in storydata.json anywhere that’s easy to parse, e.g. probably not in mid-utterance. The new mode would remain in force until or unless changed by a subsequent set_mode.
+: easier to implement?
d. Heterogeneous data sources: Allow a data source to specify a sequence of different activity types, like the scripting language in Project LISTEN’s Reading Tutor, which specified an activity as a sequence of a few types of steps (e.g. oral reading, menu selection, writing, free-form spoken input), specified for each prompt whether or not it was displayed and whether or not it was spoken, and allowed steps and prompts to include variables bound by assignment statements or database queries
+: more powerful – e.g. allow insertion of BubblePop comprehension questions within a story
-: hardest to implement
e. Split screen: Allow different activities on different parts of the screen, not necessarily active simultaneously, rather than requiring each activity to occupy the entire screen. This capability proved necessary in Project LISTEN’s Reading Tutor in order to display story text in the top half of the screen, e.g. paragraph ending with a cloze question (“Elephant went down to the ____”), and a multiple choice menu of candidate completions in the bottom half (“school” “river” “store” “going”).
+: most powerful, and provides the most flexibility, e.g. allow written input in mid-story
-: hardest to implement if it requires generalizing every activity type to fit in a portion of the screen
Thanks. – Jack
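For concreteness, hypothetical storydata.json fragments sketching options b and c; the attribute names and nesting are illustrative assumptions, not the current format:

Option b (nested mode, overridable at subunits):
{
  "mode": "ECHO",
  "data": [
    { "text": "Elephant went down to the river." },
    { "mode": "REVEAL", "text": "1 2 3 4 5" }
  ]
}

Option c (mode assignment, in force until changed):
{
  "data": [
    { "set_mode": "ECHO" },
    { "text": "Elephant went down to the river." },
    { "set_mode": "REVEAL" },
    { "text": "1 2 3 4 5" }
  ]
}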
For REVEAL mode - the picture will tell the kid what they are supposed to say... they will see the number, word, or syllable in the picture for the story.
Octav, can you please investigate a way to increase the font size in stories? There is way too much white space and the font size is too small for beginning readers.
Judith and Octav – The animator graph for story activities (https://github.com/RoboTutorLLC/RoboTutor/blob/master/app/src/main/assets/tutors/story_reading/animator_graph.json) specifies the INTRO prompt for each mode:
{"type": "AUDIO", "command": "PLAY", "soundsource": "Please read aloud.mp3", "mode":"flow","features": "FTR_USER_READ"},
{"type": "AUDIO", "command": "PLAY", "soundsource": "Please read aloud.mp3", "mode":"flow","features": "FTR_USER_ECHO"},
{"type": "AUDIO", "command": "PLAY", "soundsource": "Now lets listen to a story.mp3", "mode":"flow","features": "FTR_USER_HEAR"}
]
Can there be a "What number/letter/word is this?" prompt in REVEAL mode? The word/numeral will be shown in the picture.
Judith – Yes there can. Swahili translations (https://docs.google.com/spreadsheets/d/11feAhhQqrpJC2waSpOkReMiG3SwmNOofrkir3JljmVM/edit#gid=1303048862) lists several such prompts already narrated (indicated by boldface) in various tabs (see below). However, compared to regular READ or ECHO, what, if any, value does REVEAL add when the task is to read a character or word already displayed in an illustration? The purpose of REVEAL is to present tasks that require saying a character or word before it appears, in particular:
a. Say the number of dots: see OOO, say “3”; then RoboTutor echoes “3”.
b. Say the value of an arithmetic expression: see 2+2, say “4”; then RoboTutor echoes “4”. <-- a way to provide arithmetic practice with spoken responses and feedback
For this category of task, prompt once for each illustration, i.e. at the start of each page, e.g. “How many things are here?” This prompt might have to be added to both PLAYINTRO and PAGEFLIP.
a. Say the next item
i. after A, say “B”; tutor echoes “B”.
ii. after 1, say “2”; tutor echoes “2”.
b. Say the next few items
i. After A B C, say “D E F”; each item appears as RoboTutor hears it; then RoboTutor echoes “D E F” (singing if it’s an alphabet song).
ii. After 1 2 3, say “4 5 6”; each item appears as RoboTutor hears it; then RoboTutor echoes “4 5 6” (singing if it’s a counting song).
c. Say the entire sequence
i. After the prompt “Recite the alphabet from A to Z”, say “A B C D E F G H I J K L M N O P R S T U V W Y Z”; then RoboTutor echoes the entire sequence.
ii. After the prompt “Count from 1 to 10”, say “1 2 3 4 5 6 7 8 9 10”; then RoboTutor echoes the entire sequence.
iii. After the prompt “Count down from 10 to 0”, say “10 9 8 7 6 5 4 3 2 1 0”; then RoboTutor echoes the entire sequence.
iv. After the prompt “Count from 5 to 50 by 5’s”, say “5 10 15 20 25 30 35 40 45 50”; then RoboTutor echoes the entire sequence.
For this category of task, prompt once at the start of the story, in PLAYINTRO. Which task(s) should code drop 1 include? Thanks. – Jack
The WORD READING tab includes:
Now you will practice speaking some numbers.
Sasa utafanya mazoezi ya kusema nambari.
When you see a number say it aloud.
Ukiona nambari sema kwa sauti.
Please say this number aloud.
Tafadhali sema nambari hii kwa sauti.
What is this number?
Hii ni nambari gani?
Now you will practice reading some words.
Sasa utafanya mazoezi ya kusema neno
When you see a word say it aloud.
Ukiona neno sema kwa sauti.
Please say this word aloud.
Tafadhali sema neno hili kwa sauti.
What is this word?
Hili ni neno gani?
This word is ___
Neno hili ni
Now you will practice reading some letters.
Sasa utafanya mazoezi ya kusema herufi
When you see a letter say it aloud.
Ukiona herufi sema kwa sauti.
Please say this letter aloud.
Tafadhali sema herufi hii kwa sauti.
What is this letter?
Hii ni herufi gani?
This letter is ___
Herufi hii ni
When you see a number read it aloud.
Ukiona nambari soma kwa sauti.
Please read this number aloud.
Tafadhali soma nambari hii kwa sauti.
When you see a word read it aloud.
Ukiona neno soma kwa sauti.
Please read this word aloud.
Tafadhali soma neno hili kwa sauti.
When you see a letter read it aloud.
Ukiona herufi soma kwa sauti.
Please read this letter aloud.
Tafadhali soma herufi hii kwa sauti.
The ARITHMETIC tab includes:
How many things are here?
Ni vitu vingapi viko hapa?
How many things are here?
Ni vitu vingapi vipo hapa?
How many things?
Vitu vingapi?
How many is
Ni ngapi?
The COUNTING tab includes:
Let's count.
Hebu tuhesabu.
Please count out loud with me.
Tafadhali hesabu kwa sauti pamoja na mimi.
Let's count all the way from 0 to 20!
Hebu tuhesabu kutoka 0 hadi 20!
Let's count up to _.
Hebu tuhesabu hadi _.
Let's count down to _.
Hebu tuhesabu kuenda chini hadi _.
Counting by 10's to 100
Kuhesabu kwa 10 hadi 100
Counting by 5's to 50
Kuhesabu kwa 5 hadi 50
Counting by 100's to 1000
Kuhesabu kwa 100 hadi 1000
and count them one by one
Na uhesabu moja moja
Do you want to use REVEAL in code drop 1? If so, I see few short-term options simpler than option a. Octav – Do you see any better options?
1. Status quo: No prompt for REVEAL mode.
+: no work
-: confusing: relies on kid to figure out what to do.
2. Single prompt for REVEAL mode.
+: easy – add a narration and FTR_USER_REVEAL case for it to the animator graph and Java component
?: What, if any, single prompt would apply to all REVEAL activities?
3. Use REVEAL mode for just one category of activities that can share the same prompt, e.g. “Say how many.”
+: appropriate prompt
+: as easy as option 2
-: restricts the set of REVEAL activities
4. Define activity-specific prompts in the animator graph, e.g. “How many?” [in the picture] or “Please count out loud from 1 to 10” or “Please recite the alphabet”.
+: appropriate prompt
-: need to add a feature for each prompt
Any other ideas? What do you recommend?
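If we go with option 2, the addition to the animator graph would presumably parallel the INTRO lines quoted earlier, plus a FTR_USER_REVEAL feature published from the Java component; the soundsource filename here is just a placeholder for whatever prompt we choose:

{"type": "AUDIO", "command": "PLAY", "soundsource": "Say how many.mp3", "mode":"flow","features": "FTR_USER_REVEAL"}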
Unless I'm misunderstanding something, 2 & 3 above are the same, and I've already implemented it as part of this issue. It just needs audio files for whatever prompt we choose. And it also needs audio files for the 'Repeat after me' prompt you were suggesting for the PARROT mode.
4 doesn't seem difficult to implement; we could have a 'prompt' attribute in the storydata.json file.
Something probably easier but less flexible would be to just publish the 'skill' attribute from dev_data.json as a feature (I assume it's available to the tutor somehow) and have 4 lines in the graph choose a prompt based on that. But this would only allow choosing among 4 prompts, definitely not enough for the list of activities in your previous message.
A hierarchical system of prompts (not different modes as you were suggesting before, just prompts) doesn't sound very difficult either, if you think that'd be useful. Not completely sure about this, but it might be worth exploring after we have 4 implemented.
PS If you guys are replying to GitHub issues in email, please don't include the automatic previous email quotations in the message, as they make it very difficult to then read the thread in GitHub. Thanks.
2 & 3 are the same implementationally -- the distinction is just their scope of applicability. A "prompt" attribute in storydata.json would be cleaner than cluttering the animator graph, even though it would be duplicated in every storydata.json output by the same story data source generator. What package should its audio be in? It's tempting to put it in the same folder as the story audio, but that would require replicating the prompt in every story folder.
The two prompts we have now are in RTAsset_Publisher/RTAsset_Audio_EN_Set1/assets/audio/en/cmu/xprize/story_reading/ and the corresponding SW folder. Seems like a good place if the prompts would be reused across tutors.
Good idea. BTW, these comments make so much more sense in the GitHub issue than in the email! ;-)
I've implemented the 'prompt' feature. I haven't put it just in one place at the start of the storydata.json file because the current structure doesn't have any data at that level (except for author). All data is in a 'data' attribute, which is an array with one object per page. So the prompt now is one per page.
One question is what should happen if there's no prompt specified. Right now it's just not saying anything. Maybe it's ok, but we could also make it use a default prompt.
Looks like the previous implementation wasn't doing what was expected above. I have a new one that hopefully comes closer, although I suspect it is still not what is desired.
Now there are two levels where prompts can be specified: a tutor level and a page level.
The prompt at the tutor level is used at the start of the tutor for all modes. If absent it defaults to 'Please read aloud' for READ, ECHO, & REVEAL modes, and to 'Now lets listen to a story' for HEAR, HIDE, & PARROT modes.
The prompt at the page level is only used in PARROT mode for the prompt needed between the reading and listening phases. It defaults to 'Repeat after me' (but we need the files).
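To illustrate the two levels (the field names and whether prompts are given as text or as mp3 filenames are assumptions here, not necessarily the implemented format):

{
  "prompt": "Now lets listen to a story",
  "data": [
    {
      "prompt": "Repeat after me",
      "text": "Elephant went down to the river."
    }
  ]
}

The tutor-level prompt would play once at the start; in PARROT mode the page-level prompt would play between RoboTutor's reading of the page and the kid's turn.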
More in a separate comment.
Comments above suggest we want to also have prompts at the start of each page, at least for REVEAL mode. Not hard to implement, but behavior is still not completely clear. A few questions:
I realized from attempting to process your comments that it's time for me to go home ;-). For example, I'm not sure what you mean about generalizing across all modes.
But in general I much prefer specifying prompts in data sources than having to wade through component code, let alone modify it. Tracing the path from the animator graph audio output for arithmetic expressions all the way back through the Java component code upstream to the data source, with a change of variable name every step of the way like a change of disguise in an attempt to escape undetected, was quite an ordeal.
The only drawback is the current inability to specify in the data source which sound package to use.
BTW, an even more flexible mechanism would allow an arbitrary-length list of audios to play at runtime, to spare us from having to preconcatenate them off-line.
A few clarifications:
Octav - Sounds great! What do you mean by the "listening phase in PARROT mode"? The page or story prompt for PARROT should be something like "Repeat after me" before RoboTutor speaks the text to repeat. Were you thinking RoboTutor should speak the text and then prompt "Now YOU say it"?
If we're in code drop 2 territory now, there's a tradeoff between the convenience of specifying mode-specific but story-independent prompts in the animator graph versus the flexibility of mixing different modes and prompts in storydata.json, which we used in Project LISTEN's Reading Tutor to script lots of different activities. In fact we used sequences of mode-specific prompts across different step types (analogous to different activity types in RoboTutor) to indicate whether to show text, whether to speak it, and whether to listen to the kid read it. I expect to need that flexibility in RoboTutor, as well as some (likely limited) ability to mix different step types (e.g. READ, WRITE, COUNT, even BubblePop) on the same screen.
Right, so the prompt that you're saying should be 'Repeat after me' can be customized by specifying a prompt attribute at the page level. And it defaults to 'Repeat after me' if no page prompt is specified.
So even if we don't want to customize it, I need the mp3 files that say 'Repeat after me' for English & Swahili to add them to assets.
I added the 'Repeat after me' mp3 files to the asset repository.
I'm closing this issue because the code drop 1 functionality is implemented. I'll start a new, more ambitious issue for code drop 2.
Add modes (per story for now, per sentence eventually) for when to display and read sentence:
Currently:
HEAR shows sentence throughout, marks each word as it says it
ECHO shows sentence throughout, credits each word as kid says it or taps it, then reads sentence aloud, marking each word as it says it
READ shows sentence throughout, credits each word as kid says it or taps
Add:
HIDE hides sentence throughout, reads it aloud
REVEAL hides sentence at first, shows each word as kid says it or taps (e.g. for oral counting, number speaking, reciting alphabet), then reads sentence aloud, marking each word as it says it
PARROT shows sentence throughout, reads it first, marks each word as it says it, and then credits words as kid says it or taps (like ECHO but reads sentence before kid instead of after kid)
Example applications:
HIDE for listening comprehension questions with just pictures
PARROT to scaffold (short) sequence speaking
REVEAL for Number Identification: count up and down, speak numbers, and recite number sequences
REVEAL for spoken Missing Number activity: say the missing number
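As a sketch of the Number Identification application (hypothetical attribute names and placement, for illustration only), a REVEAL counting story might look like:

{
  "mode": "REVEAL",
  "prompt": "Let's count all the way from 0 to 20!",
  "data": [
    { "text": "0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20" }
  ]
}

Each number would appear as the kid says or taps it, and then RoboTutor would read (or sing) the whole sequence aloud, per the REVEAL definition above.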