RAP-group / empathy_intonation_perc


R2.3 - recommendations: acoustic description of stim #47

Closed · jvcasillas closed this 2 years ago

jvcasillas commented 2 years ago

This is also mentioned in the more detailed comments, but I would like to see some sort of description of the tunes (at least the nuclear configurations) for each variety and sentence type. At the very least, a basic phonetic description would do (e.g. a rise to a high tone on the nuclear stressed vowel, followed by a fall to a low boundary tone). I think this would make for a better discussion of the findings, and it might also help the authors make sense of the finding about wh-questions and the role of proficiency and empathy for those. Why wh-questions?

Action: acoustic analyses, add descriptions

jvcasillas commented 2 years ago

Pushed to v2 via https://github.com/RAP-group/empathy_intonation_perc/pull/68

jvcasillas commented 2 years ago

I think we need to add some spectrograms to the supplementary materials. Probably an example of each utterance type for each speaker variety (4 x 8 = 32 spectrograms) to go along with the plot we already created. This is easy to do (scripts/praat/6_plot_spectrogram.praat) but we need textgrids. @RobertEspo could you do this? I think you are probably the only person with experience.

Something like this is ideal:

[Image: example spectrogram with a three-tier textgrid annotation]

So three tiers: one interval tier and two point tiers. What do you think? We could just pick one example that is typical of each utterance type for each speaker. In cases where there are multiple patterns, we can just describe them in prose.
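For concreteness, creating that tier structure from a script is basically a one-liner. A minimal sketch (the filename and the tier names "ortho tones breaks" are placeholders, not anything we've settled on):

```praat
# Minimal sketch: a TextGrid with one interval tier ("ortho")
# and two point tiers ("tones", "breaks"); names are placeholders.
sound = Read from file: "data/stimuli/sounds/example.wav"
selectObject: sound
# The second argument lists which of the named tiers are point tiers
To TextGrid: "ortho tones breaks", "tones breaks"
```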

RobertEspo commented 2 years ago

@jvcasillas If someone can provide the spectrograms, or show me how to run the script (I've never run a Praat script before, so if someone has the time and our schedules match up, that would be great), I can make the textgrids. You don't need me to fill out the textgrids though, right? I wouldn't trust myself labeling the contours.

jvcasillas commented 2 years ago

Here is a possible workflow (I'm assuming you know how to create textgrids and annotate them).

If this seems like a lot (it's probably a lot), maybe just do one variety to start and report back on how it goes.
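In case it helps, here's a rough sketch of what that kind of pass could look like as a Praat script. This is illustrative only: the paths and the tier names are assumptions, and pausing once per file is just one way to do it.

```praat
# Rough sketch of an annotation pass: loop over the wavs, open each with an
# empty three-tier TextGrid, annotate by hand, then save the TextGrid.
wavs = Create Strings as file list: "wavs", "data/stimuli/sounds/*.wav"
n = Get number of strings
for i from 1 to n
    selectObject: wavs
    file$ = Get string: i
    sound = Read from file: "data/stimuli/sounds/" + file$
    # One interval tier, two point tiers (placeholder names)
    textgrid = To TextGrid: "ortho tones breaks", "tones breaks"
    selectObject: sound, textgrid
    # Opens the editor window so the tiers can be filled in by hand
    View & Edit
    pauseScript: "Annotate " + file$ + ", then click Continue"
    selectObject: textgrid
    Save as text file: "data/stimuli/textgrids/" + (file$ - ".wav") + ".TextGrid"
    removeObject: sound, textgrid
endfor
removeObject: wavs
```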

RobertEspo commented 2 years ago

That sounds good. I'll try one variety this week and see how it goes. What's the deadline looking like?

The labeling, especially for prenuclear tones, is gonna be really iffy for me; I still don't quite have the hang of it. I know there's an automatic labeling tool out there, but I've never used it.

jvcasillas commented 2 years ago

Just do your best for now and I'll go over it. I just need more hands on deck so that we can get this done. The deadline is November 11th, but I'd like to be done this weekend so that everybody has next week to review/edit.

RobertEspo commented 2 years ago

Sounds good. Send me over the spectrograms and I can get started this week.

jvcasillas commented 2 years ago

All the sound files are in the repo (data/stimuli/sounds/). You can download the whole thing from the main page and then navigate to that folder. From there, all you have to do is open the .wav file in Praat, highlight it, and click "View & Edit" to see the spectrogram. To create the textgrid, highlight the .wav sound object and click "Annotate". Have you done this part before? If not, it might be easier for me to just show you real quick.
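If you'd rather script those same steps than click through the GUI, they map roughly onto the commands below (a sketch; the filename and tier names are placeholders):

```praat
# Scripted equivalent of the GUI steps above
sound = Read from file: "data/stimuli/sounds/example.wav"
# "Annotate" -> "To TextGrid..." in the GUI (placeholder tier names)
textgrid = To TextGrid: "ortho tones breaks", "tones breaks"
# Select the Sound and TextGrid together and open the editor
selectObject: sound, textgrid
View & Edit
```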

RobertEspo commented 2 years ago

Oh duh, yes, sorry! Brain fart. I'll let you know when I've done the first batch.

jvcasillas commented 2 years ago

Sounds good, thanks! Remember, you only need to find one good, clean example for each utterance. In other words, don't segment every wav file! 🤣

RobertEspo commented 2 years ago

I've made textgrids & filled them out for a few utterances, but I'm not finding data/stimuli/textgrids/. (Granted, I also had trouble finding data/stimuli/sounds/; I found the .wav files in empathy_intonation_perc/exp/empathy_intonation_perc/stim/wavs/.)

Referenced this paper btw: https://www.raco.cat/index.php/EFE/article/download/261351/348579/ (Henriksen, N. C., & García Amaya, L. J. (2012). Transcription of intonation of Jerezano Andalusian Spanish. Estudios de fonética experimental, 109–162.)

jvcasillas commented 2 years ago

@RobertEspo I'm pretty sure you are on the wrong branch (my bad, forgot to explain this). You need to switch to jvc-v2-edits and then download to get the most up-to-date version. The easiest way to do this is probably to put the textgrids on your desktop separately, delete everything else, and then redownload the project.

Just use this link, actually. 🤣 https://github.com/RAP-group/empathy_intonation_perc/archive/refs/heads/jvc-v2-edits.zip

Then copy and paste the textgrids you saved into the textgrids folder I mentioned before (using that path).

RobertEspo commented 2 years ago

Okay, I think I did it correctly... haven't used GitHub in a hot minute. I pushed the local changes to the origin. I'm not sure if it'll be immediately obvious which textgrids I've done, so the names are:

Let me know how they're looking; it didn't take nearly as long as I expected to actually annotate. I decided to do two y/n questions because "Ana lleva el abrigo" didn't really line up with what I read in the paper I referenced. The narrow focus statement also sort of confused me: is Ana or abrigo supposed to be the narrow focus? I'm assuming Ana, since that's the contour that "changed" from the declarative (which is cool to see), but it doesn't line up with what I expected from the paper.

jvcasillas commented 2 years ago

Got them. Worked like a charm. "El abrigo" should be focused in the narrow focus statement (¿Qué lleva Ana? was the prompt). This is what the output looks like (if you're interested).

[Image: andalusian_match_declarative-broad-focus_Ana-lleva-el-abrigo textgrid]

Go ahead and keep going if you have time. If possible, try to get the same examples from the other varieties (though for whatever reason they might not end up being the best examples... we'll see). Thank you!

RobertEspo commented 2 years ago

The output looks so clean, love it.

Ah, you know what, I think the issue with the narrow focus is that all these papers use more specific pragmatic contexts (narrow focus correction, narrow focus contradiction), but not narrow focus simply answering an information-seeking wh- question. Weird.

Anyway, I'll be able to do more Thursday at the earliest.

jvcasillas commented 2 years ago

I just noticed the same thing reading through the Henriksen paper. Either our Andalusian speaker is weird (which is what reviewer 3 thinks) or the context matters for eliciting focus. I might send the spectrograms to Henriksen and see what he thinks.

RobertEspo commented 2 years ago

@jvcasillas Just pushed the rest of the textgrids to the origin.

Some things:

jvcasillas commented 2 years ago

Sounds good, Robert. Thanks. I'll generate all the spectrograms and take a look, make adjustments as needed, etc. I appreciate the help.
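For anyone following along, the core of a spectrogram-plotting pass in Praat looks roughly like the sketch below. The actual logic lives in scripts/praat/6_plot_spectrogram.praat; the settings, filename, and viewport sizes here are illustrative, not the script's real values.

```praat
# Illustrative sketch only; see scripts/praat/6_plot_spectrogram.praat
# for the real plotting script.
sound = Read from file: "data/stimuli/sounds/example.wav"
textgrid = Read from file: "data/stimuli/textgrids/example.TextGrid"

# Broadband spectrogram: 5 ms analysis window, 0-5000 Hz
selectObject: sound
spectrogram = To Spectrogram: 0.005, 5000, 0.002, 20, "Gaussian"

# Paint the spectrogram in the top part of the picture window
Erase all
Select outer viewport: 0, 6, 0, 3
selectObject: spectrogram
Paint: 0, 0, 0, 5000, 100, "yes", 50, 6, 0, "no"

# Draw the annotation tiers below it
Select outer viewport: 0, 6, 3, 5
selectObject: textgrid
Draw: 0, 0, "yes", "yes", "yes"
```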

jvcasillas commented 2 years ago

General response to the reviewer on this specific point (included via https://github.com/RAP-group/empathy_intonation_perc/pull/76):

All three reviewers agree on the need for a description of the acoustic stimuli used in the task, and we have included this information in the revised manuscript. Importantly, we do not attempt to ascribe learner difficulty to particular realizations of the four utterance types, for the following reason. It is well attested that there is inherent variability in how a given utterance is realized, and that multiple strategies can co-exist to convey the same meaning in the same context. We observe this variability between varieties of Spanish, within individual speakers of these varieties, and, by design, in our stimuli as well. In other words, the particular tune used in a specific pragmatic context is not always the same, though certain strategies are more common than others. We believe this is a feature, not a bug: our project is concerned with L2 learners' general ability to extract pragmatic meaning from the plethora of tunes available in Spanish, not with the one tune most commonly associated with a specific utterance type.

In a future study that is already planned (though not yet underway), we intend to limit/control the specific tunes presented to learners in order to determine what makes particular tunes more or less difficult. We believe this will also make a sound contribution to the LILt model the reviewer brought to our attention. In sum, as things stand, we cannot make definitive determinations about what it is about specific utterance types that makes them more or less difficult, or more or less likely to correlate with empathy levels. In the revised manuscript we provide plausible explanations for this and set the groundwork for future research.