palday opened this issue 4 years ago
I have a problem with this numerical summary. Most (perhaps all) of the papers do not report the mean and standard error for each electrode. Rather, they report ERPs in a figure whose y-axis ranges from 3 to −3 microvolts (see https://www.mitpressjournals.org/doi/abs/10.1162/089892900562183 for an example). How do I collect the relevant information from each paper? Should I just visually inspect their figures and collect rough estimates of the mean (there are not even CIs)?
Yep, that's fine. If they report statistics anywhere, then you can also use those to get estimates of effect size. (There are formulas for converting e.g. F and t to Cohen's d or eta squared). But just report as much information as possible and if information is missing, then it's missing. That's how it goes. :/ But that will also be a valuable contribution to the literature (and we can include this table in the introduction to our manuscript).
Here is a first attempt of literature review for the bayesian analysis: bayesreview.pdf bayesreview.docx Please let me know if this is okay.
However, while I was reading carefully through these papers I have noticed that all the experiments reporting N5 effects used pre-attentive experimental paradigms in which participants were doing some other task (e.g. watching a movie) in order to avoid any interaction with the P3b component that has been observed for dissonant chords when participants were instructed to pay attention to the music. Moreover, most of the N5 effects reported involved only non-musicians and a claim was made about the fact that this component is not affected by musical expertise.
So, it seems to me that we are proposing something that hasn't been previously tested in the literature (at least not directly). That is, according to previous literature the N5 effect can't be observed when the target note is task-relevant if the subjects are non-musicians, however, we propose that the N5 effect can actually be observed when the target note is task-relevant if the subjects are musicians.
Yep, looks good. Can you also provide the time windows and electrodes, where possible, for the non-significant comparisons? You should be able to edit this comment.
I've used pandoc to convert the docx to Markdown (this only works for relatively simple Word files).
paper | N | expertise | contrast | time window | test | effect size | electrodes |
---|---|---|---|---|---|---|---|
Koelsch et al. (2000) Exp. 3 | 18 | non-musicians | in-key > deviant chords (task-relevant 5th position) | 550–610 ms | 2-factor ANOVA (chord type × lateralization) | eta-squared = 0.196 | frontal (Fz, F7, F3, F4, F8, FT7, FT8) |
Koelsch et al. (2000) Exp. 1 | 18 | non-musicians | in-key > deviant chords (task-irrelevant 5th position) | 550–610 ms | 2-factor ANOVA (chord type × lateralization) | eta-squared = 0.530 | frontal (Fz, F7, F3, F4, F8, FT7, FT8) |
Koelsch, Schroger, Gunther (2002) Exp. 1 | 18 | non-musicians | in-key > deviant chords (task-irrelevant 5th position) | 400–600 ms | 4-factor ANOVA (chord type × position × hemisphere × anterior-posterior) | eta-squared = 0.541 | frontal (Fz, F7, F3, F4, F8, FC3, FC4, C3, Cz, C4) |
Koelsch, Schroger, Gunther (2002) Exp. 2 | 18 | non-musicians | in-key > deviant chords (task-relevant 5th position) | 400–600 ms | 4-factor ANOVA (chord type × position × hemisphere × anterior-posterior) | n.s. | frontal (Fz, F7, F3, F4, F8, FC3, FC4, C3, Cz, C4) |
Steinbeis & Koelsch (2007) | 26 | non-musicians | in-key > deviant chords (task-irrelevant 5th position) | 600–800 ms | 3-factor ANOVA (chord type × hemisphere × anterior-posterior) | eta-squared = 0.266 | frontal (F5, F3, F4, F6, FC3, FC4, FC5, FC6) |
Poulin-Charronnat, Bigand, Koelsch (2006) | 19 vs. 21 | non-musicians vs. musicians | in-key > deviant chords (task-irrelevant 5th position) | 500–700 ms | 5-factor ANOVA (expertise × hemisphere × context × harmonic function × anterior-posterior) | n.s. vs. eta-squared = 0.146 | frontal (F5, F3, F4, F6, FC3, FC4, FC5, FC6) |
Jentschke & Koelsch (2009) | 20 vs. 21 | untrained vs. trained children | in-key > deviant chords (5th position) | 400–800 ms | 4-factor ANOVA (regularity × group × hemisphere × attention) | eta-squared = 0.004 | frontal (F3, F4, F7, F8, FC3, FC4) |
paper | N | expertise | contrast | time window | test | effect size | electrodes |
---|---|---|---|---|---|---|---|
Janata (1995) | 23 | musicians | dissonant > minor > tonic resolution (5th position) | 360–680 ms | repeated-measures ANOVA (main effect of resolution) | eta-squared (amplitude) = 0.213; eta-squared (latency) = 0.216 | central-posterior (C3, C4, Cz, T5, P3, Pz) |
eta-squared = (F × df_effect) / (F × df_effect + df_error)
Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: a practical primer for t-tests and ANOVAs. Frontiers in psychology, 4, 863.
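The conversion above can be sketched as a couple of helper functions (formulas as in Lakens, 2013; the function names and example values are illustrative and not taken from any of the papers in the table):

```python
import math

def partial_eta_squared(F, df_effect, df_error):
    """Partial eta-squared from an F statistic and its degrees of freedom."""
    return (F * df_effect) / (F * df_effect + df_error)

def cohens_dz_from_t(t, n):
    """Cohen's d_z for a paired/one-sample t-test with n subjects."""
    return t / math.sqrt(n)

# hypothetical values for illustration
print(round(partial_eta_squared(4.15, 1, 17), 3))   # -> 0.196
print(round(cohens_dz_from_t(2.0, 16), 2))          # -> 0.5
```

For between-subjects t-tests the d formula differs (it uses both group sizes), so check the appropriate variant in Lakens (2013) before applying it to any given paper.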
Unfortunately, Koelsch, Schroger, Gunther (2002) Exp. 2 did not report the results of the statistical test when deviant tones were task-relevant. In the paper, they say: "no N5 is visible in the ERPs of Neapolitan chords at the fifth position, most presumably due to the overlap of a large P3 component"; however, "At the third position, [...], an N5 is still slightly visible at frontal sites, but not statistically significant".
Anyway, I guess they were using the same electrodes and time-window that they used in Experiment 1. I will change the table accordingly.
Btw, following up on my previous comment, please note that in the contrast column I have specified whether the dissonance was task-relevant or task-irrelevant in the experiment. Most of the studies reporting an N5 used a task-irrelevant paradigm (e.g. detecting a deviant timbre while watching a movie) and obtained larger effect sizes because they avoided any overlap with the P3 component. In contrast, deviant tones were task-relevant in our task because we explicitly asked participants to rate how well the last tone fit the preceding context.
I don't know if there is any way to take that into account in the bayesian analysis. But, at least from a theoretical point of view, that aspect seems important to me.
The possibility of overlap with the P3 was the motivation for using the P3 as a covariate in the N5 model. In language (cf. Alday & Kretzschmar 2019), we can separate overlapping N4 and P3, but the (implicit) task there is usually the opposite of music: detect when the expected rather than the unexpected completion occurs. We might be able to distinguish overlapping components using something like GLM-based deconvolution ("rERP" in the terminology of Smith and Kutas, 2015, but I think the best modern take on it is Ehinger's work on the unfold toolbox).
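As a rough illustration of the time-expansion idea behind rERP/unfold (a toy simulation I've sketched here, not the toolbox's actual API): two overlapping event-related responses are recovered from continuous data by giving each (event type, latency) pair its own regressor and solving least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n, klen = 500, 30                      # samples of "continuous EEG", kernel length
true = {0: np.hanning(klen), 1: -0.5 * np.hanning(klen)}  # ground-truth responses

# random onsets for each of the two event types
onsets = {c: rng.choice(n - klen, 15, replace=False) for c in (0, 1)}

# simulate continuous data: overlapping responses plus noise
y = rng.normal(0, 0.1, n)
for c, ons in onsets.items():
    for o in ons:
        y[o:o + klen] += true[c]

# time-expanded design matrix: one column per (event type, latency)
X = np.zeros((n, 2 * klen))
for c, ons in onsets.items():
    for o in ons:
        for lag in range(klen):
            X[o + lag, c * klen + lag] = 1.0

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
est = beta[:klen], beta[klen:]          # deconvolved response per event type
```

Despite the overlap in the simulated data, `est` should approximate the two true kernels; the real toolboxes add splines, multiple predictors, and regularization on top of this idea.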
I need to re-read the relevant papers to see if that's too much to ask. :/
But, all that said, the N5 has a much more frontal topography than the P3b, so component overlap shouldn't actually destroy the effects. Now, for a P3a, the story might be different....
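A minimal sketch of the P3-as-covariate idea mentioned above, with simulated single-trial amplitudes (all names and numbers here are made up for illustration; the real model would be hierarchical):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200
deviant = rng.integers(0, 2, n_trials)          # 0 = in-key, 1 = deviant
p3 = rng.normal(2.0 + 1.5 * deviant, 1.0)       # simulated P3: larger on deviants
# simulated N5: condition effect of -0.8 plus bleed-through from the P3
n5 = -0.8 * deviant - 0.4 * p3 + rng.normal(0, 0.5, n_trials)

# design matrix: intercept, condition, P3 covariate
X = np.column_stack([np.ones(n_trials), deviant, p3])
beta, *_ = np.linalg.lstsq(X, n5, rcond=None)
# beta[1] is the condition effect on the N5 after adjusting for the P3
```

Without the `p3` column, the condition estimate would absorb the overlap; with it, `beta[1]` should land near the simulated −0.8.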
There are some results that are difficult to interpret on their own. Instead of talking through how they (don't) fit the literature, we could integrate them mathematically into the analysis as Bayesian priors. This would also allow us to formulate our results and their power in terms of Kruschke-esque precision instead of traditional Type I and II error. This also sidesteps the difficulties of design analysis for Type M and S error in a hierarchical framework.
If we want to do this in `brms` for convenience, then #3 becomes moot. But we could do this in Turing or Soss.

@francescomantegna We need a numerical summary of effects from the literature to do this correctly, something like
Without looking at the existing results (which I "fortunately" can't recall offhand at the moment), I'm going to declare the ROPE to be 0.1 µV.
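Assuming the 0.1 µV is the ROPE half-width (i.e. the interval [−0.1, 0.1]), the Kruschke-style decision rule amounts to checking what fraction of the posterior falls inside that region. A sketch with simulated stand-in draws (in practice these would come from the fitted `brms`/Turing/Soss model):

```python
import numpy as np

rng = np.random.default_rng(2)
# stand-in for posterior draws of the condition effect (in microvolts)
posterior = rng.normal(-0.5, 0.15, 4000)

rope = (-0.1, 0.1)
inside = np.mean((posterior > rope[0]) & (posterior < rope[1]))
# `inside` near 0: effect credibly outside the ROPE;
# `inside` near 1: practical equivalence to zero
```

One could additionally compare the ROPE against the 95% HDI rather than the full posterior mass; either way, the 0.1 µV threshold is what turns "significant" into "practically meaningful".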