@Eric89GXL I'm wondering whether the benefits are in terms of performance, sensitivity, or both. I'm actually using our cluster permutation tests with a factorial design (repeated measures ANOVA) and performance is certainly not an issue. It would be good to learn if something is wrong with how we do it or could be improved. I'll take a look at the paper.
@Eric89GXL that was an interesting read.
The pitch is, basically, to have an exact permutation test for within-subject designs as a replacement for a classical F-test as used in a repeated measures ANOVA. The trick is to take advantage of Gill's algorithm (which uses a Heaviside function and a Fourier expansion) and orthogonal contrasts to reduce the multi-cell comparison to the two-cell comparison Gill's algorithm was designed for. The authors demonstrate that for a pure repeated measures design the exact significance test is equivalent to an F-test, which is no longer the case once a between-subject contrast is included.
We could use this as an alternative to our repeated measures ANOVA F-test procedure. However, besides the beauty of exact tests, I don't see a clear benefit for our stats functionality, unless the statistic produced by such a test has properties that are helpful for spatio-temporal clustering. Importantly, this is about the stat_fun, not the cluster-permutation stats we do. It did remind me, though, that I wanted to extend our repeated measures ANOVA to deal with more factors, get rid of the MATLAB translation leftovers, and compute the orthogonal contrasts in a more general and Pythonic way. Finally, it answered some of the questions we once had about relating ANOVA and t-tests. A t-contrast equivalent of the interaction term would then be:
```
A1   A1   A2   A2
B1   B2   B1   B2
 1   -1   -1    1
```
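To make that concrete, here is a rough sketch of how such an interaction contrast reduces to a one-sample t-test on per-subject contrast scores (array names and shapes are just illustrative, not from our API):

```python
import numpy as np
from scipy import stats

# Illustrative data only: per-subject cell means of a 2x2 repeated measures
# design, columns ordered A1B1, A1B2, A2B1, A2B2.
n_subjects = 16
cells = np.random.randn(n_subjects, 4)

# Interaction contrast from the table above.
contrast = np.array([1, -1, -1, 1])

# Collapse each subject's four cells into one contrast score, then test the
# scores against zero -- a one-sample t-test on the interaction.
scores = cells @ contrast
t_val, p_val = stats.ttest_1samp(scores, 0.0)
```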
Any thoughts?
I had assumed Gill's algorithm did something to reduce the number of permutations needed -- if that's true, then it could be used to make our permutation code more efficient.
@Eric89GXL Gill's algorithm speeds up classical two-cell permutation stats, e.g., a drug-dose comparison of 200 mg vs 400 mg. Would that be applicable to our cluster permutations?
Should be. If it basically makes it so you can get an exact test for, say, 16 subjects without having to do all 2**15 permutations, then yes, it would speed things up.
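For reference, an exhaustive sign-flip scheme (which is what makes exactness expensive) looks roughly like this in plain NumPy; a toy sketch, not our actual implementation:

```python
import numpy as np
from itertools import product

def exact_sign_flip_pvalue(scores):
    """Exact one-sample permutation test via exhaustive sign flips (toy)."""
    scores = np.asarray(scores, float)
    observed = scores.mean()
    # All 2**n sign assignments -- for 16 subjects that is 65536 statistics
    # (2**15 distinct ones up to the global sign flip), hence the interest
    # in anything that avoids the full enumeration.
    count = total = 0
    for signs in product((1.0, -1.0), repeat=len(scores)):
        count += (np.asarray(signs) * scores).mean() >= observed
        total += 1
    return count / total
```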
@Eric89GXL there is some Fortran code linked in the paper. Unfortunately, the link is broken.
I'll see if I can make sense of the algorithm from the original paper when I get some free time :)
Never mind, can't use Gill's algorithm. It depends on the statistical function being linear (e.g., t-test). Although our initial test is linear, we do clustering on top of that, which is decidedly nonlinear.
Hi, just had to sign up when looking at this thread!
There are several good papers for how to do permutation tests for factorial designs... for example
Essentially there are only exact tests when looking at one factor. When looking at interactions, there is no exact test because this would involve keeping main effects constant (not permuting them), leaving of course no permutations for the interaction either!
However, permuting residuals (as opposed to raw data) in a general linear model is an effective way to test larger factorial models. I recently published some work using TFCE and permutation on a 3x2 mixed-design ANOVA while permuting raw data, which worked out very well.
I have these scripts in MATLAB but I'm happy to share them and try to adapt them for Python!
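For the "permuting residuals" idea, a bare-bones sketch in the spirit of the Freedman-Lane scheme might look like the following; the helper names and the crude F-like statistic are purely illustrative, not taken from the papers above:

```python
import numpy as np

def residual_permutation_pvalue(y, X_full, X_reduced, n_perm=1000, seed=0):
    """Permute residuals of the reduced model to test the extra effect
    in the full model (Freedman-Lane style, illustrative only)."""
    rng = np.random.RandomState(seed)

    def fit(X, y):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return X @ beta

    def effect_stat(y):
        # F-like statistic: relative drop in residual sum of squares when
        # the effect of interest is added to the model.
        rss_full = np.sum((y - fit(X_full, y)) ** 2)
        rss_red = np.sum((y - fit(X_reduced, y)) ** 2)
        return (rss_red - rss_full) / rss_full

    fitted_red = fit(X_reduced, y)
    resid_red = y - fitted_red
    observed = effect_stat(y)

    count = 0
    for _ in range(n_perm):
        # Permute the reduced-model residuals, add back the reduced fit,
        # and recompute the statistic under the permuted data.
        y_perm = fitted_red + rng.permutation(resid_red)
        count += effect_stat(y_perm) >= observed
    return (count + 1) / (n_perm + 1)
```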
@Mensen it would be great to expand our statistics capabilities. We have some ANOVA code currently, but we're held back by the fact that there are many use cases (combinations of N-way designs, independent samples versus repeated measures) that we'd need to cover if we want to go down this path. That being said, it would be great if we could come up with a fairly general framework that would work in a good number of cases, even if it doesn't cover them all.
Currently we have 1- and 2-way repeated measures ANOVA support:
https://github.com/mne-tools/mne-python/blob/master/mne/stats/parametric.py#L102 https://github.com/mne-tools/mne-python/blob/master/mne/stats/parametric.py#L183
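For reference, usage looks roughly like this; I'm writing it with the current f_mway_rm name, so treat the exact function name and signature as an assumption if you're on an older release where the linked one/two-way variants apply:

```python
import numpy as np
from mne.stats import f_mway_rm

n_subjects, n_times = 15, 100
factor_levels = [2, 2]                 # 2x2 repeated measures design
n_conditions = np.prod(factor_levels)  # cells ordered A1B1, A1B2, A2B1, A2B2

# subjects x conditions x observations (e.g., time points)
data = np.random.randn(n_subjects, n_conditions, n_times)
fvals, pvals = f_mway_rm(data, factor_levels, effects='A*B')
```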
A possibility would be:
The second part would be harder, but it might be a great way to go. This would likely require a significant time and effort investment on your part, but it would be great to have. I'd be happy to work with you to get the parts to fit together. Let us know if you have the time and motivation to work on it.
As to the ANOVA, it would be pretty easy to generalize the code in a few lines to support any number of factors. I'm happy to assist with this. The only thing to be added would be a sphericity correction for the F-values.
What about mixed models? (We can add this later if it would be a pain.)
Use case? I haven't seen any efficient mass-univariate implementation so far. Btw, I recently started using t-tests, which, depending on the contrast, can also be used to compute interaction effects with a contrast coding obtained by multiplying the columns / rows of the design matrix that correspond to the main effects:
```
A : -1 -1  1  1
B : -1  1 -1  1
C :  1 -1 -1  1
```
The results are about equivalent while our ANOVA is much slower than our t-test, probably due to the reshaping used in the examples (could be optimized).
C means A x B in the example above
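One way to plug that contrast into the existing clustering machinery would be to collapse each subject's condition data with the interaction contrast and hand the result to a one-sample cluster test; a sketch assuming permutation_cluster_1samp_test and the cell ordering used above:

```python
import numpy as np
from mne.stats import permutation_cluster_1samp_test

n_subjects, n_times = 15, 100
# Per-subject data for the four cells, ordered A1B1, A1B2, A2B1, A2B2.
X = np.random.randn(n_subjects, 4, n_times)

# Interaction contrast C = A x B from the coding above.
c = np.array([1, -1, -1, 1])

# Collapse conditions into one interaction score per subject and time point.
X_interaction = np.einsum('sct,c->st', X, c)

T_obs, clusters, cluster_pv, H0 = permutation_cluster_1samp_test(X_interaction)
```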
@Eric89GXL... Permutation of raw data is perfectly fine in the majority of cases. Compared with permutation of residuals, it is mostly a question of the sensitivity/specificity trade-off in the EEG signal generally, and between finding main effects versus interactions. So if this is the way the code is implemented, it might be worth seeing what the trade-offs and differences are with respect to EEG data at some point, but permutation of raw data is perfectly acceptable.
I've been messing around with mixed-model approaches and find them much more useful for typical experimental designs, especially where data for some conditions are missing or unbalanced (typical with EEG when rejecting bad epochs). BUT, they are much slower since the estimation process is iterative, and therefore probably not the best to use in combination with TFCE and permutation analysis.
I've found that although there are simple shortcuts for estimating models with just two factors (any number of levels), the same sorts of shortcuts don't exist (or I haven't found them) for 3 or more factors. This means that for 3 or more factors the whole model needs to be estimated, making for a much slower function (in my tests), such that TFCE and permutation calculations aren't feasible yet. But if you have code where the addition of a third factor (and hence another two two-way interactions and a three-way interaction) doesn't take much more time, then perfect.
I'm going to close this for now, we can reopen an issue if someone has a specific use case / proposal in the future
This paper talks about some efficient ways to expand permutation tests to factorial designs:
http://link.springer.com/article/10.3758%2FBRM.42.2.366#page-1
I hadn't heard of Gill's algorithm before, but I need to read about it to see if it would be useful.