JeanneCaronGuyon opened this issue 3 years ago
Pragmatic answer: start with 1,000 permutations and see how long it takes; multiply that time by 10 and see if you can afford it. If yes, go for 10,000!
Methodological answer: the number of permutations N determines how precisely you estimate the null distribution (i.e., how many points you have available to build the empirical null histogram); that is why it needs to be fairly high. It also sets the smallest p-value your test can report: with N = 100 permutations you can say "p < 0.01" and nothing smaller; with N = 1,000 it is "p < 0.001", and so on.
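To make this concrete (this is an illustrative NumPy sketch, not SnPM's implementation), here is a minimal one-sample sign-flip permutation test. Note how the smallest attainable p-value is bounded by the number of permutations:

```python
import numpy as np

rng = np.random.default_rng(0)

def sign_flip_p(values, n_perm=1000):
    """Empirical one-sample p-value via random sign-flipping.

    The returned p-value uses the standard (b + 1) / (n_perm + 1)
    estimator, so it can never be smaller than 1 / (n_perm + 1):
    n_perm directly bounds the resolution of the test.
    """
    observed = values.mean()
    null = np.empty(n_perm)
    for i in range(n_perm):
        # randomly flip the sign of each subject's value
        signs = rng.choice([-1.0, 1.0], size=values.size)
        null[i] = (values * signs).mean()
    # count null statistics at least as extreme as the observed mean
    b = np.sum(null >= observed)
    return (b + 1) / (n_perm + 1)

# hypothetical data: 20 subjects with a true positive effect
data = rng.normal(loc=0.5, scale=1.0, size=20)
p = sign_flip_p(data, n_perm=1000)
print(p)  # never smaller than 1/1001, however strong the effect
```

With n_perm = 1,000 the floor is 1/1001 ≈ 0.001; with 10,000 it drops to roughly 0.0001, which matters if you need very small p-values (e.g. after multiple-comparison correction).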
Thank you for your answer! I knew about the methodological interest of increasing the number of permutations, but I wondered whether there was some way to determine an "ideal" number, while accounting for the fact that we cannot always go up to the theoretically possible number of permutations. But I guess it is more an empirical, "what one can afford" kind of choice, thanks! As I have tried both 1,000 and 10,000 and the runtimes seemed reasonable, I will stick to 10,000 if possible.
As a group of 20 subjects gives a maximum of 2^20 permutations (a little over a million), and since 10,000 permutations run relatively fast, I would still increase the number of permutations up to 100,000. Note: in the example given in the SnPM documentation (https://warwick.ac.uk/fac/sci/statistics/staff/academic-research/nichols/software/snpm/man/exnew), they use the maximum number of permutations (12 subjects: 2^12 = 4,096 permutations).
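The counts above follow from the fact that, for a one-sample test, each subject's value can independently flip sign, so n subjects give 2^n distinct relabelings. A quick sanity check (hypothetical helper, not part of SnPM):

```python
def max_sign_flips(n_subjects: int) -> int:
    """Maximum number of sign-flip permutations for a one-sample test.

    Each of the n subjects can independently keep or flip its sign,
    giving 2**n distinct relabelings (including the identity).
    """
    return 2 ** n_subjects

print(max_sign_flips(12))  # 4096, as in the SnPM documentation example
print(max_sign_flips(20))  # 1048576, a little over a million
```

When 2^n is small (say, below ~10,000), running all of them is an exact test; otherwise you fall back to a random subset, which is where the choice of 10,000 vs 100,000 comes in.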
The 100,000-permutation analysis completed in about 2 hours, so not too bad! But the differences between the results obtained with 10,000 and 100,000 permutations are not obvious.
To me, 100,000 is overkill in most cases ;)
How do we decide a priori on the right number of permutations to perform when analyzing group-level statistics with SnPM? I have 20 subjects and want to perform a one-sample t-test with one scan per subject, where each scan is the accuracy map obtained from a searchlight decoding analysis (MVPA; left-vs-right direction decoding). SnPM recommends 10,000 permutations. What's your take on it?