81N55E opened this issue 7 years ago
Hey!
1) Difficult question. Power analyses essentially work backwards from a traditional statistical test to arrive at the minimum N. However, with non-parametric tests (which, at the statistical level, the TFCE approach is), this is more difficult and many more assumptions need to be made, since these approaches use all the data points (and not just the mean and standard deviation, as parametric tests do).
For example, see this link for the sort of discussion and nuance necessary when dealing with power calculations for the more straightforward tests.
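To make the "working backwards" concrete for the straightforward parametric case, here is a minimal sketch in Python using statsmodels (purely an illustration, not part of the toolbox; the effect size, alpha and power values are placeholders you would have to justify yourself):

```python
# Minimal sketch of an a-priori power calculation for the parametric case
# (illustration only; the assumed effect size, alpha and power are placeholders).
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,          # assumed Cohen's d for the group difference
    alpha=0.05,               # type-I error rate
    power=0.80,               # desired power
    ratio=1.0,                # equal group sizes
    alternative='two-sided',
)
print(f"Required N per group: {n_per_group:.1f}")   # ~64 per group with these inputs
```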
Of course the TFCE approach complicates this sort of picture even further since you would also have to make assumptions about the level of neighbourhood support you expect to see in your data... which I'd be super skeptical you could make with any degree of certainty.
So ultimately, your guesses as to what your data will look like will have a huge error in them. Any upper and lower bounds you might set on these with a reasonable degree of certainty would probably leave you with N estimates of, say, "you need either 5 participants... or 400 for this approach to be useful"... and so would be useless.
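For what it's worth, the usual fallback for non-parametric tests is simulation-based power estimation: generate data under an assumed effect, run the test many times, and count how often it comes out significant. Below is a minimal sketch of that idea using a plain permutation test on the mean difference (every number is a hypothetical placeholder; for TFCE you would have to replace the inner test with the full channel x time permutation pipeline, which is exactly where those hard-to-justify assumptions about the data come in):

```python
# Sketch of simulation-based power estimation for a non-parametric test.
# Every number here (true difference, SD, N, iteration counts) is a hypothetical placeholder.
import numpy as np

rng = np.random.default_rng(42)

def perm_test(x, y, n_perm=1000):
    """Two-sided permutation test on the difference of group means."""
    observed = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        shuffled = rng.permutation(pooled)
        diff = shuffled[:len(x)].mean() - shuffled[len(x):].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_perm

def estimate_power(n_per_group, true_diff, sd, n_sims=200, alpha=0.05):
    """Proportion of simulated experiments in which the test rejects H0."""
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(true_diff, sd, n_per_group)   # condition A: assumed effect
        b = rng.normal(0.0, sd, n_per_group)         # condition B: null
        if perm_test(a, b) < alpha:
            hits += 1
    return hits / n_sims

# e.g. 20 subjects per group, an assumed 15 uV difference with 20 uV between-subject SD
print(f"Estimated power: {estimate_power(20, 15.0, 20.0):.2f}")
```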
2) You always have the underlying values, so you can still calculate Cohen's d... and this would be accurate, for what it's worth. But in doing so you wouldn't be computing the amount of support for that particular channel/sample, which is one of the motivations behind TFCE in the first place. Short of reinventing the wheel on this topic... I would hope that reporting the actual mean difference and, in these cases, the number of significant channels and/or time points (or frequency bins) will give the reader all the necessary information to decide whether your effect is not only statistically interesting, but whether the actual difference is also of interest.
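For reference, computing Cohen's d and a bootstrap CI from the underlying values at a single channel/time point only takes a few lines. Here is a minimal numpy-only sketch of the independent-samples version (a paired design would use the difference scores instead; all names and numbers are placeholders):

```python
# Sketch: Cohen's d with a bootstrap CI for a single channel/time point.
# cond_a / cond_b stand in for the per-subject values at that point (placeholders).
import numpy as np

rng = np.random.default_rng(0)

def cohens_d(a, b):
    """Independent-samples Cohen's d (pooled SD)."""
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) +
                         (len(b) - 1) * b.var(ddof=1)) / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd

def bootstrap_ci(a, b, n_boot=5000, ci=95):
    """Percentile bootstrap confidence interval for Cohen's d."""
    boots = [cohens_d(rng.choice(a, len(a), replace=True),
                      rng.choice(b, len(b), replace=True))
             for _ in range(n_boot)]
    return np.percentile(boots, [(100 - ci) / 2, 100 - (100 - ci) / 2])

cond_a = rng.normal(15.0, 20.0, 24)   # hypothetical amplitudes, condition A
cond_b = rng.normal(0.0, 20.0, 24)    # hypothetical amplitudes, condition B
lo, hi = bootstrap_ci(cond_a, cond_b)
print(f"d = {cohens_d(cond_a, cond_b):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```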
Not great answers, I know... but I don't have a lot of faith in power calculations... nor in standardised effect sizes, because usually the interpretation of your actual difference depends on other studies in the field and on the consequences of that amount of difference. E.g. if I tell you slow wave amplitude increased by 15 uV in condition B (with a small Cohen's d), you might not know what to make of it. If, on the other hand, you know that total sleep deprivation produces a 10 uV increase, then you'll know that 15 uV is a lot and can make a better judgement.
Thanks for the answer! I always really appreciate your quick and extensive replies!
I agree with what you say - however, I am not sure that trusting prior studies that might be under-powered - so that the "differences" they found are overestimated (or simply do not represent the population) - is something I want to rely on when interpreting my results.
Well, this current reply wasn't so quick...
I sort of agree with your hesitation about previous studies... yet this problem applies to basically all of science, especially neuroscience/psychology. It's always necessary to put your results into the context of previous work, even if that work wasn't great. Any lack of power in previous analyses (or any other problem with their data) can of course also be taken into account, and argued against, in your final interpretation.
Yes, in the end you are absolutely right - we have to put it into context. And reading through our conversation, I think we are on the same page. The whole initiative behind the second part of my question was to actually be able to compare your results with those of previous studies. A p-value cannot do that; an effect size + CI can. So in your example, knowing that previous studies found a 10 uV increase in slow wave amplitude after sleep deprivation can be compiled into an effect size (even if it's small - though relevant). Then you can compare your current study (a 15 uV increase - still a small ES, but in the range of the previous ESs) with the previous ones and put it into context. Something your p-value cannot do (of course your mean difference can do that - but I am not sure people look at that when they are blinded by the p-value).
Long story short - of course I can calculate the ES and CI from the output - I just wondered whether it could be implemented directly (& help me save some time). In addition, it would help shift attention away from the p-value (which I think is very necessary, because that is one of the reasons why science is so flawed).
Hey Armand, sorry - me again. Two questions: (1) A-priori power analysis: Just as an example... in G*Power, for a fixed-effects ANOVA with interactions, an expected effect size of .25, alpha of .05, power of .80 and 2 groups, I get a suggested total sample size of 269. Would the same sample size stand with the TFCE approach? If not, how can I do an a-priori power analysis for TFCE? (2) Effect size: Would it be possible to include the effect size and CI in the results output, as these tell me more about the actual effect than the p-value? Best & thanks!, Mitja