Closed: Shivangi2208 closed this issue 1 year ago
`p_critical` is the adjusted critical probability. For a single test: (a) the critical probability is usually `alpha = 0.05`, and (b) `p_critical = alpha`. When conducting multiple tests one lowers this threshold to control for false positives across all tests. The difference between `alpha` and `p_critical` is that `alpha` is the false positive rate (across all tests), and `p_critical` is the probability threshold that must be applied to each individual test to control for `alpha` across all tests.
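As a concrete illustration of why the per-test threshold is lowered, here is a minimal Python sketch of the standard Bonferroni arithmetic (plain Python, not spm1d code):

```python
# If each of n independent tests is run at the uncorrected threshold
# alpha, the chance of at least one false positive inflates with n.
# Lowering the per-test threshold to alpha / n (Bonferroni) keeps the
# family-wise rate at or just below alpha.
alpha = 0.05

for n in (1, 2, 3):
    fwer_uncorrected = 1 - (1 - alpha) ** n      # grows with n
    p_critical = alpha / n                       # lowered per-test threshold
    fwer_corrected = 1 - (1 - p_critical) ** n   # stays <= alpha
    print(n, fwer_uncorrected, p_critical, fwer_corrected)
```

For n = 3 tests this gives a per-test threshold of about 0.0167, close to the 0.017 reported later in this thread.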
`p_corrected` is a p-value that has been corrected using the inverse of the operation that was used to calculate `p_critical`. Imagine that there are two tests, that `alpha = 0.05` and `p_critical = 0.025`, and that the uncorrected p-values (i.e., the p-values you obtain if you don't consider multiple tests) are `p = 0.025` for both tests. These p-values should be corrected to `p = 0.05` so that they more accurately represent the probability of obtaining a result like that over two tests. In other words, `p = 0.025` over-represents the effect size and `p = 0.05` accurately represents the effect size in this case.

So you use `p_critical` when conducting the tests and calculating p-values, then you use `p_corrected` to correct the calculated p-values.
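The two-test example above can be reproduced in a few lines (a plain-Python sketch; these helper functions mimic the relations described here and are not spm1d's implementation):

```python
def p_critical_bonf(alpha, n_tests):
    # Lowered per-test threshold (standard Bonferroni form; spm1d's
    # utility of the same name may compute this slightly differently).
    return alpha / n_tests

def p_corrected_bonf(p, n_tests):
    # Inverse operation: inflate an uncorrected p-value, capped at 1.
    return min(1.0, p * n_tests)

alpha, n_tests = 0.05, 2
print(p_critical_bonf(alpha, n_tests))    # 0.025, as in the example
print(p_corrected_bonf(0.025, n_tests))   # 0.05, as in the example
```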
At the moment spm1d unfortunately only supports Bonferroni corrections.
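For context on what an alternative correction looks like (not available in spm1d; a hedged Python illustration of generic textbook formulas): the Šidák threshold is a common alternative to Bonferroni, and for the three-test case both round to about 0.017, matching the value that appears later in this thread.

```python
# Two standard per-test thresholds for n tests at family-wise alpha.
# Illustration only: spm1d itself provides only Bonferroni utilities.
alpha, n = 0.05, 3

bonferroni = alpha / n                # simple Bonferroni
sidak = 1 - (1 - alpha) ** (1 / n)    # Sidak (exact if tests are independent)

print(bonferroni, sidak)   # both approximately 0.017
```

The Šidák threshold is slightly less conservative than Bonferroni, but for small `n` and small `alpha` the two are nearly identical.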
Thank you, Todd. Initially I used the `p_critical` values to perform the analysis, but I did not get any significant results. When I perform the tests using the `p_corrected` values I do get significant results. Please note that `p_critical` is 0.017 and `p_corrected` comes out to be 0.05. I am not sure whether I have understood the usage of both correctly. Kindly guide.
```matlab
%(1) Conduct SPM analysis:
t21 = spm1d.stats.ttest_paired(Q, P);
t32 = spm1d.stats.ttest_paired(R, Q);
t31 = spm1d.stats.ttest_paired(R, P);

% inference:
alpha  = 0.05;
nTests = 3;
p_critical  = spm1d.util.p_critical_bonf(alpha, nTests);
p_corrected = spm1d.util.p_corrected_bonf(p_critical, nTests);
t21i = t21.inference(p_corrected, 'two_tailed', true);
t32i = t32.inference(p_corrected, 'two_tailed', true);
t31i = t31.inference(p_corrected, 'two_tailed', true);
```
Regards, Shivangi
`p_critical` should be passed to the `inference` method. The `inference` procedure calculates various p-values, and these should then be corrected using `spm1d.util.p_corrected_bonf`.
Okay. So if I understand you correctly, the `p_corrected` values are not used in the SPM analyses themselves; they are computed afterwards to update the p-values. The calculations are based on the new alpha value derived from the Bonferroni correction, and this alpha is used to decide whether the null hypothesis is rejected. Once it is rejected, the new p-values are computed using `spm1d.util.p_corrected_bonf`. Have I understood this correctly? Please check:
```matlab
%(1) Conduct SPM analysis:
t21 = spm1d.stats.ttest_paired(Q, P);
t32 = spm1d.stats.ttest_paired(R, Q);
t31 = spm1d.stats.ttest_paired(R, P);

% inference:
alpha  = 0.05;
nTests = 3;
p_critical = spm1d.util.p_critical_bonf(alpha, nTests);
t21i = t21.inference(p_critical, 'two_tailed', true);
t32i = t32.inference(p_critical, 'two_tailed', true);
t31i = t31.inference(p_critical, 'two_tailed', true);

% STEP 3: Compute corrected p-values for one of the tests (t21)
n      = numel(t21i.p);   % number of upcrossings
p_corr = zeros(1, n);     % corrected p-values
for i = 1:n
    p_corr(i) = spm1d.util.p_corrected_bonf(t21i.p(i), nTests);
end
t21i.p = p_corr;

% STEP 4: Plot (with corrected p-values)
t21i.plot();
t21i.plot_threshold_label();
```
which outputs:

```
t21i =
  SPM{t} inference
     z:        [1x101 double]
     df:       [1 9]
     fwhm:     3.1446
     resels:   [1 31.8002]
     alpha:    0.0170
     zstar:    6.1543
     h0reject: 1
     p_set:    2.6379e-13
     p:        [1.3323e-15 3.0818e-06]
```
and the plot looks like this
Regards, Shivangi
Yes, your interpretation and script look fine.
There is a minor exception: `p_critical` pertains to field-level inference and not to cluster-level inference, and your script above applies `p_corrected` to cluster-level p-values. There may be a theoretical problem with applying a field-level correction to cluster-level results. However, I would expect a robust theoretical solution to produce results only negligibly different from those your script produces, so I don't expect any practical or interpretive consequences of this approach.
Regardless, this field-level vs. cluster-level difficulty is one reason spm1d does not implement multiple-comparisons procedures such as, for example, automated post hoc analyses for ANOVA.
Thank you Todd for all your help.
Regards, Shivangi
Hello Todd, sorry to disturb you again, but I have one query regarding using corrections to reduce the chance of false positives in multiple t-tests. Could you kindly explain the difference between `p_critical` and `p_corrected`? When do we use which? Are there any other corrections available in spm1d apart from Bonferroni?
Regards, Shivangi