If you add 'force_iterations', true to the argument list, the error will disappear.
Please see #173, #106, and other issues in this forum for additional details regarding choosing iterations.
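For concreteness, a minimal sketch of that call, assuming paired (n x Q) arrays yA and yB and spm1d's nonparametric paired t-test:

```matlab
% Sketch: suppress the large-iterations error with 'force_iterations', true.
% yA, yB are hypothetical (n x Q) arrays of paired 1D observations.
snpm  = spm1d.stats.nonparam.ttest_paired(yA, yB);
snpmi = snpm.inference(0.05, 'two_tailed', true, ...
    'iterations', -1, 'force_iterations', true);  % -1 = all possible permutations
disp(snpmi)
```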
Hi,
Thanks for the links to the other issues. I have one question pertaining to the number of iterations.
1) From my understanding, increasing the number of iterations improves convergence, because the test is repeated for more random permutations of the dataset. I have attached an image below, where I have conducted a nonparametric paired t-test for 120, 220, 1000, and 10,000 iterations. The SPM{t} results look qualitatively similar, with the minimum attainable p-value changing with the number of iterations (1/iterations). How does this link with convergence? Have the results still not converged?
I get similar SPM{F} values for a one-way repeated-measures ANOVA with the same variations (but it takes ages to run). I would like to understand further how the number of iterations may impact the results.
Thanks
In this case the effect size is very large, so the number of iterations does not qualitatively affect the results. When effect sizes are smaller, you can see very different results for different numbers of iterations and/or random number generator states.
To judge convergence (generally): if changes in the critical threshold (zc) are small with respect to the difference between the maximum absolute test statistic value (z_max) and zc (i.e., if zc changes are small with respect to z_max - zc), then there is likely convergence.
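As a concrete illustration, here is a rough sketch of such a check; it assumes an existing nonparametric SPM object snpm (e.g., from spm1d.stats.nonparam.ttest_paired) whose inference results expose z and zstar fields:

```matlab
% Sketch: track the critical threshold zc as the iteration count increases.
iters = [100 500 1000 5000 10000];
zc    = nan(size(iters));
for i = 1:numel(iters)
    snpmi = snpm.inference(0.05, 'two_tailed', true, 'iterations', iters(i));
    zc(i) = snpmi.zstar;                      % critical threshold for this count
end
zmax = max(abs(snpmi.z));                     % maximum absolute test statistic
dzc  = [NaN, diff(zc)];                       % change in zc between counts
% Convergence is likely when dzc is small relative to (zmax - zc):
disp(table(iters(:), zc(:), dzc(:), 'VariableNames', {'iterations','zc','zc_change'}))
```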
Thank you. Sorry for the delay, but in one of the posts you mentioned that the t-statistic is qualitatively similar to the effect size. I have additionally calculated Cohen's d, but would like to know if there should be a correlation between SPM{t} and Cohen's d? If the value of SPM{t} is larger, would it mean we should get a larger Cohen's d value?
Thanks
Yes, test statistic values and effect sizes are (nonlinearly) correlated. There are many free resources available for considering test statistics vs. effect sizes, including this one at medium.com.
I suggest plotting the Cohen's d trajectory. It will look qualitatively quite similar to the t-statistic trajectory.
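For a paired design the link is exact at each time node: t = d_z * sqrt(n), where d_z = mean(diff) / std(diff) is the paired Cohen's d. A minimal sketch, assuming hypothetical (n x Q) arrays yA and yB of paired 1D observations:

```matlab
% Sketch: compare the paired Cohen's d ("d_z") trajectory to the t trajectory.
diffs = yA - yB;                              % paired differences, one row per subject
n     = size(diffs, 1);
dz    = mean(diffs, 1) ./ std(diffs, 0, 1);   % Cohen's d at each node
t     = dz * sqrt(n);                         % paired t-statistic: t = dz * sqrt(n)
plot(dz); hold on; plot(t); hold off;
legend('Cohen''s d (d_z)', 't-statistic'); xlabel('Time node');
```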
Thank you. I will do that.
Hello Todd,
Thank you. Cohen's d did qualitatively match the t-statistic. I would like to clarify two additional questions:
1) If the Cohen's d value ranges from 0.05 to 0.5, would you use the maximum value and state that the effect size is moderate? Some papers calculate Cohen's d at the point of peak difference, but I have a non-directed analysis. How would you suggest the effect size be reported?
2) In some of the previous tickets, there was a mention of effect size being added to the spm1d code. Has it been added? I am currently using MATLAB's meanEffectSize function to calculate Cohen's d (see the sketch below).
Thanks
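For reference, a rough per-node sketch of that calculation (yA and yB stand in for the two (n x Q) sample arrays; meanEffectSize requires MATLAB R2022b or later):

```matlab
% Sketch: Cohen's d at each node of hypothetical (n x Q) paired arrays yA, yB.
Q = size(yA, 2);
d = nan(1, Q);
for q = 1:Q
    es   = meanEffectSize(yA(:,q), yB(:,q), Effect="cohen", Paired=true);
    d(q) = es.Effect;                         % meanEffectSize returns a small table
end
plot(d); xlabel('Time node'); ylabel('Cohen''s d');
```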
Preliminary effect sizes will be supported in the next major release (version 0.5). I recommend against following standard effect size interpretations like "moderate" because one is much more likely to observe a "moderate" effect when analyzing 1D data.
Thank you. What would you recommend as the best way to state effect sizes for 1D data? Would stating the range make more sense?
Yes, I think the range would be better. An academic paper about this issue, along with interpretation guidelines, is currently being prepared; if review goes smoothly, these should be available some time in 2024.
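To illustrate, a rough sketch of range reporting, assuming d is a Cohen's d trajectory (as above) and snpmi an inference object exposing z and zstar fields:

```matlab
% Sketch: report the Cohen's d range over the significant (suprathreshold) nodes.
sig = abs(snpmi.z) > snpmi.zstar;             % nodes exceeding the critical threshold
fprintf('Cohen''s d over significant nodes: %.2f to %.2f\n', min(d(sig)), max(d(sig)));
```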
Hi Todd,
Thanks. I look forward to that paper. I would like to clarify one question about it, since stating Cohen's d would allow me to complete this project:
1) Since I am performing a paired t-test between two groups containing 501 samples each, and I am trying to determine the effect size of the significant differences: would stating the range of Cohen's d across the time series be better, or would providing a single value of Cohen's d comparing the medians of the two samples (in the specific time range) be better?
Thanks
I can't answer this question because there are no relevant effect size reporting standards of which I am aware.
From #188
For the number of iterations, if I use -1, I get the error:
The total number of iterations (Inf) is very large and may cause computational problems. In the call to "inference"... (1) Set "iterations" to a number between 10 and 10,000, or (2) set "force_iterations" to "true" to enable calculations for iterations > 10,000.
What is the minimum number of iterations necessary and how does it alter the result?
Thanks, Vignesh