josef-pkt opened 6 years ago
bump
The usual tests for paired observations use the difference directly, e.g. just a one-sample t-test on the differences between paired observations.
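For instance, a minimal check with simulated data that the paired test is exactly the one-sample test on the differences (`ttest_rel` and `ttest_1samp` are scipy functions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1234)
n = 50
x1 = rng.normal(0, 1, n)
x2 = 0.6 * x1 + rng.normal(0.2, 1, n)  # correlated with x1

# paired t-test == one-sample t-test on the differences
d = x2 - x1
t_diff, p_diff = stats.ttest_1samp(d, 0)
t_rel, p_rel = stats.ttest_rel(x2, x1)
assert np.allclose([t_diff, p_diff], [t_rel, p_rel])
```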
For trimmed or winsorized mean comparisons, Wilcox uses a test with explicit correlation, e.g. p. 196 in Wilcox, Rand R. 2012. Introduction to Robust Estimation and Hypothesis Testing. 3rd ed. Statistical Modeling and Decision Science. Amsterdam; Boston: Academic Press. This is used in PR #6526.
(I never realized this before.) This is essentially the same difference as between a mixed/random effects model and a fixed effects model that differences out the individual heterogeneity in panel data.
I guess we can add a paired t_test with two methods: differencing and estimating the correlation.
I don't know what the practical difference in the results is.
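At least for the plain (untrimmed) mean there should be no difference, since var(x2 - x1) = var(x1) + var(x2) - 2 cov(x1, x2) holds exactly in the sample. A quick check with simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
x1 = rng.normal(0, 1, n)
x2 = 0.5 * x1 + rng.normal(0.3, 1, n)

# method 1: difference the paired observations
d = x2 - x1
se_diff = d.std(ddof=1) / np.sqrt(n)

# method 2: use the estimated covariance of the two samples explicitly
cov = np.cov(x1, x2, ddof=1)
var_mean = (cov[0, 0] + cov[1, 1] - 2 * cov[0, 1]) / n
se_corr = np.sqrt(var_mean)

# algebraically identical for the plain mean
assert np.allclose(se_diff, se_corr)
```

For trimmed means the two methods will in general differ, which is presumably why Wilcox uses the explicit-correlation version.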
Related: we have the "conditional logit" version of comparing paired proportions, the McNemar test.
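That one is already available in statsmodels, e.g. (the 2x2 counts here are made up):

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# 2x2 table of paired binary outcomes:
# rows = outcome under condition 1, columns = outcome under condition 2
table = np.array([[101, 121],
                  [59, 33]])

# chi-square version with continuity correction
result = mcnemar(table, exact=False, correction=True)
print(result.statistic, result.pvalue)
```

Only the discordant cells (121 and 59) enter the statistic, which is the conditioning that removes the pair-specific nuisance parameters.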
For probit, there also exists the full MLE for bivariate, correlated random variables.
Poisson (and other LEF with canonical link) also allow conditioning to remove nuisance parameters, but I don't remember having seen a paired test for that.
just a thought related to similar issues
Maybe using correlation in paired samples corresponds to cluster-robust standard errors, similar to how heteroscedasticity-robust HC2 corresponds to the unequal-variance 2-sample t-test and to Welch ANOVA without the Welch correction. https://github.com/statsmodels/statsmodels/pull/6526#issuecomment-589849538
It's not clear to me when we need GLS/random effects and similar for the mean/effect estimate itself, and when we just need to adjust the variance.
If we use correlation, then it might be easier to work with contrasts/constraints and a covariance matrix, similar to cov_params in Results.t_test for models.
We have standalone ztest and ttest for independent and paired samples, but we don't have a standalone function for correlated moments. This is similar to t_test or wald_test in models, where we have the full covariance matrix to test correlated parameters.
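A hypothetical standalone helper along those lines, taking the estimates and their full covariance matrix (name and signature are just a sketch):

```python
import numpy as np
from scipy import stats

def ztest_contrast(estimates, cov, contrast, value=0.0):
    """z-test for contrast @ estimates == value, given full covariance.

    hypothetical helper, analogous to Results.t_test / wald_test in models
    """
    estimates = np.asarray(estimates, dtype=float)
    contrast = np.asarray(contrast, dtype=float)
    effect = contrast @ estimates - value
    # variance of the linear combination uses the off-diagonal covariance
    se = np.sqrt(contrast @ np.asarray(cov, dtype=float) @ contrast)
    z = effect / se
    pvalue = 2 * stats.norm.sf(abs(z))
    return z, pvalue

# example: compare two correlated estimates
est = np.array([1.2, 1.0])
cov = np.array([[0.04, 0.015],
                [0.015, 0.05]])
z, p = ztest_contrast(est, cov, [1, -1])
```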
Example: testing or comparing two correlations with one variable in common; the two correlation coefficients are then correlated.
A simple function based on summary statistics that can be extended to one-sided alternatives and to a full inference mixin. (Written for a test case on testing correlations.)
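A hedged sketch of such a summary-statistic function (the name and the `alternative` keyword follow the existing statsmodels convention, but the function itself is hypothetical):

```python
import numpy as np
from scipy import stats

def ztest_generic(diff, std_diff, alternative="two-sided"):
    """z-test from summary statistics: an estimate and its standard error.

    hypothetical standalone helper; could back tests for correlated
    statistics such as two overlapping correlation coefficients
    """
    z = diff / std_diff
    if alternative == "two-sided":
        pvalue = 2 * stats.norm.sf(np.abs(z))
    elif alternative == "larger":
        pvalue = stats.norm.sf(z)
    elif alternative == "smaller":
        pvalue = stats.norm.cdf(z)
    else:
        raise ValueError("invalid alternative")
    return z, pvalue
```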