Closed: jeromyanglim closed this issue 11 years ago.
Equations 3, 4, and 5 in Olkin and Finn (1995) provide the formulas assuming the population correlations $\rho$ are known. In practice it appears common to replace $\rho$ with the sample estimates of the correlations; presumably this substitution becomes more reasonable as the sample size used to estimate the correlations grows.
Let $R$ be a $p \times p$ correlation matrix, which contains $m = p(p-1)/2$ distinct correlations. The covariance matrix $\Omega$ of the sample estimates of these correlations is therefore an $m \times m$ matrix.
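For orientation, the standard first-order asymptotic results that this kind of calculation rests on can be written down directly (this is the usual large-sample approximation, stated in my own notation, not a quotation of Olkin and Finn's equations 3–5):

```latex
% Asymptotic variance of a single sample correlation r_{ij}:
\operatorname{Var}(r_{ij}) \approx \frac{(1 - \rho_{ij}^2)^2}{n}

% Asymptotic covariance of two sample correlations r_{ij} and r_{kl}:
\operatorname{Cov}(r_{ij}, r_{kl}) \approx \frac{1}{n} \Big[
  \tfrac{1}{2}\rho_{ij}\rho_{kl}
    \left(\rho_{ik}^2 + \rho_{il}^2 + \rho_{jk}^2 + \rho_{jl}^2\right)
  + \rho_{ik}\rho_{jl} + \rho_{il}\rho_{jk}
  - \left(\rho_{ij}\rho_{ik}\rho_{il} + \rho_{ij}\rho_{jk}\rho_{jl}
        + \rho_{ik}\rho_{jk}\rho_{kl} + \rho_{il}\rho_{jl}\rho_{kl}\right)
\Big]
```

Setting $k = i$ and $l = j$ in the covariance expression (using $\rho_{ii} = \rho_{jj} = 1$) collapses it to the variance formula, which is a useful consistency check.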
There are several logical ways of extracting the correlations from a $p \times p$ matrix into a vector $\mathbf{r}$: either the lower or the upper triangle can be extracted, and the extraction can proceed row-wise or column-wise.
A general representation in R is a data.frame in which each row represents one correlation, with four variables: `matrix_row`, `matrix_col`, `vector_index`, and `r`.
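As a concrete sketch of that long-format representation (written in Python for illustration; the function name `lower_triangle_long` is my own, and an R data.frame version would be analogous), here is a column-wise extraction of the lower triangle:

```python
def lower_triangle_long(R):
    """Flatten the lower triangle of a p x p correlation matrix R
    (a list of lists) into long format, extracting column-wise.
    Each record holds matrix_row, matrix_col, vector_index, and r,
    with 1-based row/column indices as in the text."""
    p = len(R)
    records = []
    idx = 1
    for col in range(p):                 # column-wise extraction
        for row in range(col + 1, p):    # strictly below the diagonal
            records.append({"matrix_row": row + 1,
                            "matrix_col": col + 1,
                            "vector_index": idx,
                            "r": R[row][col]})
            idx += 1
    return records

# Example: a 3 x 3 correlation matrix gives m = 3 rows (r21, r31, r32)
R = [[1.0, 0.5, 0.4],
     [0.5, 1.0, 0.2],
     [0.4, 0.2, 1.0]]
long_form = lower_triangle_long(R)
```

A row-wise or upper-triangle variant only changes the loop order and the index test; the four-variable record stays the same.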
Olkin, I., & Finn, J. D. (1995). Correlations redux. Psychological Bulletin, 118, 155.
Mike Cheung's R package metaSEM provides the `asyCov` function, which estimates the asymptotic sampling covariance matrix of a correlation/covariance matrix.
For example:
library(metaSEM)
C1 <- matrix(c(1, 0.5, 0.4,
               0.5, 1, 0.2,
               0.4, 0.2, 1), ncol=3)
asyCov(C1, n=100)
This produces the following output:
x2x1 x3x1 x3x2
x2x1 4.545457e-03 2.824226e-09 0.001818184
x3x1 2.824226e-09 6.144200e-03 0.003072097
x3x2 1.818184e-03 3.072097e-03 0.008626946
Computing this asymptotic covariance matrix is a required step in Cheung and Chan (2005); however, they do not specify how to obtain such an estimate.
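Since the estimator is not spelled out there, one straightforward option is to plug the sample correlations into the asymptotic covariance formula directly. Below is a minimal Python sketch (the function name `asy_cov_corr` is my own; note that metaSEM's `asyCov` arrives at its estimate by a different route, so the numbers will not match its output above exactly):

```python
def asy_cov_corr(R, n):
    """Asymptotic covariance matrix of the sample correlations in the
    lower triangle of R (column-wise order), using the standard
    large-sample formula with the population correlations replaced by
    the sample estimates in R."""
    p = len(R)
    # column-wise lower-triangle index pairs (row > col), 0-based
    pairs = [(i, j) for j in range(p) for i in range(j + 1, p)]
    m = len(pairs)
    rr = lambda s, t: R[s][t]
    omega = [[0.0] * m for _ in range(m)]
    for a in range(m):
        for b in range(a, m):
            i, j = pairs[a]
            k, l = pairs[b]
            c = (0.5 * rr(i, j) * rr(k, l)
                 * (rr(i, k)**2 + rr(i, l)**2 + rr(j, k)**2 + rr(j, l)**2)
                 + rr(i, k) * rr(j, l) + rr(i, l) * rr(j, k)
                 - (rr(i, j) * rr(i, k) * rr(i, l)
                    + rr(i, j) * rr(j, k) * rr(j, l)
                    + rr(i, k) * rr(j, k) * rr(k, l)
                    + rr(i, l) * rr(j, l) * rr(k, l))) / n
            omega[a][b] = omega[b][a] = c  # fill both halves (symmetric)
    return omega

R = [[1.0, 0.5, 0.4],
     [0.5, 1.0, 0.2],
     [0.4, 0.2, 1.0]]
Omega = asy_cov_corr(R, 100)
# Each diagonal entry reduces to (1 - r^2)^2 / n,
# e.g. Omega[0][0] = (1 - 0.5^2)^2 / 100 = 0.005625
```

The diagonal entries reproduce the familiar $(1 - r^2)^2 / n$ variance, which provides a quick sanity check on the implementation.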
References:

Cheung, M. W.-L., & Chan, W. (2005). Meta-analytic structural equation modeling: A two-stage approach. Psychological Methods, 10(1), 40–64.