andkov opened this issue 8 years ago
@andkov
For some reason we report the estimate and standard error in the phys-phys tables, and the estimate and confidence intervals in the phys-cog ones.
While these should communicate essentially the same information, I am finding it difficult to go back and forth between the two types of table. Oddly, the p-values (or the ***) for the correlations and covariances mostly agree in both the P-P and P-C models, and the SE agree with the p-values (an estimate is flagged as significant when it is at least twice its SE). The CI in the grip-cog tables, however, often exclude 0 when the covariance information suggests that they should include it. Without seeing both the SE and the CI side by side, I can't tell where the problem lies: with the CI or with the cognitive models.
Although IJE requests CI, it seems like reporting SE will be less confusing/problematic. On the other hand, before I suggest we do this, I want to be sure that there isn't something else going on.
Could you please:
a) check whether you see the same (to ensure I am not just losing it), and
b) include both SE and CI in the same tables (for P-P and for P-C), so we can confirm whether this is where the inconsistency is showing up.
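The agreement described above, an estimate flagged as significant when it is at least twice its SE, which should match a 95% Wald CI that excludes zero, can be sketched with made-up numbers (this is only an illustration of the rule, not values from the actual tables):

```python
def wald_ci(est, se, z=1.96):
    """Symmetric (Wald) 95% CI implied by an estimate and its standard error."""
    return est - z * se, est + z * se

def flagged_by_se(est, se):
    """The rough rule from the tables: significant when |estimate| >= 2 * SE."""
    return abs(est) >= 2 * se

# Made-up estimate and SE; the two checks should agree.
est, se = 0.30, 0.12
lo, hi = wald_ci(est, se)
print(flagged_by_se(est, se))   # True: 0.30 is 2.5 SEs from zero
print(not (lo <= 0.0 <= hi))    # True: the Wald CI excludes zero
```

When the two checks disagree for a given row, the CI in that table was not computed as estimate ± 1.96 × SE, which is exactly the situation described for the grip-cog tables.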
@ampiccinin
> For some reason we report the estimate and standard error in the phys-phys tables, and the estimate and confidence intervals in the phys-cog ones.
This is because most of the phys-cog models do not have an SE for the correlations (only CIs computed through the Fisher transform). The phys-phys models did not have this problem, which is why we stuck with the estimated SE as the better candidate there. This was meant to change as we got more models with estimated correlations.
> While these should basically communicate the same information, for some reason I am finding it difficult to go back and forth between the two types of table.
I agree, not an ideal solution. However, as more and more phys-cog models are re-run (with estimated correlations), there will be no need for this difference. Again, the only reason phys-cog was treated differently from phys-phys was that very few phys-cog models had an estimated correlation available, so we substituted the CI computed from the Fisher transform.
> include both SE and CI in the same tables (for P-P and for P-C) so we can confirm whether this is where the inconsistency is showing up.
OK. I can add the CI computed from the Fisher transform easily (some earlier versions of the correlation table included them, although they made for a somewhat messy display), but if you mean the CI of the estimated correlation, that would require some time to adjust the script to extract them from the outputs. It shouldn't be too taxing, it would just take a bit more time.
Well, the clock is ticking, but we also want accuracy. If we could even just type a few in by hand so we can compare, that would be great.
We also need to be confident that the CI from the estimated correlation and the one based on the fisher transform are the same.
> a) check whether you see the same (to ensure I am not just losing it). (The CI in the grip-cog tables, however, often do not include 0 when the covariance information suggests that it should.)
Yes, I see it too. The confidence intervals for the correlations COMPUTED THROUGH THE FISHER TRANSFORM often do not include 0 when the covariance shows that the correlation is not significant (implying that the interval should include zero).
> b) include both SE and CI in the same tables (for P-P and for P-C) so we can confirm whether this is where the inconsistency is showing up.
I'm not sure which CI is meant here: the one estimated in Mplus, or the one computed through the Fisher transform. The new tables include both.
> We also need to be confident that the CI from the estimated correlation and the one based on the fisher transform are the same.
They are not the same, and in my understanding they should NOT be the same. The CI from the estimated correlation assumes that r is distributed normally (which it is not, since r is bounded by ±1). The Fisher transform adjusts for this and therefore produces different, asymmetric intervals.
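The distinction can be sketched numerically. This is a minimal illustration with made-up values for r and n, not numbers from the actual models:

```python
import math

def fisher_ci(r, n, z=1.96):
    """95% CI via the Fisher z-transform: symmetric in z-space,
    asymmetric (and bounded by +/-1) once back-transformed to r."""
    zr = math.atanh(r)
    se_z = 1.0 / math.sqrt(n - 3)
    return math.tanh(zr - z * se_z), math.tanh(zr + z * se_z)

def naive_ci(r, se_r, z=1.96):
    """Symmetric CI that assumes r itself is normally distributed."""
    return r - z * se_r, r + z * se_r

r, n = 0.45, 50                 # made-up correlation and sample size
lo, hi = fisher_ci(r, n)
print(round(lo, 3), round(hi, 3))          # asymmetric around r = 0.45
print(round(r - lo, 3), round(hi - r, 3))  # the lower arm is wider
```

Because the back-transformed interval is pulled toward zero on the wide side and compressed near ±1, the two kinds of CI can disagree about whether zero is excluded, which is consistent with the discrepancy seen in the grip-cog tables.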
@annierobi @ampiccinin
Please list the indices and parameters that you think should be extracted for more efficient and accurate model diagnostics. For each parameter, please provide an example from the existing output in this repository (you can link to a line in the output).