nilearn / nilearn

Machine learning for NeuroImaging in Python
http://nilearn.github.io

[discussion] LedoitWolf vs EmpiricalCovariance in practice #1539

Closed. salma1601 closed this issue 3 years ago.

salma1601 commented 6 years ago

I am trying to illustrate the benefit of LedoitWolf over EmpiricalCovariance in connectivity estimation. However, when simulating multivariate Gaussian signals with a given true covariance, I have the impression that LedoitWolf underestimates the salient patterns, even though its overall error is lower.

Example here with a true covariance taken from fMRI data: the left-right connections lie on the sub-diagonals and deviate from the true values more with LedoitWolf.
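
For reference, a minimal sketch of this kind of comparison, assuming a placeholder random SPD matrix in place of the fMRI-derived true covariance used above (the sub-diagonal index is only illustrative):

```python
# Sketch: simulate Gaussian signals from a known covariance and compare
# EmpiricalCovariance and LedoitWolf fits (scikit-learn estimators).
import numpy as np
from sklearn.covariance import EmpiricalCovariance, LedoitWolf

rng = np.random.RandomState(0)

# Placeholder "true" covariance: a random SPD matrix standing in for the
# fMRI-derived covariance from the original example.
n_features, n_samples = 40, 100
A = rng.randn(n_features, n_features)
true_cov = A.dot(A.T) / n_features + 0.1 * np.eye(n_features)
print("condition number:", np.linalg.cond(true_cov))

# Simulate multivariate Gaussian signals with that covariance.
X = rng.multivariate_normal(np.zeros(n_features), true_cov, size=n_samples)

for estimator in (EmpiricalCovariance(), LedoitWolf()):
    cov = estimator.fit(X).covariance_
    sse = np.sum((cov - true_cov) ** 2)
    # Error restricted to the first sub-diagonal, where the left-right
    # connections sit in the original figure (index chosen for illustration).
    sub_diag_err = np.sum((np.diag(cov, k=1) - np.diag(true_cov, k=1)) ** 2)
    print(type(estimator).__name__,
          "total SSE:", sse,
          "sub-diagonal SSE:", sub_diag_err)
```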

bthirion commented 6 years ago

Is the true matrix poorly conditioned? If so, you can expect the LedoitWolf estimate to be quite poor. In terms of SSE, is LedoitWolf worse than the empirical covariance? That would be surprising.

salma1601 commented 6 years ago

The true matrix has condition number 134. In terms of SSE, LedoitWolf is better; I just don't like the fact that the left-right connections are better estimated with the empirical covariance.

bthirion commented 6 years ago

Sure, but the l2 penalty is merciless toward meaningful details. What happens with a less catastrophic conditioning of the true matrix (e.g. a condition number < 10)?
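
One illustrative way (not prescribed in the thread) to obtain a better-conditioned true matrix is to shrink the original covariance toward a scaled identity; the helper name below is hypothetical:

```python
import numpy as np

def shrink_toward_identity(cov, shrinkage=0.5):
    """Convex combination of cov and a scaled identity; larger shrinkage
    yields a smaller condition number."""
    p = cov.shape[0]
    mu = np.trace(cov) / p
    return (1 - shrinkage) * cov + shrinkage * mu * np.eye(p)

# e.g., with the true_cov from the previous sketch:
# well_conditioned = shrink_toward_identity(true_cov, shrinkage=0.8)
# print(np.linalg.cond(well_conditioned))
```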

salma1601 commented 6 years ago

SSE is much better for LedoitWolf, but the pattern estimation is even worse! Now the small dark entries are also better estimated with the empirical covariance, and the left-right connections are worse.

bthirion commented 3 years ago

No obvious solution to this issue.