convexfi / spectralGraphTopology

Structured Graph Learning via Laplacian Spectral Constraints (NeurIPS 2019)
https://CRAN.R-project.org/package=spectralGraphTopology
GNU General Public License v3.0

graphical lasso example #2

Closed · mirca closed this 5 years ago

mirca commented 6 years ago

@dppalomar I think I'm missing something about the glasso package. The estimated covariance matrix is similar to the sample covariance matrix, yet their generalized inverses are quite different. See the following example:

library(spectralGraphTopology)
library(glasso)

set.seed(123)
w <- c(1, 2, 3)
Lw <- L(w)
Y <- MASS::mvrnorm(100, rep(0, nrow(Lw)), MASS::ginv(Lw))

# True Laplacian Matrix
Lw
#      [,1] [,2] [,3]
# [1,]    3   -1   -2
# [2,]   -1    4   -3
# [3,]   -2   -3    5

# Sample Covariance Matrix
cov(Y)
#             [,1]        [,2]        [,3]
# [1,]  0.12937250 -0.07160465 -0.05776785
# [2,] -0.07160465  0.10031397 -0.02870933
# [3,] -0.05776785 -0.02870933  0.08647718

# Naive Estimator
MASS::ginv(cov(Y))
#          [,1]      [,2]      [,3]
# [1,]  3.456331 -1.434415 -2.021916
# [2,] -1.434415  4.690137 -3.255723
# [3,] -2.021916 -3.255723  5.277639

# glasso
gl <- glasso(cov(Y), rho = 1e-2)
gl

# $w (Graphical Lasso Covariance Matrix)
#             [,1]        [,2]        [,3]
# [1,]  0.13937250 -0.06160457 -0.04776738
# [2,] -0.06160457  0.11031397 -0.01870933
# [3,] -0.04776738 -0.01870933  0.09647718
#
# $wi (Graphical Lasso Inverse Covariance Matrix)
#           [,1]      [,2]      [,3]
# [1,] 14.567376  9.676671  9.089026
# [2,]  9.676661 15.801224  7.855262
# [3,]  9.089156  7.855371 16.388597
dppalomar commented 6 years ago

@mirca glasso is only for nonsingular covariance matrices. In our case the covariance matrix is singular, so the estimated inverse becomes large trying to accommodate that zero eigenvalue (so that when you take the inverse of the estimate you get back a very small eigenvalue). With real data the sample covariance matrix will not be singular, so perhaps the synthetic data we generate should not be singular either. We need to think about the data-generation part a bit more...
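A quick numerical check of this (a sketch; it assumes the Y from the example above is still in scope):

# Each sample drawn from N(0, ginv(Lw)) lies in the range of ginv(Lw),
# i.e., it is orthogonal to the all-ones vector, so cov(Y) inherits the
# Laplacian's null space.
ev <- eigen(cov(Y), symmetric = TRUE)
ev$values        # the smallest eigenvalue is numerically zero
ev$vectors[, 3]  # its eigenvector is proportional to the all-ones vector

glasso then has to put a huge eigenvalue of $wi along that same direction, which is exactly the blow-up visible in the output above.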

sandeep0kr commented 6 years ago

The following paper gives an interesting explanation of this:

"Estimation of positive definite M-matrices and structure learning for attractive Gaussian Markov random fields"

https://arxiv.org/abs/1404.6640

Pages 7-10 give the relevant discussion. At a high level, the consequences are the following. Gaussian graphical modelling (GGM) originally assumes the precision matrix to be positive definite: just positive definiteness, with no constraints on the signs of its elements. In order to specialize the formulation for Laplacian learning, positive definiteness is relaxed to positive semidefiniteness, and the off-diagonal elements are additionally constrained to be nonpositive. This modification introduces bias and is the model misspecification previously mentioned by @dppalomar. The sign-constrained, positive semidefinite GGM is no longer maximum likelihood estimation; it is instead known as sign-constrained log-determinant divergence minimization, and the precision matrix estimated under this model may be subject to substantial bias, as observed in our case. I need to understand it further in detail, but for the time being it is clear that the issue is inevitable.
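For concreteness, the sign-constrained problem has, as I understand it, roughly the following form (my notation, so treat this as a sketch rather than the paper's exact statement):

$$
\underset{\Theta \succ 0,\ \Theta_{ij} \leq 0\ (i \neq j)}{\text{minimize}} \quad \operatorname{tr}(S\Theta) - \log\det\Theta ,
$$

where $S$ is the sample covariance matrix. Restricting $\Theta$ to M-matrices (and, for Laplacian learning, further relaxing positive definiteness to positive semidefiniteness with a generalized determinant) is what separates this estimator from the unconstrained GGM MLE.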

"Graph Learning From Data Under Laplacian and Structural Constraints".

The above paper conisedrs the Laplacian structured constraints and can serve as an initial reference for K=1( one component) graph.
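If I remember correctly, a standard way in that line of work to make the model proper is to take the precision matrix to be the Laplacian plus a rank-one term on its null space. A minimal sketch under that assumption (variable names mine):

library(spectralGraphTopology)

set.seed(123)
w <- c(1, 2, 3)
Lw <- L(w)
p <- nrow(Lw)

# Theta = Lw + (1/p) * 1 1^T adds mass exactly on the Laplacian's null
# space, so Theta is positive definite whenever the graph is connected
# and its inverse is a valid (nonsingular) covariance matrix.
Theta <- Lw + matrix(1 / p, p, p)
Sigma <- solve(Theta)

Y <- MASS::mvrnorm(100, rep(0, p), Sigma)
eigen(cov(Y), symmetric = TRUE)$values    # now all strictly positive
gl <- glasso::glasso(cov(Y), rho = 1e-2)  # glasso's assumptions hold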