jgomezdans opened this issue 10 years ago
Recent changes (too lazy to look for them) have solved this problem. From 6-7Gb, we're down to 1.5Gb. The last hurdle was the calculation of the inverse of the Hessian, a matrix that is both big and not sparse. Here, I have solved for this matrix row by row, tested for small elements, and stored the result in a `lil_matrix`. However, as we then need to extract the diagonal, it all crawls to a halt at the extraction stage. Maybe converting `post_cov` to a `dia_matrix` improves on this, but I haven't tested it.
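Not the actual code from the repo, but a minimal sketch of the approach described above (row-by-row thresholding into a `lil_matrix`, then pulling the posterior standard deviations off the diagonal). The matrix sizes, the `1e-4` threshold, and the `post_cov`/`post_sd` names are illustrative assumptions; the point is that `.diagonal()` is cheap once you convert away from LIL (e.g. to CSR or DIA):

```python
import numpy as np
import scipy.sparse as sp

# Toy stand-in for the Hessian: diagonally dominant, so its inverse has
# many near-zero off-diagonal entries we can threshold away.
n = 200
rng = np.random.default_rng(0)
A = np.eye(n) + 1e-3 * rng.standard_normal((n, n))
A_inv = np.linalg.inv(A)

# Build the (approximately) sparse posterior covariance row by row,
# dropping small elements, and store it in a lil_matrix.
post_cov = sp.lil_matrix((n, n))
for i in range(n):
    row = A_inv[i, :]
    idx = np.flatnonzero(np.abs(row) > 1e-4)  # keep only "large" entries
    post_cov[i, idx] = row[idx]

# Extracting the diagonal straight from a lil_matrix is slow; converting
# to CSR (or DIA) first makes .diagonal() fast.
post_sd = np.sqrt(post_cov.tocsr().diagonal())
```

The conversion itself is a one-off O(nnz) cost, so it should pay for itself as soon as the diagonal extraction stops touching the LIL row lists.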
@NPounder can you try this version and see whether (i) it works for you (ii) you have any inspiration to speed up the calculation of the Inverse Hessian/posterior standard deviations?
An issue that @NPounder identified regarding memory use...
The prior inverse covariance is defined as a dense matrix, so it does use far too much memory. The situation gets worse in the Hessian calculations, where a lot of the prior and model Hessians start as dense matrices.
The solution to this problem (apart from getting more memory ;-D) is to use sparse matrices throughout. For example, in the prior constructor, the inverse covariance can readily be converted to a sparse matrix (if it isn't defined as one already). The `der_cost` calculations are then much simpler (they are trapped in the first `if` statement). This solves the Hessian problems as well.

For the other components, we should not initialise the Hessian as a full matrix, but rather as a sparse one. This saves allocating lots of memory for what is basically a matrix of 0s.
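A rough sketch of what this looks like with `scipy.sparse` (the variable names, the tridiagonal prior structure, and the extra identity "observation" term are all illustrative, not the package's actual API):

```python
import numpy as np
import scipy.sparse as sp

n = 1000

# Prior inverse covariance with banded structure: build it sparse up front
# instead of allocating a dense n x n array and converting later.
c_inv = sp.diags(
    [np.full(n, 2.0), np.full(n - 1, -1.0), np.full(n - 1, -1.0)],
    offsets=[0, 1, -1], format="csr")

# der_cost-style gradient of 0.5*(x - mu)^T C^-1 (x - mu):
# sparse @ dense vector gives a dense vector with no n x n temporaries.
x = np.ones(n)
mu = np.zeros(n)
grad = c_inv @ (x - mu)

# The prior's Hessian contribution is just C^-1; other components can be
# accumulated onto it while everything stays sparse.
hessian = c_inv + sp.eye(n, format="csr")  # hypothetical extra term
```

The memory win is in the last line: `hessian` holds ~3n nonzeros instead of an n x n block of mostly zeros, which is exactly the "matrix of 0s" allocation the issue describes.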