Closed gph82 closed 9 years ago
I wouldn't expect that the estimation returns a matrix filled with zeros. We have to investigate this.
probably it gets preallocated and never filled?
Which would mean that during the iterations (of the estimator) the matrix is never changed? That sounds unlikely.
Ah, that was actually my mistake. If the sparse estimator fails with a warning, it always returns a zero matrix. I will fix this.
Thanks! I think we also need a fallback-to-dense behaviour if the sparse MLE does not converge.
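The fallback suggested above could look roughly like this. This is only a sketch: `estimate_sparse` and `estimate_dense` are hypothetical stand-ins for the actual PyEMMA estimator routines, not their real names or signatures.

```python
import numpy as np
import scipy.sparse


def estimate_with_fallback(count_matrix, estimate_sparse, estimate_dense,
                           maxiter=1000):
    """Try the sparse MLE first; fall back to the dense MLE on failure.

    `estimate_sparse` / `estimate_dense` are hypothetical callables that
    stand in for the real estimator routines.
    """
    try:
        T = estimate_sparse(count_matrix, maxiter=maxiter)
    except RuntimeError:
        T = None
    # A non-converged sparse run currently returns an all-zero matrix,
    # so treat that outcome as a failure as well.
    if T is None or not np.any(T.toarray() if scipy.sparse.issparse(T) else T):
        C = (count_matrix.toarray()
             if scipy.sparse.issparse(count_matrix) else count_matrix)
        T = estimate_dense(C, maxiter=maxiter)
    return T
```

The key design point is that an all-zero result is treated the same as a raised exception, so callers never see the bogus zero matrix.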
The reasons for not converging in sparse mode are a separate issue.
I think the criteria for convergence in sparse and dense mode are still different. I have to adapt the sparse implementation. I don't think that a fall-back will be necessary then.
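For illustration, a unified convergence test shared by both code paths might look like the sketch below. The criterion (maximum elementwise change between successive iterates) and the tolerance are assumptions, not the actual PyEMMA implementation.

```python
import numpy as np


def converged(T_new, T_old, tol=1e-8):
    """Hypothetical unified convergence test: declare convergence when the
    largest elementwise change between successive transition-matrix
    iterates drops below `tol`.  Using one criterion in both the sparse
    and dense code paths would remove the discrepancy discussed above."""
    return bool(np.max(np.abs(np.asarray(T_new) - np.asarray(T_old))) < tol)
```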
ok, thanks!
This didn't get closed automatically.
I have a count matrix for which sparse MLE estimation fails but dense MLE estimation succeeds:
Apart from looking into why one converges and the other does not, I am interested in the behavior of `msm.its`. If I pass the dtrajs and lagtime behind said count matrix to `msm/its`, it fails (see error below). Why? Because `analysis/dense/decomposition` tries to operate on a matrix full of zeros (which is what the sparse estimator returns after a non-converged run).
How can one best handle these (and similar) exceptions within `msm/its`?
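One option would be to validate the estimated matrix before handing it to the decomposition code, so the failure surfaces with a clear message instead of a cryptic error deep in the eigenvalue routines. A minimal sketch (the function name and placement are assumptions, not part of PyEMMA's API):

```python
import numpy as np


def check_transition_matrix(T, tol=1e-10):
    """Hypothetical guard for msm/its: reject matrices that are not
    row-stochastic (e.g. the all-zero matrix a failed sparse MLE
    returns) before passing them to the dense decomposition routines."""
    T = np.asarray(T, dtype=float)
    row_sums = T.sum(axis=1)
    if not np.allclose(row_sums, 1.0, atol=tol):
        raise ValueError(
            "estimated transition matrix is not row-stochastic "
            "(row sums: %s); the MLE probably did not converge" % row_sums)
    return T
```

With such a check, `msm.its` could catch the `ValueError` per lagtime and skip or flag the failing estimate rather than crashing.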