sparsemat / sprs

sparse linear algebra library for rust
Apache License 2.0

no lower triangular matrix factorization #333

Open Lishen1 opened 1 year ago

Lishen1 commented 1 year ago

As far as I can see, sprs-ldl doesn't support sparse matrices that store only the lower triangular part. This is useful in practice: Cholesky decomposition is usually used to solve A'Ax = A'b, so A'A is already symmetric and there is no reason to compute the upper triangular part.
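For concreteness, a minimal sketch of that workflow, assuming the sprs sparse-sparse product via the `Mul` trait and the `Ldl` builder from sprs-ldl; the Jacobian data here is made up for illustration:

    use ndarray::arr1;
    use sprs::CsMat;
    use sprs_ldl::Ldl;

    fn normal_equations_example() {
        // Example Jacobian J (3x2) in CSR form: J = [[1, 2], [0, 3], [4, 0]].
        let j = CsMat::new(
            (3, 2),
            vec![0, 2, 3, 4],
            vec![0, 1, 1, 0],
            vec![1., 2., 3., 4.],
        );

        // H = J'J is symmetric positive definite, yet today the full matrix
        // (both triangles) has to be assembled before handing it to sprs-ldl.
        let jt = j.transpose_view().to_csr();
        let h = (&jt * &j).to_csc(); // H = [[17, 2], [2, 13]]

        // Right-hand side chosen so that the exact solution is x = [1, 1].
        let rhs = arr1(&[19., 15.]);

        let ldlt = Ldl::new().numeric(h.view()).unwrap();
        let x = ldlt.solve(rhs.view());
        println!("{:?}", x); // approximately [1.0, 1.0]
    }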

mulimoen commented 1 year ago

I don't think I get the question here. Are you asking to create the L, or to solve a matrix problem where you have a precomputed L?

Lishen1 commented 1 year ago

The original test:

    #[test]
    fn cuthill_ldl_solve() {
        let mat = CsMat::new_csc(
            (4, 4),
            vec![0, 2, 4, 6, 8],
            vec![0, 3, 1, 2, 1, 2, 0, 3],
            vec![1., 2., 21., 6., 6., 2., 2., 8.],
        );

        let b = ndarray::arr1(&[9., 60., 18., 34.]);
        let x0 = ndarray::arr1(&[1., 2., 3., 4.]);

        let ldlt = super::Ldl::new()
            .check_symmetry(super::SymmetryCheck::DontCheckSymmetry)
            .fill_in_reduction(super::FillInReduction::ReverseCuthillMcKee)
            .numeric(mat.view())
            .unwrap();
        let x = ldlt.solve(b.view());
        assert_eq!(x, x0);
    }

What I want:

    #[test]
    fn cuthill_ldl_solve() {
        let mat = CsMat::new_csc(
            (4, 4),
            vec![0, 1, 2, 4, 6],
            vec![0, 1, 1, 2, 0, 3],
            vec![1., 21., 6., 2., 2., 8.],
        );

        let b = ndarray::arr1(&[9., 60., 18., 34.]);
        let x0 = ndarray::arr1(&[1., 2., 3., 4.]);

        let ldlt = super::Ldl::new()
            .check_symmetry(super::SymmetryCheck::DontCheckSymmetry)
            .fill_in_reduction(super::FillInReduction::ReverseCuthillMcKee)
            .numeric(mat.view())
            .unwrap();
        let x = ldlt.solve(b.view());
        assert_eq!(x, x0);
    }

The difference is: originally we have the full sparse symmetric matrix

            (4, 4),
            vec![0, 2, 4, 6, 8],
            vec![0, 3, 1, 2, 1, 2, 0, 3],
            vec![1., 2., 21., 6., 6., 2., 2., 8.],
);

but it would be useful to set only the lower triangular part, since the upper triangular part just duplicates the lower one:

            (4, 4),
            vec![0, 1, 2, 4, 6],
            vec![0, 1, 1, 2, 0, 3],
            vec![1., 21., 6., 2., 2., 8.],
);

For example, this is implemented in https://eigen.tuxfamily.org/dox/classEigen_1_1SimplicialLLT.html:

UpLo_ the triangular part that will be used for the computations. It can be Lower or Upper. Default is Lower.

mulimoen commented 1 year ago

If you already have L then I'm still confused about how you expect an LDL factorisation to work. You should be able to use a combination of `diag_solve` and `ldl_solve` to solve the problem. We don't have any functions to do this directly, PRs are welcome!
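For reference, that combination amounts to the three standard LDL' solve steps: a forward substitution with L, a division by the diagonal D, and a backward substitution with L'. The dense hand-rolled sketch below only illustrates those steps; it is not the sprs-ldl API:

    // Hand-rolled dense LDL' solve, purely illustrative (not the sprs-ldl API).
    // `l` is unit lower triangular (row-major), `d` is the diagonal of D.
    fn ldlt_solve(l: &[Vec<f64>], d: &[f64], b: &[f64]) -> Vec<f64> {
        let n = d.len();
        let mut x = b.to_vec();
        // Forward substitution: solve L y = b (L has an implicit unit diagonal).
        for i in 0..n {
            for j in 0..i {
                x[i] = x[i] - l[i][j] * x[j];
            }
        }
        // Diagonal solve: divide by D.
        for i in 0..n {
            x[i] /= d[i];
        }
        // Backward substitution: solve L' x = z.
        for i in (0..n).rev() {
            for j in (i + 1)..n {
                x[i] = x[i] - l[j][i] * x[j];
            }
        }
        x
    }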

Lishen1 commented 1 year ago

No, I don't have L (in the sense of a Cholesky LLT). I just have some matrix H that is SPD. H is computed as J'J (' is transpose). Since I know the result of J'J is symmetric, I don't compute all elements of H, only its lower part, and I expect that the Cholesky routine doesn't need to access the upper part of H, since Cholesky only works on SPD matrices. I don't really understand why the Cholesky factorisation routine needs to check the symmetry of H instead of accessing only the lower part and keeping in mind that the upper one is symmetric.

mulimoen commented 1 year ago

Ok, I understand a bit more what you mean now. I haven't checked this for correctness, but you could "cheat" by using a different `FillInReduction`.

    #[test]
    fn cuthill_ldl_solve_lower() {
        let mat = CsMat::new_csc(
            (4, 4),
            vec![0, 1, 2, 4, 6],
            vec![0, 1, 1, 2, 0, 3],
            vec![1., 21., 6., 2., 2., 8.],
        );
        let mat = mat.transpose_view();

        let b = ndarray::arr1(&[9., 60., 18., 34.]);
        let x0 = ndarray::arr1(&[1., 2., 3., 4.]);

        let ldlt = super::Ldl::new()
            .check_symmetry(super::SymmetryCheck::DontCheckSymmetry)
            .fill_in_reduction(super::FillInReduction::CAMDSuiteSparse)
            // or use
            // .fill_in_reduction(super::FillInReduction::NoReduction)
            .numeric(mat.view())
            .unwrap();
        let x = ldlt.solve(b.view());
        assert_eq!(x, x0);
    }

This is not officially supported, and I am not familiar enough with the code to tell whether it makes sense in general or just happens to work for this choice of matrix.
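If that cheat turns out not to hold for other matrices, a fallback that stays within the supported API is to mirror the stored triangle into a full symmetric matrix before factorising, e.g. via the triplet format. A sketch, assuming `sprs::TriMat` and the `Ldl` builder, using the 4x4 example from above:

    use sprs::{CsMat, TriMat};
    use sprs_ldl::Ldl;

    fn mirror_lower_and_solve() {
        // Lower-triangle entries of the 4x4 example matrix as (row, col, value).
        let lower = [
            (0, 0, 1.),
            (1, 1, 21.),
            (2, 1, 6.),
            (2, 2, 2.),
            (3, 0, 2.),
            (3, 3, 8.),
        ];

        // Mirror the strictly-lower entries to rebuild the full symmetric matrix.
        let mut tri = TriMat::new((4, 4));
        for &(i, j, v) in &lower {
            tri.add_triplet(i, j, v);
            if i != j {
                tri.add_triplet(j, i, v);
            }
        }
        let mat: CsMat<f64> = tri.to_csc();

        let b = ndarray::arr1(&[9., 60., 18., 34.]);
        let ldlt = Ldl::new().numeric(mat.view()).unwrap();
        let x = ldlt.solve(b.view());
        println!("{:?}", x); // approximately [1., 2., 3., 4.]
    }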

Lishen1 commented 1 year ago

Thanks, I'll check it on my matrix. Probably the AMD ordering allows it (Eigen's implementation uses AMD ordering too).

Lishen1 commented 1 year ago

Checked with `fill_in_reduction(super::FillInReduction::NoReduction)` - it didn't work; the solution is incorrect.