bjodah / pycvodes

Python wrapper around cvodes (from the sundials library)
BSD 2-Clause "Simplified" License

`jac` signature for sparse Jacobians #143

Closed kmaitreys closed 5 months ago

kmaitreys commented 5 months ago

In the documentation of integrate_adaptive, the signature of the Jacobian function is given as:

jac : callable
        Function with signature either jac(t, y, jmat_out, dfdx_out) for
        dense/banded jacobians, or jac(t, y, data, colptrs, rowvals) for
        sparse (CSC) jacobians. ``jac`` should modify ``jmat_out``, ``dfdx_out``
        (dense, banded) or (``data``, ``colptrs``, ``rowvals``) *inplace*.
        (see also ``lband``, ``uband``, ``nnz``)

Can you describe how to write the Jacobian function for a sparse Jacobian matrix? For the dense case the examples helped, but for the sparse case I'm having difficulty understanding how to implement it.

bjodah commented 5 months ago

If you look here: https://github.com/bjodah/pycvodes/blob/e93a2f19b99e6106d5ea6d3f644b3bb6be71e125/pycvodes/tests/test_cvodes_numpy.py#L775

you see a call to j_sparse. If you scroll up you can see how that function is implemented. I think it is compatible with csc_matrix in scipy, so that documentation should probably also help: https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csc_matrix.html
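
For concreteness, here is a minimal sketch of what such a callback could look like (the two-component decay chain and its rate constants are made up for illustration; its Jacobian [[-k, 0], [k, -k2]] is stored column by column in CSC form, and the integrate_adaptive call simply mirrors the usage in the tests):

from pycvodes import integrate_adaptive

k, k2 = 0.7, 0.3  # made-up rate constants

def f(t, y, dydt):
    dydt[0] = -k*y[0]
    dydt[1] = k*y[0] - k2*y[1]

def j_sparse(t, y, data, colptrs, rowvals):
    data[:] = [-k, k, -k2]   # nonzero values, column by column
    rowvals[:] = [0, 1, 1]   # row index of each stored value
    colptrs[:] = [0, 2, 3]   # offset into data/rowvals where each column starts

tout, yout, info = integrate_adaptive(f, j_sparse, [1.0, 0.0], 0.0, 10.0,
                                      atol=1e-9, rtol=1e-9, method='bdf',
                                      nnz=3, linear_solver='klu')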

Let me know if this helps, and if you can think of any improvements to the current docstring.

kmaitreys commented 5 months ago

Thank you @bjodah for the reply. I tried implementing it the way it is done in the tests, but I could not get it working. This was my implementation; please tell me what I might be doing wrong:

import numpy as np
from scipy.sparse import csc_matrix


def _get_f_j(n, dndt_cvode, jacobian_cvode, extra_params):
    # Evaluate the Jacobian once (at the initial n) to get its CSC values and structure
    csc_jac = csc_matrix(jacobian_cvode(n, extra_params))
    _data = csc_jac.data
    _colptrs = csc_jac.indptr
    _rowvals = csc_jac.indices

    def f(t, n, dn):
        n = np.array(n)
        dn[:] = dndt_cvode(n, extra_params)

    def j(t, n, data, colptrs, rowvals):
        data[:] = _data
        colptrs[:] = _colptrs
        rowvals[:] = _rowvals

    return f, j, len(_rowvals)

dndt_cvode and jacobian_cvode are my expensive rhs and jac functions, which have been JIT-compiled using Numba.

The integration logic is written like this:

while s < len(times) - 1:
    # old_y = sol.y
    _f, _j, _nnz = _get_f_j(
        n,
        dndt_cvode,
        jacobian_cvode,
        extra_params
    )

    tout, yout, nfo = integrate_adaptive(
        _f,
        _j,
        n,
        times[s],
        times[s + 1],
        atol=atol,
        rtol=rtol,
        method='bdf',
        nsteps=2000,
        nnz=_nnz,
        linear_solver='klu',
    )

    print(s, f': Time - {times[s]/YEAR:13.2e} years')
    n = yout[-1]

    atol = np.maximum(1e-25, 1e-5*n)  # modifying atol for speed
    extra_params = modify_extra(extra_params)

    s += 1

After a few warnings like:

[CVODES WARNING]  CVode
  Internal t = 3.15576e+07 and h = 1.02104e-09 are such that t + h = t on the next step. The solver will continue anyway.

[CVODES WARNING]  CVode
  Internal t = 3.15576e+07 and h = 1.02104e-09 are such that t + h = t on the next step. The solver will continue anyway.

[CVODES WARNING]  CVode
  Internal t = 3.15576e+07 and h = 1.60511e-09 are such that t + h = t on the next step. The solver will continue anyway.

it failed with the following error:

RuntimeError: Maximum number of steps reached (at t=3.156903e+07): 2000

bjodah commented 5 months ago

At first glance I think it looks about right. Are you sure it is actually the Jacobian that is incorrect here? (There may be plenty of other reasons for the solver to start taking small time steps.)

kmaitreys commented 5 months ago

And for the dense Jacobian, I have been running into a reproducibility issue. I install my program using pip in my conda environment. Now when I import it and run it, it produces the correct results on the first run, but subsequent runs fail with CV_CONV_FAILURE. This only goes away when I re-install my library. Do you have any idea what might be causing this?

kmaitreys commented 5 months ago

At first glance I think it looks about right. Are you sure it is actually the Jacobian that is incorrect here? (There may be plenty of other reasons for the solver to start taking small time steps.)

I don't think the Jacobian is incorrect; in fact, scipy.integrate.ode's lsoda produces the correct answer without any problem. However, lsoda does not give as much freedom as lsodes (for which no Python wrappers exist) or CVODE, which is why I am trying CVODES. And that is how I found this excellent library.

kmaitreys commented 5 months ago

And for the dense Jacobian, I have been running into a reproducibility issue. I install my program using pip in my conda environment. Now when I import it and run it, it produces the correct results on the first run, but subsequent runs fail with CV_CONV_FAILURE. This only goes away when I re-install my library. Do you have any idea what might be causing this?

Further, if I set autorestart = 4 then I do not get an error like this:

RuntimeError: Unsuccessful step (t=6.994040e+06, h=6.587900e-02): CV_CONV_FAILURE

rather, I get totally wrong results. So the first run produces the expected result; subsequent runs are either unsuccessful or produce wrong results.

bjodah commented 5 months ago

The reproducibility issue sounds worrisome. Are you using pycvodes-0.14.5?

Does the problem only occur when using numba, or also without using numba?

bjodah commented 5 months ago

I think the dense jacobian might need to have its zero entries explicitly set. That is, you need to write 0 to the empty parts of the matrix.
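
Something along these lines, i.e. clear the whole array before writing the nonzeros (a minimal sketch with made-up numbers for a 2x2 system, using the dense jac(t, y, jmat_out, dfdx_out) signature from the docstring):

def j_dense(t, y, jmat_out, dfdx_out):
    jmat_out[:, :] = 0.0   # explicitly zero the structurally empty entries
    jmat_out[0, 0] = -0.7  # then fill in only the nonzeros
    jmat_out[1, 0] = 0.7
    jmat_out[1, 1] = -0.3
    dfdx_out[:] = 0.0      # no explicit time dependence in this toy example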

kmaitreys commented 5 months ago

The reproducibility issue sounds worrisome. Are you using pycvodes-0.14.5?

Yes.

Does the problem only occur when using numba, or also without using numba?

This I will have to try. It is very possible that it is because of Numba. I will get back to you.

kmaitreys commented 5 months ago

I think the dense jacobian might need to have its zero entries explicitly set. That is, you need to write 0 to the empty parts of the matrix.

So the data[:] = _data thing will not work?

kmaitreys commented 5 months ago

Does the problem only occur when using numba, or also without using numba?

I could not try without Numba because that takes forever. However, it does not happen all the time. Currently it happens for a certain input value to the solver (part of extra_params), but not if I change the value to something else.

EDIT: Managed to solve it. Turns out I cannot use cache = True with njit for my expensive rhs and jacobian functions.

bjodah commented 5 months ago

I see. Do you think the problem with irreproducibility remains if you modify your problem so that it is smaller? (so that it becomes possible to test without numba). If the problem is in pycvodes, I would greatly benefit from a test case that is as small as possible, while still exhibiting the non-deterministic behavior.

bjodah commented 5 months ago

Interesting. So that points in the direction of numba as the source of error. Perhaps they would be interested in getting a report about that?

kmaitreys commented 5 months ago

Interesting. So that points in the direction of numba as the source of error. Perhaps they would be interested in getting a report about that?

I think that might be intended behaviour: treating some variables as compile-time constants. I could be wrong though. Numba's caching just avoids JIT-compiling the functions every time the script is run, reusing the precompiled version instead.
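
If that is indeed what happens, a toy illustration of that kind of pitfall could look like this (purely illustrative and unrelated to the actual rhs/jac here; K is just a stand-in for any module-level constant baked into the compiled function):

import numpy as np
from numba import njit

K = 0.5  # module-level value captured by the jitted function

@njit(cache=True)
def rhs_toy(y):
    # Numba treats the global K as a compile-time constant; with cache=True the
    # compiled result (with that frozen value) may be reused on later runs even
    # if K has since been changed.
    return -K * y

print(rhs_toy(np.ones(3)))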

kmaitreys commented 5 months ago

I see. Do you think the problem with irreproducibility remains if you modify your problem so that it is smaller? (so that it becomes possible to test without numba). If the problem is in pycvodes, I would greatly benefit from a test case that is as small as possible, while still exhibiting the non-deterministic behavior.

Yes, so I have tested and made sure: the problem was solely due to Numba's caching; pycvodes is fine. Coming back to the sparse Jacobian though, I did not quite get what you meant regarding it?

bjodah commented 5 months ago

How the sparse Jacobian should be built up? I haven't used this functionality much myself, actually; that's why I linked to the tests. I did review the code, though, way back. I remember that I found the Wikipedia article informative:

https://en.wikipedia.org/wiki/Sparse_matrix#Compressed_sparse_column_(CSC_or_CCS)
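
To make the CSC layout concrete, here is a small illustration (nothing pycvodes-specific, just scipy, with a made-up 3x3 matrix):

import numpy as np
from scipy.sparse import csc_matrix

A = np.array([[1., 0., 4.],
              [0., 3., 5.],
              [2., 0., 6.]])
As = csc_matrix(A)
print(As.data)     # [1. 2. 3. 4. 5. 6.]  -> "data": nonzeros, column by column
print(As.indices)  # [0 2 1 0 1 2]        -> "rowvals": row index of each nonzero
print(As.indptr)   # [0 2 3 6]            -> "colptrs": start of each column in data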

You might also want to take a look into KLU's documentation: https://github.com/DrTimothyAldenDavis/SuiteSparse/tree/dev/KLU/Doc

More specifically, we use KLU via sundials: https://sundials.readthedocs.io/en/latest/sunlinsol/SUNLinSol_links.html#sunlinsol-klu-description

which we wrap with "anyode": https://github.com/bjodah/anyode/blob/ac350a937b0fe74a4da4f9b6f983c982b1cce0d2/include/anyode/anyode_numpy.hpp#L277

My suggestion is to try to manually rederive the Jacobian from the test case by looking only at its dense representation, and then see if you agree with its sparse representation. If you are convinced that we are doing something wrong in that test case, you should be able to apply a fix to it and observe better solver statistics (fewer rhs evaluations, fewer jac evaluations, etc.).

kmaitreys commented 5 months ago

Thanks a lot @bjodah for all these references. I'll go through them and some of them should definitely help!

kmaitreys commented 5 months ago

Hi @bjodah, I tried my best but in the end I could not get what I wanted from pycvodes. Thanks a lot for pointing me in the right direction though. Cheers!

bjodah commented 5 months ago

I understand. Good luck!