modelon-community / Assimulo

Assimulo is a simulation package for solving ordinary differential equations.
https://jmodelica.org/assimulo/index.html
GNU Lesser General Public License v3.0

Implementation of the `CVode` solver while updating `atol` #77

Closed kmaitreys closed 2 months ago

kmaitreys commented 7 months ago

I am having difficulty understanding how to implement the CVode solver in the way I want:

What do I want?

I have a number of time points for which I want to simulate (basically ncp_list of the simulate() method). Between these time points, I want to modify the state of the solver in some way. Those ways are as follows:

  1. Modify the atol for each equation
  2. Modify a variable which is an argument to the rhs_function, so that when simulate() calls it next time, its updated value is used.

Why do I want this?

I am trying to solve a very stiff system (chemical kinetics) with CVode in hopes of greater robustness and speed. I have two implementations available as a benchmark. One is an old Fortran one which uses DLSODES, and one I have written myself which uses scipy.integrate.ode with the LSODA method. Note that LSODA is different from DLSODES in that the former automatically switches between a stiff and a non-stiff solver, while the latter always uses a stiff solver (BDF, I think). I have heard that CVode is arguably better than DLSODES, so I wanted to try it.

What have I done so far?

I wrote the following code (only the relevant parts):

...
exp_mod = Explicit_Problem(dndt_wrapper, n0,)
exp_mod.jac = jacobian_wrapper
exp_mod.jac_nnz = dum_jac.size # `dum_jac` comes from a dummy call of `jacobian`, which `jacobian_wrapper` wraps
exp_mod.maxncf = 100
exp_sim = CVode(exp_mod)

exp_sim.iter = 'Newton'
exp_sim.discr = 'BDF'
exp_sim.atol = atol
exp_sim.rtol = rtol
exp_sim.linear_solver = 'sparse'
exp_sim.usesens = False

for i, time_step in enumerate(times):
    t, y = exp_sim.simulate(time_step)
    n = y[-1]
    n[n < min_val] = min_val
    exp_sim.atol = np.maximum(1e-25, 1e-5*n) # Modifying atol
    exp_sim.initialize_options()
    argument_to_rhs = some_func(argument_to_rhs) # Here modify a variable which is an argument to the RHS function

Here are the RHS and Jacobian functions (wrappers):

def dndt_wrapper(t, n, argument_to_rhs):
    dn = dndt(argument_to_rhs)
    return dn

def jacobian_wrapper(t, n, argument_to_rhs):
    J = jacobian(argument_to_rhs)
    J = csc_matrix(J)
    return J
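Note that CVode invokes the right-hand side as f(t, y), so a third positional argument like the one above will not be filled in by the solver. One common workaround (a sketch with a hypothetical parameter and rhs, not the actual code above) is to close over a mutable container, so that updates made between simulate() calls are visible inside the wrapper:

```python
import numpy as np

state = {"argument_to_rhs": 1.0}  # hypothetical extra parameter

def dndt_wrapper(t, n):
    # CVode calls this as f(t, y); the wrapper reads the *current*
    # parameter value from the enclosing mutable dict on every call.
    return -state["argument_to_rhs"] * n

print(dndt_wrapper(0.0, np.array([1.0])))  # -> [-1.]
state["argument_to_rhs"] = 2.0             # update between simulate() calls
print(dndt_wrapper(0.0, np.array([1.0])))  # -> [-2.]
```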

This works only for the first time step (which is a very small value), and then it fails with the following error:

assimulo.solvers.sundials.CVodeError: 'Convergence test failures occurred too many times during one internal time step or minimum step size was reached. At time 6656115.837052.'

One more thing to note: I am supplying the total size of my 2-D Jacobian matrix as nnz, which is obviously not the number of non-zero values. When I instead did the following:

exp_mod.jac_nnz = np.count_nonzero(dum_jac)

I got the following error:

assimulo.exception.AssimuloException: The Jacobian has more entries than supplied to the problem class via 'jac_nnz'

An equivalent working scipy implementation

sol = ode(dndt_wrapper, jacobian_wrapper)
sol.set_integrator('lsoda', rtol = rtol, atol = atol, nsteps = 2000)
sol.set_initial_value(n0, 0)
s = 0
while s < len(times):
    sol.integrate(times[s])
    n = sol.y
    n[n < min_val] = min_val
    atol = np.maximum(1e-25, 1e-5*n)
    sol._integrator.call_args[1] = atol
    argument_to_rhs = some_func(argument_to_rhs) # Here modify a variable which is an argument to the RHS function
    s += 1
PeterMeisrimelModelon commented 7 months ago

Hi kmaitreys,

I do not fully understand the interaction with argument_to_rhs based on the code snippets you provided. However, I would advise creating your own implementation of an Explicit_Problem class and using class attributes for additional parameters in your right-hand side or Jacobian.

Here an example:

from assimulo.problem import Explicit_Problem
from assimulo.solvers import CVode
import numpy as np
import matplotlib.pyplot as pl
pl.close("all")

class MyProb(Explicit_Problem):
    param = 0
    y0 = np.array([1.])

    def set_my_param(self, param):
        self.param = param

    def rhs(self, t, y):
        return self.param * y

prob = MyProb()
sim = CVode(prob)

pl.figure()
for i, t_end in enumerate([1, 2, 3]):
    prob.set_my_param(i) # update problem/rhs
    sim.atol = np.array([10**-(4+i)]) # update CVode options

    res = sim.simulate(t_end)
    pl.plot(*res, label = str(i))

pl.legend()
pl.show()

Resulting plot: (image attachment)

From the output you should also be able to see that the absolute tolerance was indeed changed for each (sub)simulation.

As for the sparse Jacobian question:

  1. Try with a dense Jacobian first (or even without a Jacobian). This will give you a baseline for comparison of correctness and performance. Using sparse Jacobians will not always make your solver better/faster.
  2. The CSC sparse format consists of 3 vectors: one of length (n + 1) and two of length nnz. This sparsity pattern needs to be consistent over the whole simulation. A Jacobian may have non-structural zeros at times (i.e., values that just happen to be zero for a specific set of inputs). Taking the dense Jacobian and converting it to a sparse matrix eliminates such non-structural zeros, yielding an inconsistent sparsity pattern and causing the issues you experience.
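The non-structural-zero effect can be seen directly with scipy.sparse (a small sketch; the matrix values are made up for illustration): converting a dense array drops entries that merely happen to be zero, so nnz changes between calls.

```python
import numpy as np
from scipy.sparse import csc_matrix

J_full = np.array([[1.0, 0.0],
                   [2.0, 3.0]])  # structural pattern: 3 non-zero entries
J_zero = np.array([[1.0, 0.0],
                   [0.0, 3.0]])  # entry (1, 0) happens to be 0.0 this time

print(csc_matrix(J_full).nnz)  # -> 3
print(csc_matrix(J_zero).nnz)  # -> 2: the pattern shrank, so a solver
                               #    expecting a fixed nnz sees an inconsistency
```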

I hope this is helpful, let me know if there are any follow-up questions.

kmaitreys commented 7 months ago

@PeterMeisrimelModelon Thank you for such a detailed reply. I tried your methods, but still couldn't get it working. I will admit it's probably due to me not understanding exactly what I am doing.

However, I would like to have another go at it in the next few days. So, if possible, please keep this issue open so I can visit later and we can discuss further if necessary.

PeterMeisrimelModelon commented 2 months ago

Closing due to being resolved/inactivity. Please re-open/reply if otherwise.