In your `jacv` function, can you try using `return np.dot(Jac, v)` instead?
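For context, the argument order matters because `jacv` is expected to return the Jacobian*vector product, and `np.dot(v, Jac)` computes the transposed product instead. A minimal illustration with a small made-up matrix:

```python
import numpy as np

Jac = np.array([[1.0, 2.0],
                [3.0, 4.0]])
v = np.array([1.0, 0.0])

print(np.dot(Jac, v))  # Jac @ v            -> [1. 3.]  (the Jacobian*vector product jacv should return)
print(np.dot(v, Jac))  # v @ Jac == Jac.T @ v -> [1. 2.]  (transposed product, wrong for jacv)
```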
Hello,
I got this error,
```
Traceback (most recent call last):
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2023.2\plugins\python-ce\helpers\pydev\pydevconsole.py", line 364, in runcode
    coro = func()
  File "", line 1, in
```
Thanks for your help,
It could be that your perturbations in the Jacobian are too large or too small, depending on the size of your states; a fixed 1e-7 may not be suitable.
One common approach is `eps_i = sqrt(np.finfo(float).eps) * |y_i|`.
Hi,
Just for my understanding, something like this?
```python
def jacv(t, y, fy, v):
    Jac = np.empty([nsp, nsp])
    dy_perturbed_neg = f(t, y)
    for i in range(nsp):
        eps_i = np.sqrt(np.finfo(float).eps) * abs(y[i])
        y_perturbed = np.copy(y)
        y_perturbed[i] = y_perturbed[i] + eps_i
        dy_perturbed_pos = f(t, y_perturbed)
        Jac[:, i] = (dy_perturbed_pos - dy_perturbed_neg) / eps_i
    return np.dot(Jac, v)
```
Correct, this may need more adjustments/improvements in case you have very large and/or small y.
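A minimal sketch of such an adjustment, assuming the `f` and `nsp` from the code posted in this thread, with a floor (`y_floor`, a made-up per-problem constant) so that zero or very small components of y still get a nonzero perturbation:

```python
import numpy as np

def jacv(t, y, fy, v):
    # Finite-difference Jacobian, column by column.
    # fy is f(t, y) as passed in by CVode, so the unperturbed evaluation can be reused.
    y_floor = 1e-8                           # assumed smallest relevant magnitude of y_i
    sqrt_eps = np.sqrt(np.finfo(float).eps)
    Jac = np.empty((nsp, nsp))
    for i in range(nsp):
        eps_i = sqrt_eps * max(abs(y[i]), y_floor)
        y_perturbed = np.copy(y)
        y_perturbed[i] += eps_i
        Jac[:, i] = (f(t, y_perturbed) - fy) / eps_i
    return np.dot(Jac, v)
```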
Hi, I still get this error and it is very slow: assimulo.solvers.sundials.CVodeError: 'The solver took max internal steps but could not reach tout. At time 0.000008.'. Any suggestion how to improve? For info, Y is an array of 54x1 containing the temperature and the mass fractions of 53 species. At the beginning of the integration almost all the mass fractions are zero except the fuel, oxygen and nitrogen. Thanks a lot
With that system size, the default dense solver is likely your best option.
Sure! You are right. That was only an example, but I am using a more complex mechanism which contains more than 2000 species, and the "DENSE" option is not so fast. This is the reason for developing the code above using the Jacobian option and SPGMR as the linear solver. I would really appreciate it if you have any suggestion to improve the code.
Thanks a lot in advance
Some possible approaches:

- Preconditioning, see e.g., the cvode_with_preconditioning.py example.
- In the `jacv` method, you only need to implement the Jacobian*vector operation. With very structured systems and sufficient knowledge, this can be done without actually constructing a Jacobian matrix, possibly saving a lot of computational time.

Thanks a lot! I have a couple of questions, hope you can help. 1) Do you think preconditioning is faster than using a sparse linear solver or SPGMR? 2) Regarding epsilon, do you have suggestions on documentation for the epsilon selection? 3) What do you mean with your last point? I am not providing any vector in my code, so I don't know how to provide a better jac*vector implementation than what I wrote in the code. Apologies for my limited knowledge on the topic. Thanks in advance
With the dense solver, the Jacobian is provided via the `jac` function. With the SPGMR solver, the linear solver does not need the actual Jacobian matrix, but only a function for the Jacobian*vector operation. This is the `jacv` function. However, this does not require you as a user to implement the actual Jacobian and compute the matrix-vector product in this function. A common example of this is a Toeplitz-type matrix structure, commonly found in PDE discretizations, which leads to (very sparse) band-diagonal matrix structures. Using such techniques does require quite a lot of knowledge of your system structure, though. Also note that the `fy` argument to `jacv` corresponds to f(t, y), so you can save one f evaluation by using that one instead.
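One standard way to do this (a sketch only, not from this thread, assuming the right-hand-side function `f` and the `jacv` signature from the posted code): approximate the product J(t, y) @ v directly with a single directional finite difference, so only one extra f evaluation is needed per Jacobian*vector product instead of one per state component. The step-size heuristic below is just one common choice:

```python
import numpy as np

def jacv(t, y, fy, v):
    """Matrix-free approximation of J(t, y) @ v via a directional finite difference."""
    norm_v = np.linalg.norm(v)
    if norm_v == 0.0:
        return np.zeros_like(y)
    # Common heuristic: perturbation proportional to sqrt(machine eps) and the size of y.
    eps = np.sqrt(np.finfo(float).eps) * max(np.linalg.norm(y), 1.0) / norm_v
    # fy is f(t, y), already supplied by CVode, so no extra evaluation is needed for it.
    return (f(t, y + eps * v) - fy) / eps
```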
Hi,
thanks for your suggestions. I did some calculations comparing the linear-solver performance: "DENSE" vs "SPGMR" vs "SPGMR with preconditioning". Here are the results:
a) Dense solver: simulation time 2.7 seconds, number of steps 100.
b) SPGMR solver: no solution. The integration was running very slowly and I don't understand why I don't get any solution.
c) SPGMR with preconditioning: simulation time 480 s, number of steps >10000.
Questions:
1) Why is the dense solver option faster than SPGMR with preconditioning?
2) Why is SPGMR with preconditioning doing so many steps? (I have also run a case with an analytical Jacobian (not shown in the picture) and I was getting a similar issue: slow and too many steps.)
3) Why is SPGMR alone not converging?
Please also find attached the numerical codes used for running the comparison.
Thanks in advance for any help.
To quote the Section on preconditioning in the CVode documentation:
"If this linear system solve is done with one of the scaled preconditioned iterative linear solvers supplied with SUNDIALS, these solvers are rarely successful if used without preconditioning; it is generally necessary to precondition the system in order to obtain acceptable efficiency."
Generally speaking (without having checked exactly what CVode does), SPGMR is an iterative linear solver and will not find the exact solution to the linear system (like LU decomposition does), but rather a sufficiently good approximation. This can affect convergence rates when solving the non-linear systems and thus slow down overall simulation time.
I do not see any obvious mistake in your codes, and the fact that preconditioning improves the simulation time and yields a solution at all certainly means you are on the right path.
However, given the current numbers, it seems questionable if you can actually obtain a speed-up using the SPGMR solver for your given problem.
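A small, self-contained illustration of that point with SciPy (not the Assimulo setup from this thread): an iterative Krylov solver such as GMRES stops once a residual tolerance is met, so its answer differs slightly from a direct LU solve:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, splu

# Made-up tridiagonal test system, for illustration only.
n = 200
A = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

x_direct = splu(A).solve(b)   # direct sparse LU: exact up to round-off
x_iter, info = gmres(A, b)    # iterative: stops once the residual tolerance is reached

print(info)                               # 0 means the tolerance was met
print(np.linalg.norm(x_iter - x_direct))  # small, but not zero
```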
Thanks for the clarification regarding the SPGMR linear solver.
Apologies, I forgot to mention that for this example y is an array of 700x1. In this case the dense option is quite fast compared to SPGMR + preconditioning, and I don't see why the latter is slower and needs so many more steps.
However, that was only an example to make a reasonable comparison; I need to use a more complex system with more than 2000 variables. For that case the dense option will be very slow and will take more than roughly 120 s to get a solution. This is the reason why I wanted to develop the code with SPGMR with preconditioning, to speed up the solution.
Thanks in advance,
While 2000 variables is not small, it may very well still be below the size threshold where e.g., SPGMR would be faster, even with a better preconditioner. However, this is very much problem specific.
If you had structural information and a reasonably sparse Jacobian, using sparse LU decomposition could give you considerable speed-ups, though.
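A rough illustration of why structure helps (a sketch with a made-up banded matrix, echoing the band-diagonal example mentioned above; a real kinetic Jacobian with a less regular sparsity pattern will gain much less):

```python
import time
import numpy as np
from scipy.sparse import diags
from scipy.linalg import lu_factor
from scipy.sparse.linalg import splu

# Tridiagonal "Jacobian" of the size mentioned in the thread, for illustration only.
n = 2000
J_sparse = diags([1.0, -2.0, 1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
J_dense = J_sparse.toarray()

t0 = time.perf_counter(); lu_factor(J_dense); t_dense = time.perf_counter() - t0
t0 = time.perf_counter(); splu(J_sparse);     t_sparse = time.perf_counter() - t0
print(f"dense LU: {t_dense:.4f} s, sparse LU: {t_sparse:.4f} s")
```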
Not a big time saving with the sparse option, considering the whole system (2000 variables): from 120 s to 75 s. I'll continue to investigate why SPGMR + preconditioning is so slow. In case you have any other suggestions, I'm happy to hear them.
Don't have anything else to add for the time being, good luck with your investigation.
Hi all,
I installed assimulo to integrate a large kinetic mechanism with CVode.
To get familiarity with assimulo, I created a dummy example to couple assimulo and cantera using a methane kinetic mechanism. Everything seems to work fine if I use 'DENSE' as the linear solver, but when I switch to 'SPGMR' (even if I set up everything as in the example cvode_with_jac_spgmr.py) the solution fails because assimulo doesn't seem to integrate the ODE system correctly. In the attachment is the comparison of the dense vs SPGMR solver, and here below is my code. Am I doing anything wrong?
Thanks a lot for any comments and help,
```python
import numpy as np
import cantera as ct
from assimulo.solvers import CVode
from assimulo.problem import Explicit_Problem

gasconstant = 8314.46261815324

def f(t, y):
    gas.set_unnormalized_mass_fractions(y[1:])
    print(y[0])
    gas.TD = y[0], rho1
    wdot = gas.net_production_rates
    dYdt = wdot * Wk / rho1
    dTdt = -(np.dot(gas.partial_molar_enthalpies - np.ones(gas.n_species) * gasconstant * y[0], wdot) / (rho1 * gas.cv))
    yd_v = np.zeros(nsp)
    yd_v[0] = dTdt
    yd_v[1:] = dYdt
    return yd_v

def jacv(t, y, fy, v):
    eps1 = 1e-7
    Jac = np.empty([nsp, nsp])
    dy_perturbed_neg = f(t, y)
    for i in range(nsp):
        y_perturbed = np.copy(y)
        y_perturbed[i] = y_perturbed[i] + eps1
        dy_perturbed_pos = f(t, y_perturbed)
        Jac[:, i] = (dy_perturbed_pos - dy_perturbed_neg) / (1 * eps1)
    return np.dot(v, Jac)

gas = ct.Solution('gri30.yaml')
gas.TP = 900, 50*1e5
gas.set_equivalence_ratio(1, 'CH4:1', 'O2:1,N2:3.76')

Wk = gas.molecular_weights
rho1 = gas.density
y0 = np.hstack((gas.T, gas.Y))
nsp = len(y0)

exp_mod = Explicit_Problem(f, y0, name='Example using the Jacobian Vector product')
exp_mod.jacv = jacv                # Sets the Jacobian*vector product
exp_sim = CVode(exp_mod)           # Create a CVode solver

exp_sim.iter = 'Newton'            # Default 'FixedPoint'
exp_sim.discr = 'BDF'              # Default 'Adams'
exp_sim.atol = 1e-10               # Default 1e-6
exp_sim.rtol = 1e-4                # Default 1e-6
exp_sim.linear_solver = 'spgmr'    # Change linear solver
exp_sim.options["usejac"] = True

t, y = exp_sim.simulate(1, 1000)   # Simulate to t = 1 s with 1000 communication points

import matplotlib
matplotlib.use('Qt5Agg')
import matplotlib.pyplot as plt

plt.plot(t, y[:, 0], linestyle="dashed", marker="o")  # Plot the solution (temperature)
plt.xlim(0, 0.5)
plt.ylim(800, 3000)
plt.show()
```
assimulo_jac_spgmr.txt