pybamm-team / PyBaMM

Fast and flexible physics-based battery models in Python
https://www.pybamm.org/

Casadi vertcat is slow when using a large number of time steps #1996

Open valentinsulzer opened 2 years ago

valentinsulzer commented 2 years ago

When using many time steps, integrating the model doesn't take too long, but creating the solution takes a long time:

import pybamm
import numpy as np

pybamm.set_logging_level("INFO")

model = pybamm.lithium_ion.DFN()  # options={"particle": "quadratic profile"}

param = model.default_parameter_values
# param["Negative electrode diffusivity [m2.s-1]"] = pybamm.InputParameter("Ds_n")
# param["Positive electrode diffusivity [m2.s-1]"] = pybamm.InputParameter("Ds_p")
# param["Electrolyte diffusivity [m2.s-1]"] = pybamm.InputParameter("De")

param["Current function [A]"] = param['Nominal cell capacity [A.h]']/10

solver = pybamm.CasadiSolver("fast")
simulation = pybamm.Simulation(model, parameter_values=param, solver=solver)

t_max = 60 * 60 * 10  # 10 hours of simulated time [s]
t_eval = np.linspace(0, t_max, num=t_max // 2)  # 18000 output points, ~2 s apart

simulation.solve(t_eval)

Timings are as follows:

Set-up time: 71.993 ms, Solve time: 2.827 s (of which integration time: 127.908 ms), Total time: 2.899 s

Most of the additional solve time is being spent in this vertcat https://github.com/pybamm-team/PyBaMM/blob/23157aebce218444edc83b525dfb2c7fc8637598/pybamm/solvers/casadi_solver.py#L697

It's strange because the solver creates the solution arrays really fast, but then the vertcat alone takes a long time. Maybe there is a faster way to do the vertcat?
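
For reference, here is a standalone sketch to check whether the vertcat call itself is the slow step. The array shapes are made-up guesses at the DFN problem size, not the solver's actual arrays:

import time

import casadi
import numpy as np

# made-up sizes: ~1000 states over ~18000 output times
n_states, n_steps = 1000, 18000
y_diff = casadi.DM(np.random.rand(n_states - 100, n_steps))  # differential states
y_alg = casadi.DM(np.random.rand(100, n_steps))  # algebraic states

t0 = time.perf_counter()
y_cas = casadi.vertcat(y_diff, y_alg)  # stacking as CasADi matrices
print(f"casadi.vertcat: {time.perf_counter() - t0:.3f} s")

t0 = time.perf_counter()
y_np = np.vstack([y_diff.full(), y_alg.full()])  # same stacking via numpy
print(f"numpy vstack:   {time.perf_counter() - t0:.3f} s")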

dion-w commented 2 years ago

From the CasADi documentation I found: "Concatenation means stacking matrices horizontally or vertically. Due to the column-major way of storing elements in CasADi, it is most efficient to stack matrices horizontally." Also, someone in this forum gave a suggestion (in their second point) about a possible speed-up instead of using vertcat. I have no experience with CasADi whatsoever.
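
As a rough illustration of that suggestion (sizes made up, not tested against PyBaMM's actual solution arrays): stack the transposes horizontally and transpose back, which gives the same result as vertcat. Whether it is actually faster here would need timing, since the transposes themselves copy data:

import casadi
import numpy as np

a = casadi.DM(np.random.rand(2000, 500))
b = casadi.DM(np.random.rand(3000, 500))

y_vert = casadi.vertcat(a, b)        # row-wise stacking, as in the solver
y_horz = casadi.horzcat(a.T, b.T).T  # column-wise stacking of the transposes,
                                     # then transpose back

assert np.allclose(y_vert.full(), y_horz.full())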

valentinsulzer commented 2 years ago

@dion-w I don't really understand the solutions above ...

One idea I had is to create an object pybamm.no_memory_vertcat(a, b) which stores the two vectors and can then be indexed on demand, taking the relevant index from a or b (with an offset) as needed.
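
A minimal numpy sketch of what such a wrapper could look like (no_memory_vertcat does not exist in PyBaMM; this hypothetical version only handles integer row indexing):

import numpy as np

class NoMemoryVertcat:
    # Stores the two arrays and resolves indices on access instead of
    # copying them into one concatenated array.
    def __init__(self, a, b):
        self.a = a
        self.b = b
        self.split = a.shape[0]
        self.shape = (a.shape[0] + b.shape[0],) + a.shape[1:]

    def __getitem__(self, idx):
        # slices spanning the boundary would need to be split into two
        # pieces and stitched together; only integers are handled here
        if idx < self.split:
            return self.a[idx]
        return self.b[idx - self.split]

a = np.arange(6).reshape(3, 2)
b = np.arange(100, 104).reshape(2, 2)
y = NoMemoryVertcat(a, b)
print(y[0], y[3])  # rows 0-2 come from a, rows 3-4 from b (shifted)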

rtimms commented 19 hours ago

@MarcBerliner is this still a bottleneck? Not sure if your recent changes cover this, or if it will be fixed when we switch to IDAKLU as the default.

MarcBerliner commented 10 hours ago

> Is this still a bottleneck? Not sure if your recent changes cover this, or if it will be fixed when we switch to IDAKLU as the default.

@rtimms the timings still look about the same for the CasadiSolver, but this is not an issue for the IDAKLUSolver, since we store everything directly as numpy arrays, which handle concatenation better. Here are the IDAKLUSolver timings:

Set-up time: 43.910 ms, Solve time: 126.159 ms (of which integration time: 125.623 ms), Total time: 170.069 ms

Updated code for timing the IDAKLUSolver (the model and param setup from the script above are reused):

solver = pybamm.IDAKLUSolver()
simulation = pybamm.Simulation(model, parameter_values=param, solver=solver)

t_max = 60*60*10
t_eval = np.linspace(0, t_max, num=(t_max//2))

simulation.solve([0, t_max], t_interp=t_eval)

rtimms commented 5 hours ago

Great, thanks! I'll leave this open for now, but it's not a priority since we will eventually drop the CasADi solver anyway.

MarcBerliner commented 3 hours ago

> Great, thanks! I'll leave this open for now, but it's not a priority since we will eventually drop the CasADi solver anyway.

Sure. I don't think it's a difficult fix if we ever want/need to patch this.