Closed: TomTranter closed this issue 3 years ago.
@TomTranter, it would be cool to make something like this a supported feature in core pybamm, for parameter sweeps. I'm thinking we could pass a list of input dictionaries to the solver, along with a number of cores to use (`n_cores=-1` to use all available cores); the solver would then automatically call `self._integrate`, looping over all the input dictionaries in the list in parallel using `n_cores`, and return a list of solutions.
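A minimal sketch of what that interface could look like, assuming a process pool maps a worker over the input dictionaries. The names `solve_all` and `_solve_one`, and the toy "solution" formula, are placeholders, not PyBaMM's actual API:

```python
# Hypothetical sketch of the proposed interface: map a list of input
# dictionaries over a process pool and collect one solution per dict.
# The worker and its formula are stand-ins, not real PyBaMM code.
import os
from multiprocessing import Pool

def _solve_one(inputs):
    # stand-in for self._integrate(model, t_eval, inputs=inputs)
    return {"inputs": inputs, "solution": 2.0 * inputs["Current [A]"]}

def solve_all(input_list, n_cores=-1):
    n = os.cpu_count() if n_cores == -1 else n_cores
    with Pool(processes=n) as pool:
        return pool.map(_solve_one, input_list)

if __name__ == "__main__":
    sols = solve_all([{"Current [A]": c} for c in (1.0, 1.5, 2.0)], n_cores=2)
    print([s["solution"] for s in sols])  # -> [2.0, 3.0, 4.0]
```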
Yes, happy to discuss on Monday and work on it next week.
Just having a look into this. Presumably the model should be set up and discontinuities calculated for each input dictionary as well?
Model setup should be independent of the input values (i.e. you can reuse a set-up model even if the input values change). Not sure about the discontinuities; it's possible that they would need to be recomputed if input parameters appear in the timescale or in the values specified by the discontinuities.
`base_solver.set_up` takes the inputs as a keyword argument though? So I suppose in some cases the setup step depends on the inputs? The `inputs` dict is used in the `set_up` method when calculating initial conditions:
```python
# Process initial conditions
initial_conditions = process(
    model.concatenated_initial_conditions,
    "initial_conditions",
    use_jacobian=False,
)[0]
init_eval = InitialConditions(initial_conditions, model)
# ....
model.y0 = init_eval(inputs)
```
But I'm having a difficult time understanding whether the return value of `init_eval` does indeed depend on the contents of `inputs`. This `inputs` dict is also passed to `Symbol.evaluate` when evaluating time and length scales, but since these are scalars this has no effect on the return value.
`init_eval` is a function which takes the inputs and returns the initial conditions given those inputs. So it should depend on the contents of `inputs` in general, but only on the inputs that appear in the initial conditions (if any). It only gets created once, in `set_up`, and then gets called in `_set_initial_conditions` whenever the model is solved.
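As a toy illustration of that build-once, evaluate-many pattern (plain Python, not PyBaMM internals; the names and formula are made up):

```python
# Toy analogue of init_eval (not PyBaMM code): the callable is created once
# at "set_up" time, closing over an expression, and the input values are
# only substituted when it is called.
def make_init_eval(expr):
    def init_eval(inputs):
        return expr(inputs)
    return init_eval

# built once, during setup
init_eval = make_init_eval(lambda inputs: 0.5 * inputs.get("x0", 1.0))

# same callable, different inputs -> different initial conditions
print(init_eval({"x0": 2.0}))  # -> 1.0
print(init_eval({}))           # -> 0.5 (no relevant input: constant result)
```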
I think the line `model.y0 = init_eval(inputs)` in `set_up` could possibly be deleted, since the same line appears in `_set_initial_conditions`, but I'm not sure.
There is also some complexity introduced by the fact that you can give empty inputs to `model.solve` when using a CasADi solver, in which case it will return a `ProcessedSymbolicVariable` which can be evaluated with any inputs.
Can this be closed following #1261?
I think yes, as the functionality described by @TomTranter was integrated into PyBaMM in #1261.
Summary: Doing many 1D models in parallel is a good way to solve my ECM problem and also to run parametric sweeps.
Motivation: The problem when you have hundreds or thousands of tasks becomes sending the data back and forth, especially if you step the solutions a little at a time and make adjustments on the fly, as I am doing. I think this can be addressed by using shared memory. I found a package that only works on Linux (or maybe POSIX systems) which seems to give great performance.
The script below demonstrates stepping 1300 SPMs with slightly different applied currents using shared arrays and a `ProcessPoolExecutor`. There is also a commented-out line that evaluates the terminal voltage; this slows things down a lot and highlights that evaluations are in need of vectorization.
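A minimal sketch of the shared-array pattern (using the stdlib `multiprocessing.shared_memory`, Python 3.8+, rather than the package mentioned above): workers write results directly into a shared buffer instead of pickling solutions back to the parent. The "solver step" here is a made-up formula standing in for stepping an SPM:

```python
# Sketch of the shared-memory pattern for stepping many models in parallel.
# Each worker attaches to the shared buffer by name and writes its result
# in place; the voltage formula is a stand-in, not PyBaMM.
from concurrent.futures import ProcessPoolExecutor
from multiprocessing import shared_memory
import numpy as np

N = 8  # number of models (1300 in the original experiment)

def step_one(args):
    shm_name, i, current = args
    shm = shared_memory.SharedMemory(name=shm_name)
    voltages = np.ndarray((N,), dtype=np.float64, buffer=shm.buf)
    voltages[i] = 4.0 - 0.1 * current  # stand-in for solver.step(...)
    shm.close()

def run():
    shm = shared_memory.SharedMemory(create=True, size=N * 8)
    voltages = np.ndarray((N,), dtype=np.float64, buffer=shm.buf)
    currents = np.linspace(1.0, 2.0, N)  # slightly different applied currents
    with ProcessPoolExecutor() as ex:
        list(ex.map(step_one, [(shm.name, i, c) for i, c in enumerate(currents)]))
    result = voltages.copy()  # copy out before releasing the shared block
    shm.close()
    shm.unlink()
    return result

if __name__ == "__main__":
    print(run())
```

The design point is that only the small `(name, index, current)` tuples cross the process boundary; the solution arrays never do.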