paramm-team / pybamm-param

Parameter optimisation for PyBaMM.
BSD 3-Clause "New" or "Revised" License

Add NLopt as an optimizer #30

Open valentinsulzer opened 1 year ago

valentinsulzer commented 1 year ago

NLopt is an optimization library with many different optimizers. It has a Python package on pip: https://nlopt.readthedocs.io/en/latest/NLopt_Tutorial/#example-in-python

The following code works well and quickly:

import nlopt
import numpy as np
import pybamm

model = pybamm.lithium_ion.SPM()
parameter_values = pybamm.ParameterValues("Chen2020")
parameter_values.update(
    {"Current function [A]": pybamm.InputParameter("I")}
)
sim = pybamm.Simulation(model, parameter_values=parameter_values)

# Reference ("data") solution at I = 5 A
sol = sim.solve([0, 3600], inputs={"I": 5})["Terminal voltage [V]"].data

def myfunc(x, grad):
    # grad is unused (BOBYQA is derivative-free) but NLopt requires it
    # in the signature. Pad the output with the 2.5 V cut-off in case
    # the simulation terminates early and returns fewer points.
    output = 2.5 * np.ones_like(sol)
    new_sol = sim.solve([0, 3600], inputs={"I": x[0]})["Terminal voltage [V]"].data
    output[: len(new_sol)] = new_sol
    print(x)  # track the optimizer's progress
    return np.sum((output - sol) ** 2)

opt = nlopt.opt(nlopt.LN_BOBYQA, 1)
opt.set_min_objective(myfunc)
opt.set_xtol_rel(1e-4)
x = opt.optimize([4])
minf = opt.last_optimum_value()
print("optimum at ", x[0])
print("minimum value = ", minf)
print("result code = ", opt.last_optimize_result())
valentinsulzer commented 1 year ago

Also just found out about https://pymoo.org/

PipGrylls commented 10 months ago

@brosaplanella I can look into this if it is wanted?

brosaplanella commented 10 months ago

Yes, that would be great, thanks!

PipGrylls commented 10 months ago

It looks like nlopt isn’t going to be a simple install: it requires CMake to build from source. Do we want to add the extra complexity?

PipGrylls commented 10 months ago

Okay: steps for building nlopt (on M-series Macs at least), following this SO post

  1. Download and unpack the latest nlopt release from the homepage, in a folder where you can run sudo commands.
  2. In a terminal, navigate to the pbparam project folder, or to the project in which you are using pbparam.
  3. Activate the pbparam environment (or other env), usually `source .venv/bin/activate`.
  4. Run `which python` to find the interpreter path for the environment you want to use pbparam in. Once you have confirmed it is the correct interpreter, store it in an environment variable: `PYTHON_INTERP_PATH=$(which python)` (note the `$(...)` command substitution; `${which python}` will not work).
  5. Navigate to the unpacked nlopt source and run `cmake -DNLOPT_GUILE=OFF -DNLOPT_MATLAB=OFF -DNLOPT_OCTAVE=OFF -DNLOPT_TESTS=OFF -DPYTHON_EXECUTABLE=$PYTHON_INTERP_PATH`. You should see logs like those shown in the SO post.
  6. Run `make` to compile.
  7. Run `make install` to install the Python bindings.
  8. If working on pbparam development you can now run `pip3 install -e ./` in the usual way. If using the pbparam package, you can run `pip3 install pbparam`.
PipGrylls commented 10 months ago

Clearly this isn’t a workable installation process for a deployed package, so we won’t be merging this branch until it is fixed. Linking the bug https://github.com/DanielBok/nlopt-python/issues/13 to track whether it gets fixed.

PipGrylls commented 10 months ago

The issue linked above implements the nlopt fix in a handy script.

muhammedsogut commented 10 months ago

That's great! It's nice to have this script.

PipGrylls commented 10 months ago

Latest commit on this issue branch has an implementation but it isn't working yet.

PipGrylls commented 10 months ago

I am unsure whether this will work, as nlopt seems not to like the structure we have around the objective_function. I have one more idea to try tomorrow, but I don’t want to fall into a sunk-cost fallacy and put good time after bad.

This goes for nlopt; I will also have a look into pymoo.

PipGrylls commented 10 months ago

I managed to ascertain what the issue was.

NLopt expects a `grad` argument in the Python objective function. Strictly this is bad, since it is unused for derivative-free algorithms, but I assume it comes from some pointer handling deeper in NLopt. It is also very picky about the return type of the objective function.

In the NLopt wrapper class I have made a function decorator (called `wrapper`) in the run-optimiser method. This makes NLopt play nicely with our other functions: the decorated function can take as many arguments as needed, and the decorator simply strips off the last one, so the optimisation problem receives the expected number of arguments while the NLopt library can still do whatever pointer magic it is doing with `grad` (the final argument). We also cast the result to `np.float64`, which is the return type NLopt expects; NumPy will usually optimise to smaller types where possible, but then NLopt hiccups.

I think we should shelve this branch as an example of how it’s done, and of why we don’t do it: between the Mac install and the fragility of nlopt, I think it’s a time bomb. This is of course unless there is a benefit to nlopt that outweighs these concerns. In the notebooks here it performs no better than scipy’s minimize.
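The decorator described above can be sketched roughly like this. This is an illustrative reconstruction, not the actual pbparam code: the names `strip_grad` and `objective` are made up, and only NumPy is needed to demonstrate the pattern.

```python
import numpy as np

def strip_grad(func):
    """Adapt a plain objective to NLopt's (x, grad) calling convention.

    NLopt calls the objective as f(x, grad); for derivative-free
    algorithms grad is unused, so we drop the final argument before
    calling the wrapped function, and cast the result to np.float64,
    the return type NLopt expects.
    """
    def wrapped(*args):
        # Strip the trailing grad argument, keep everything else.
        result = func(*args[:-1])
        return np.float64(result)
    return wrapped

# A simple least-squares-style objective that knows nothing about
# NLopt's calling convention.
def objective(x):
    return ((x - 3.0) ** 2).sum()

nlopt_objective = strip_grad(objective)

# NLopt would call it as f(x, grad); grad (here None) is ignored.
value = nlopt_objective(np.array([1.0, 2.0]), None)
print(type(value).__name__, value)  # float64 5.0
```

This keeps the NLopt-specific quirks (the dummy `grad` argument and the strict return type) out of the objective function itself, so the same objective can be passed unchanged to other optimisers.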

brosaplanella commented 10 months ago

Sounds good! Let's keep this branch as an example of how to do it, and move on to pymoo to see if it is better.