Open valentinsulzer opened 1 year ago
Also just found out about https://pymoo.org/
@brosaplanella I can look into this if it is wanted?
Yes, that would be great, thanks!
It looks like nlopt isn’t going to be a simple install; it requires a working cmake toolchain. Do we want to add the extra complexity?
Okay: steps for making nlopt build (M-series Macs at least), following this SO post:

1. `source .venv/bin/activate`
2. `which python` to see the Python interpreter path for pbparam’s env, or alternatively the Python env you want to use pbparam in. Once you have confirmed the correct Python interpreter, set an environment variable with `PYTHON_INTERP_PATH=$(which python)`.
3. `cmake -DNLOPT_GUILE=OFF -DNLOPT_MATLAB=OFF -DNLOPT_OCTAVE=OFF -DNLOPT_TESTS=OFF -DPYTHON_EXECUTABLE=$PYTHON_INTERP_PATH .` You should see the logs as shown in the SO post.
4. `make` to compile.
5. `make install` to install the bindings to Python.
6. `pip3 install -e ./` in the usual way. If using the pbparam package you can run `pip3 install pbparam`.
Clearly this isn’t a workable installation process for a deployed package, so we won’t be merging this branch until it is fixed. Linking the bug https://github.com/DanielBok/nlopt-python/issues/13 to track whether this gets resolved.
The above issue implements the nlopt fix in a handy script.
That's great! It's nice to have this script.
Latest commit on this issue branch has an implementation but it isn't working yet.
I am unsure if this will work, as nlopt seems not to like the structure we have around the objective_function. I have one more idea to try tomorrow but don’t want to get into a sunk cost fallacy and put good time after bad.
This goes for NLopt. I will also have a look into pymoo.
I managed to ascertain what the issue was.
NLopt expects a grad argument in the Python objective function. Strictly this is bad, as it is unused here, but I assume it comes from some pointer handling deeper in NLopt. NLopt is also very picky about the return type of the objective function. In the NLopt wrapper class I have made a function decorator (called wrapper) in the run optimiser. This makes NLopt play nicely with our other functions: the decorated function can take as many arguments as needed, and the decorator simply strips off the last one, so the optimisation problem gets the expected number of arguments while NLopt can still do whatever pointer magic it is doing with grad (the final argument). We also cast the result to np.float64, which is the return type NLopt expects; numpy will often use smaller types where possible, but then NLopt hiccups.
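A minimal sketch of that decorator pattern (hypothetical names and a toy objective; the real wrapper lives in the NLopt wrapper class on the branch):

```python
import numpy as np

def strip_grad(objective):
    """Hypothetical decorator: drop NLopt's trailing grad argument and
    cast the result to the np.float64 return type NLopt expects."""
    def wrapper(*args):
        *fn_args, _grad = args  # grad is unused here but NLopt always passes it
        return np.float64(objective(*fn_args))
    return wrapper

@strip_grad
def objective_function(x):
    # Toy stand-in for the real optimisation problem
    return ((x - 3.0) ** 2).sum()

# NLopt-style call signature: (x, grad)
value = objective_function(np.array([1.0, 2.0]), np.empty(0))
print(value)  # → 5.0, as an np.float64
```

The decorator keeps the underlying objective oblivious to NLopt's calling convention, so the same function can still be passed to other optimisers unchanged.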
I think we should shelve this branch as an example of how it’s done, but also of why we don’t do it: between the Mac install and the fragility of nlopt, I think it’s a time bomb. This is of course unless there is a benefit to nlopt that outweighs these concerns. In the notebooks here it performs no better than scipy’s minimize.
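For reference, the scipy baseline being compared against is roughly the following (a sketch with a toy objective, not the pbparam notebook code):

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Toy stand-in for the pbparam cost function
    return float(((x - 3.0) ** 2).sum())

# Derivative-free Nelder-Mead, the obvious like-for-like comparison
result = minimize(objective, x0=np.zeros(2), method="Nelder-Mead")
print(result.x)  # converges close to [3, 3]
```

No grad argument, no strict return-type requirement, and no extra build step, which is most of the argument against nlopt here.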
Sounds good! Let's keep this branch as an example on how to do it and move on to PyMOO and see if it is better.
NLopt is an optimization library with lots of different optimizers. It has a Python package on pip: https://nlopt.readthedocs.io/en/latest/NLopt_Tutorial/#example-in-python
The following code works well and quickly: