esa / pygmo2

A Python platform to perform parallel computations of optimisation tasks (global and local) via the asynchronous generalized island model.
https://esa.github.io/pygmo2/
Mozilla Public License 2.0

[New user question] Using autograd for derivatives #93

Closed · a-jp closed this 2 years ago

a-jp commented 2 years ago

Hi,

When using SciPy's SLSQP directly, I use the autograd Python package to compute derivatives. When using SLSQP through pygmo (or indeed ipopt through pygmo), is it possible to use autograd to compute the derivatives rather than the numerical differentiation provided? If so, would it be possible to have a small working example showing how to do that?

Many Thanks, Andy

a-jp commented 2 years ago

Would anyone be able to help with this?

darioizzo commented 2 years ago

Hey, yes, you can pass gradients and Hessians to the algorithms that make use of them. Just add the necessary methods to your UDP; it's that simple.

For example have a look at the code here: https://esa.github.io/pygmo2/tutorials/nlopt_basics.html

In particular, look at the section "I do not have the gradient".

I know it's confusing, since in your case you do have the gradient, but bear with me. In that example:

    def gradient(self, x):
        # numerically estimate the gradient with finite differences
        return pg.estimate_gradient_h(lambda x: self.fitness(x), x)

we have added a method called gradient to the UDP, and we return a numerical estimate of the gradient there.

If you do have the exact gradient (for example, computed via autodiff), you can simply return it there instead!
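For completeness, here is a minimal sketch of that idea (the class name, objective, and bounds are hypothetical, not from our docs): a box-bounded sphere problem whose gradient method returns the exact gradient computed by autograd, solved with SLSQP via NLopt. The only requirement is that the objective is written with autograd's wrapped numpy so it can be traced.

    # minimal sketch: a UDP whose gradient comes from autograd (hypothetical example)
    import autograd.numpy as anp   # numpy wrapper that autograd can differentiate
    from autograd import grad
    import pygmo as pg

    class sphere_autograd:
        def _obj(self, x):
            # objective written with autograd's numpy so it is traceable
            return anp.sum(x ** 2)

        def fitness(self, x):
            # pygmo expects a sequence of fitness values
            return [self._obj(x)]

        def gradient(self, x):
            # exact gradient via autograd's reverse-mode autodiff,
            # replacing the numerical pg.estimate_gradient_h
            return grad(self._obj)(x)

        def get_bounds(self):
            return ([-1.0] * 3, [1.0] * 3)

    prob = pg.problem(sphere_autograd())
    algo = pg.algorithm(pg.nlopt("slsqp"))
    pop = pg.population(prob, size=1)
    pop = algo.evolve(pop)
    print(pop.champion_f)  # should approach [0.0]

Note that pygmo expects the gradient flattened over all fitness components; with a single objective and no constraints, that is just the objective gradient, as above. For a constrained problem you would concatenate the objective gradient with the constraint gradients, in the same order as the values returned by fitness.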