IMTEK-Simulation / NuMPI

Utilities for MPI-parallel numerical calculations with Python
MIT License

scipy.optimize.minimize not compatible with MPI Parallelization #13

Closed sannant closed 5 years ago

sannant commented 5 years ago

In scipy.optimize.minimize, when we provide a single function that computes the function value and the Jacobian together, scipy caches the value:

elif not callable(jac):
    if bool(jac):
        fun = MemoizeJac(fun)
        jac = fun.derivative
    else:
        jac = None
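For reference, this code path is triggered by passing jac=True to minimize together with a callable that returns the pair (value, gradient). A minimal sketch (function and starting point chosen arbitrarily for illustration):

```python
import numpy as np
from scipy.optimize import minimize

def fun_and_grad(x):
    # Returns (function value, gradient). With jac=True, scipy wraps this
    # in MemoizeJac and uses fun.derivative as the jacobian callable.
    return x @ x, 2 * x

res = minimize(fun_and_grad, x0=np.array([1.0, 2.0]),
               jac=True, method='L-BFGS-B')
```

In serial this works as intended; the issue below only appears once x is distributed across MPI ranks.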

If we look at MemoizeJac:

class MemoizeJac(object):
    """ Decorator that caches the value gradient of function each time it
    is called. """
    def __init__(self, fun):
        self.fun = fun
        self.jac = None
        self.x = None

    def __call__(self, x, *args):
        self.x = numpy.asarray(x).copy()
        fg = self.fun(x, *args)
        self.jac = fg[1]
        return fg[0]

    def derivative(self, x, *args):
        if self.jac is not None and numpy.all(x == self.x):
            return self.jac
        else:
            self(x, *args)
            return self.jac

We see that the Jacobian is not recomputed if the local x didn't change. But each rank only compares its own slice of x, which may have changed on other processors. This leads to a deadlock: ranks that detect a change re-enter the function, and hence its MPI collectives, while ranks that don't simply return the cached Jacobian.
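The caching behaviour at the root of the problem can be demonstrated in serial, without MPI. The sketch below re-implements the MemoizeJac logic quoted above and counts how often the underlying function actually runs; derivative skips the call entirely on a cache hit, which is exactly the call a rank with an unchanged local x would skip in a parallel run:

```python
import numpy

class MemoizeJac:
    # Same caching logic as scipy's MemoizeJac quoted above.
    def __init__(self, fun):
        self.fun = fun
        self.jac = None
        self.x = None

    def __call__(self, x, *args):
        self.x = numpy.asarray(x).copy()
        fg = self.fun(x, *args)
        self.jac = fg[1]
        return fg[0]

    def derivative(self, x, *args):
        if self.jac is not None and numpy.all(x == self.x):
            return self.jac
        else:
            self(x, *args)
            return self.jac

calls = []
def fun_and_grad(x):
    # In an MPI run this function would contain collectives
    # (e.g. an Allreduce to sum the energy over all ranks).
    calls.append(x.copy())
    return x @ x, 2 * x

memo = MemoizeJac(fun_and_grad)
x = numpy.array([1.0, 2.0])
memo(x)                                    # fun_and_grad runs
memo.derivative(x)                         # cache hit: fun_and_grad is NOT re-entered
assert len(calls) == 1
memo.derivative(numpy.array([3.0, 4.0]))   # cache miss: fun_and_grad runs again
assert len(calls) == 2
```

If the cache-hit/cache-miss decision differs across ranks, only some of them reach the collectives inside fun_and_grad, and the program hangs.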

If the implementation calls fun(x) before each jac(x), this is not a problem, because __call__ always recomputes and refreshes the cache on every rank.
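Another way out would be to make the cache-invalidation decision collective, so that all ranks recompute (or not) in lockstep. This is a hypothetical sketch, not part of NuMPI; it assumes an mpi4py-style communicator and uses a logical-OR allreduce on the "did my local x change?" flag:

```python
import numpy as np

class MPIMemoizeJac:
    """Caches (value, gradient) like scipy's MemoizeJac, but decides
    *collectively* whether x changed, so every rank takes the same branch."""

    def __init__(self, fun, comm=None):
        self.fun = fun
        self.comm = comm  # mpi4py communicator, or None for serial use
        self.jac = None
        self.x = None

    def _x_changed(self, x):
        changed = self.jac is None or not np.array_equal(x, self.x)
        if self.comm is not None:
            # If the local slice changed on ANY rank, all ranks recompute.
            from mpi4py import MPI
            changed = self.comm.allreduce(changed, op=MPI.LOR)
        return changed

    def __call__(self, x, *args):
        self.x = np.asarray(x).copy()
        f, g = self.fun(x, *args)
        self.jac = g
        return f

    def derivative(self, x, *args):
        if self._x_changed(x):
            self(x, *args)  # all ranks re-enter fun (and its collectives) together
        return self.jac

# Quick serial check (comm=None): behaves like MemoizeJac.
n_calls = [0]
def fun_and_grad(x):
    n_calls[0] += 1
    return x @ x, 2 * x

memo = MPIMemoizeJac(fun_and_grad)
x = np.array([1.0, 2.0])
f = memo(x)
g = memo.derivative(x)  # cache hit: no recomputation
```

This avoids the deadlock at the price of one extra allreduce per Jacobian call.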

sannant commented 5 years ago

We will not support the use of the scipy.optimize.minimize interface on multiple processors.