joshspeagle / dynesty

Dynamic Nested Sampling package for computing Bayesian posteriors and evidences
https://dynesty.readthedocs.io/
MIT License

pool failing #124

Closed. grburgess closed this issue 5 years ago.

grburgess commented 5 years ago

I'm getting the following error when trying to use an MPI pool:

    [0:apply]: TypeError: __cinit__() takes exactly 1 positional argument (0 given)
    [1:apply]: TypeError: __cinit__() takes exactly 1 positional argument (0 given)
    [2:apply]: TypeError: __cinit__() takes exactly 1 positional argument (0 given)
    [3:apply]: TypeError: __cinit__() takes exactly 1 positional argument (0 given)

It could be because my prior and likelihood are defined as callbacks within a class (see the code and the pickling check below).


    import numpy as np  # needed for np.empty_like in the prior transform below

    def _construct_dynesty_posterior(self):
        """
        Construct the likelihood and prior for dynesty.

        for info see: https://dynesty.readthedocs.io/en/latest/crashcourse.html

        """

        # First update the free parameters (in case the user changed them after the construction of the class)
        self._update_free_parameters()

        def loglike(trial_values):

            # NOTE: the _log_like function DOES NOT assign trial_values to the parameters

            for i, parameter in enumerate(self._free_parameters.values()):
                parameter.value = trial_values[i]

            log_like = self._log_like(trial_values)

            if self.verbose:
                n_par = len(self._free_parameters)

                print("Trial values %s gave a log_like of %s" % (map(lambda i: "%.2g" % trial_values[i], range(n_par)),
                                                                 log_like))

            return log_like

        # Now construct the prior
        # dynesty priors are defined on the unit cube
        # and should return the value in the bounds... not the
        # probability. Therefore, we must make some transforms
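        # (e.g., a uniform prior on [a, b] maps u in [0, 1] to x = a + u * (b - a))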

        def prior(uparams):

            params = np.empty_like(uparams)

            for i, (parameter_name, parameter) in enumerate(self._free_parameters.items()):

                try:

                    # get the param in real space from unit space
                    params[i] = parameter.prior.from_unit_cube(uparams[i])

                except AttributeError:

                    raise RuntimeError("The prior you are trying to use for parameter %s is "
                                       "not compatible with dynesty" % parameter_name)
            return params

        n_dim = len(self._free_parameters)

        # sanity check: run the prior transform once so incompatible priors fail immediately
        _ = prior([0.5] * n_dim)

        return loglike, prior
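
That hypothesis is easy to check: closures returned from a method, like loglike and prior above, cannot be serialized by the standard pickle module, which many pool implementations rely on to ship functions to worker processes (pools built on dill or cloudpickle may handle them). A minimal standalone check, not specific to dynesty:

    import pickle

    def make_closure():
        offset = 1.0

        def loglike(x):
            # closes over `offset`, so it is a local (nested) function
            return x + offset

        return loglike

    # fails: the standard pickle module cannot serialize local functions
    pickle.dumps(make_closure())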

Is this expected?

joshspeagle commented 5 years ago

I think so. The parallelization scheme is extremely simplistic (see here and here), where I just swap in pool.map instead of the typical map when making function calls. Since this involves some bundling of methods, it's possible it's just failing to register these internal class functions properly.
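
For reference, here is a minimal sketch of how the pool hooks up in practice, using a multiprocessing pool and module-level (hence picklable) functions; the toy loglike and prior_transform below are placeholders, while ndim, pool, and queue_size are the usual sampler arguments:

    import multiprocessing as mp

    import numpy as np
    import dynesty

    # module-level functions can be pickled, unlike closures or bound methods
    def loglike(x):
        return -0.5 * np.sum(x ** 2)  # toy Gaussian log-likelihood

    def prior_transform(u):
        return 10.0 * u - 5.0  # map the unit cube to a uniform prior on [-5, 5]

    if __name__ == "__main__":
        with mp.Pool(4) as pool:
            sampler = dynesty.NestedSampler(loglike, prior_transform, ndim=2,
                                            pool=pool, queue_size=4)
            sampler.run_nested()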

grburgess commented 5 years ago

Hi, I believe you are correct :). I was able to fix it (mostly) by better informing the model about the class structure. Thanks!
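
For anyone who hits the same error: one workaround that often helps is to replace the closures with instances of a module-level callable class, since pickle serializes such instances by importing the class and restoring their state. A minimal sketch with a hypothetical toy likelihood:

    import numpy as np

    # defined at module level, so instances are picklable (unlike closures)
    class LogLike:
        def __init__(self, mean):
            self.mean = np.asarray(mean)

        def __call__(self, x):
            return -0.5 * np.sum((x - self.mean) ** 2)

An instance such as LogLike([0.0, 0.0]) can then be handed to the sampler and shipped to pool workers like any ordinary function.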