keflavich / imf

Simple tools to work with the Initial Mass Function
MIT License

Is the Chabrier mass function still defined incorrectly? No! #32

Open keflavich opened 2 years ago

keflavich commented 2 years ago

@segasai , and @richardson-t this might be an assignment for you:

In trying to track down details of how the Chabrier IMF is defined, I realized that we're defining it as a lognormal, but we're defining

P(m) ∝ exp(-(ln m - ln m_c)^2 / (2 σ^2))

instead of

P(m) ∝ exp(-(ln m - ln m_c)^2 / (2 σ^2)) / m

which means our definition is probably off by a factor of 1/m.

These actually don't differ from Kroupa that much, but by more than I think is comfortable for some use cases.
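To make the claimed factor concrete, here is a quick numerical sketch; the parameter values (m_c = 0.079, σ = 0.69 dex) are taken from later in the thread, and this is an illustration rather than the package's actual code:

```python
import numpy as np

# Chabrier-like parameters, converted to natural-log space
mu, sigma = np.log(0.079), 0.69 * np.log(10)
m = np.array([0.1, 1.0, 10.0])

# Gaussian in ln(m) evaluated as a function of m (no Jacobian) ...
gauss_in_logm = np.exp(-(np.log(m) - mu)**2 / (2 * sigma**2))
# ... versus the same Gaussian with the 1/m factor of a true lognormal pdf in m
lognormal_in_m = gauss_in_logm / m

# The two forms differ by exactly a factor of 1/m
ratio = lognormal_in_m / gauss_in_logm
print(ratio)
```

The ratio is 1/m by construction, so the two candidate definitions diverge by an order of magnitude per decade of mass.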

I had the thought that we could define this 1/x * lognormal using scipy.stats as such:

import numpy as np
import scipy.stats

class OneOverXLogNormal_gen(scipy.stats.rv_continuous):
    # mu must be passed as a shape parameter; rv_continuous's self.a is the
    # support lower bound, not the mean.  Note the pdf is not normalized,
    # which is one reason this doesn't behave as hoped.
    def _pdf(self, x, mu, s):
        return np.exp(-(np.log(x) - mu)**2 / (2 * s**2)) / x

class TruncatedOneOverXLogNormal(TruncatedLogNormal):
    def __init__(self, mu, sig, m1, m2):
        super().__init__(mu, sig, m1, m2, distr=OneOverXLogNormal_gen())

but that doesn't work quite as I'd hope. I'm still working out the math but I want to post this now before I board the next flight... consider this a WIP

import numpy as np
from scipy.special import erf, erfinv

class TruncatedOneOverXLogNormal:
    def __init__(self, mu, sig, m1, m2):
        """ 1/x lognormal, truncated to [m1, m2] """
        self.mu = mu
        self.sig = sig
        self.m1 = m1
        self.m2 = m2
        self.norm = self._cdf0(m2) - self._cdf0(m1)

    def _cdf0(self, x):
        # analytic antiderivative of exp(-(ln t - mu)^2/(2 sig^2)) / t,
        # scaled so the untruncated distribution integrates to 1
        return 0.5 * (1 + erf((np.log(x) - self.mu) / (np.sqrt(2) * self.sig)))

    def pdf(self, x):
        amp = 1. / (self.sig * np.sqrt(2 * np.pi) * self.norm)
        return (amp * np.exp(-(np.log(x) - self.mu)**2 / (2 * self.sig**2)) / x
                * (x >= self.m1) * (x <= self.m2))

    def cdf(self, x):
        return (self._cdf0(np.clip(x, self.m1, self.m2)) -
                self._cdf0(self.m1)) / self.norm

    def ppf(self, x0):
        x = np.asarray(x0, dtype=float)
        cut1 = self._cdf0(self.m1)
        cut2 = self._cdf0(self.m2)
        # invert the untruncated CDF on the quantile rescaled to [cut1, cut2]
        u = x * (cut2 - cut1) + cut1
        ret = np.asarray(np.exp(self.mu + np.sqrt(2) * self.sig * erfinv(2 * u - 1)))
        ret[(x < 0) | (x > 1)] = np.nan
        return ret

    def rvs(self, N):
        return self.ppf(np.random.uniform(0, 1, size=N))
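For the todos above, the normalization and CDF are actually analytic: substituting u = ln(t) turns the 1/t lognormal integral into a plain Gaussian integral. A self-contained sanity check of that identity (the parameter values here are made up for illustration):

```python
import numpy as np
from scipy import integrate
from scipy.special import erf

# illustrative parameters and truncation bounds
mu, sig = np.log(0.079), 0.69 * np.log(10)
m1, m2 = 0.03, 3.0

def unnorm_pdf(m):
    # exp(-(ln m - mu)^2 / (2 sig^2)) / m, without any normalization
    return np.exp(-(np.log(m) - mu)**2 / (2 * sig**2)) / m

def unnorm_cdf(m):
    # analytic antiderivative: the standard normal CDF of (ln m - mu)/sig,
    # up to an overall factor of sig * sqrt(2*pi)
    return 0.5 * (1 + erf((np.log(m) - mu) / (np.sqrt(2) * sig)))

numeric, _ = integrate.quad(unnorm_pdf, m1, m2)
analytic = (unnorm_cdf(m2) - unnorm_cdf(m1)) * sig * np.sqrt(2 * np.pi)
print(numeric, analytic)  # the two should agree
```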
segasai commented 2 years ago

You are incorrect, I believe, and the code in imf is accurate. The lognormal is defined as P(m) ∝ exp(-(ln m - ln m_c)^2 / (2 σ^2)) / m, and the 1/m disappears if you make it P(logm). When I was writing the new framework for distributions I always tried to ensure that the distributions are always P(m), i.e. the probability distribution of mass, not of any other function of mass, to avoid confusion.
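The change of variables can be checked numerically. A sketch (parameters are illustrative): if P(logm) is a plain Gaussian in log10(m), then P(m) = P(logm) / (m ln 10), which is exactly where the leading 1/m comes from.

```python
import numpy as np

log_mc, sigma = np.log10(0.079), 0.69  # illustrative Chabrier-like values
m = np.geomspace(0.01, 10, 5)

p_logm = np.exp(-(np.log10(m) - log_mc)**2 / (2 * sigma**2))  # P(log m)
p_m = p_logm / (m * np.log(10))                               # P(m), via |d(log m)/dm|

# both must assign the same probability to the same small mass interval
dm = 1e-6
dlogm = np.log10(m + dm) - np.log10(m)
print(np.allclose(p_m * dm, p_logm * dlogm, rtol=1e-4))
```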

keflavich commented 2 years ago

Never mind, everything I said here is wrong. We defined it correctly because scipy's lognormal includes the leading 1/x.

import numpy as np
import pylab as pl
import scipy.stats

masses = np.geomspace(0.01, 10, 1000)  # illustrative mass grid

# relative difference between the hand-written Chabrier dn/dm and scipy's lognormal
dndm = np.log(10) / masses * np.exp(-(np.log10(masses) - np.log10(0.079))**2 / (2*0.69**2))
pl.loglog(masses, -(dndm * 0.158
                    - scipy.stats.lognorm(s=np.log(10)*0.69, scale=0.079).pdf(masses)*np.log(10)**0.5)/dndm,
          label='relative difference')
pl.legend(loc='best')

There are some arbitrary scaling factors in there but it looks like everything we had before was fine and I was chasing red herrings.
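The scaling factors can be pinned down by checking scipy's convention directly. This sketch confirms that scipy.stats.lognorm carries the leading 1/x: its pdf is exp(-(ln(x/scale))^2 / (2 s^2)) / (x s sqrt(2π)), so multiplying it by x·s·sqrt(2π) should recover the bare Gaussian in ln(x):

```python
import numpy as np
import scipy.stats

s, scale = 0.69 * np.log(10), 0.079  # Chabrier-like parameters
x = np.geomspace(0.01, 10, 100)

scipy_pdf = scipy.stats.lognorm(s=s, scale=scale).pdf(x)
bare_gaussian = np.exp(-(np.log(x / scale))**2 / (2 * s**2))

# strip the 1/x and the normalization constant back off
print(np.allclose(scipy_pdf * x * s * np.sqrt(2 * np.pi), bare_gaussian))
```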

keflavich commented 2 years ago

Sorry about the errors @segasai , you were right.

I got misled when trying to reproduce this, fig. 24 of Kroupa: [image: Kroupa fig. 24]

but I still come up with:

[image: my attempted reproduction]

and I'm not yet clear where the discrepancies come from (the match in Kroupa's plot is closer)

segasai commented 2 years ago

From what I can see, the parameters of the Chabrier IMF used for fig. 24 are different from Chabrier 2003/2005 (see the top of page 92), and the pdfs are also renormalized to the same integral above M=0.07. I don't know if that's enough to fix it.
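One way to implement "renormalized to the same integral above M=0.07" is to rescale one pdf by the ratio of the two integrals. A sketch with placeholder distributions (the power-law stand-in here is hypothetical, not the actual curves from fig. 24):

```python
import numpy as np
from scipy import integrate
import scipy.stats

chabrier = scipy.stats.lognorm(s=0.69 * np.log(10), scale=0.079).pdf
kroupa_like = lambda m: m**-2.3  # hypothetical power-law stand-in

# match the integrals over [0.07, 150] (upper bound is illustrative)
target, _ = integrate.quad(kroupa_like, 0.07, 150)
current, _ = integrate.quad(chabrier, 0.07, 150)
rescaled = lambda m: chabrier(m) * (target / current)

check, _ = integrate.quad(rescaled, 0.07, 150)
print(np.isclose(check, target))
```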

keflavich commented 2 years ago

The renormalization to match BDs is probably part of it; that's a good call. I'll look at this more. I don't think it's anything worrying, but I really prefer to be able to exactly reproduce paper figures when possible.