python / cpython

The Python programming language
https://www.python.org/

Add gamma function, error functions and other C99 math.h functions to math module #47616

Closed terryjreedy closed 14 years ago

terryjreedy commented 15 years ago
BPO 3366
Nosy @tim-one, @rhettinger, @terryjreedy, @mdickinson, @ned-deily, @stevendaprano
Files
  • math_private.h: header for erf, erfc, lgamma and tgamma
  • pymath.c.diff
  • mathmodule.c.diff
  • pymath.h.diff
  • test_math.py.diff
  • math.rst.diff
  • mathmodule.diff: patch for erf, erfc, lgamma and gamma
  • pymath.c.diff: erf and gamma patches without math_private.h
  • gamma.patch
  • gamma3.patch: Implementation of math.gamma
  • gamma4.patch
  • gamma5.patch
  • lgamma1.patch
  • gamma-failures.txt
  • pymath.h.diff: Euler constant
  • mathmodule.c.diff: expm1
  • erf.py: error function
  • mathmodule.c.diff
  • erf.patch
  • erf_c.patch: Add erf and erfc functions to math module.
  • expm1.patch
  • Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


GitHub fields:

```python
assignee = 'https://github.com/mdickinson'
closed_at = created_at =
labels = ['type-feature', 'library']
title = 'Add gamma function, error functions and other C99 math.h functions to math module'
updated_at =
user = 'https://github.com/terryjreedy'
```

bugs.python.org fields:

```python
activity =
actor = 'mark.dickinson'
assignee = 'mark.dickinson'
closed = True
closed_date =
closer = 'mark.dickinson'
components = ['Library (Lib)']
creation =
creator = 'terry.reedy'
dependencies = []
files = ['10955', '10973', '10974', '10975', '10976', '10977', '10982', '11005', '14844', '14927', '14940', '14943', '15008', '15152', '15488', '15489', '15490', '15491', '15539', '15540', '15574']
hgrepos = []
issue_num = 3366
keywords = ['patch', 'needs review']
message_count = 54.0
messages = ['69698', '70106', '70108', '70120', '70125', '70128', '70131', '70232', '70233', '70241', '70262', '70266', '70267', '70270', '70361', '70412', '77075', '92294', '92844', '92928', '92935', '92936', '92939', '92944', '92948', '93228', '93370', '93373', '94165', '94166', '94168', '94169', '94170', '94171', '94172', '94175', '94224', '96108', '96109', '96110', '96111', '96266', '96267', '96268', '96332', '96333', '96485', '96497', '96599', '96600', '97127', '97131', '97133', '97160']
nosy_count = 8.0
nosy_names = ['tim.peters', 'rhettinger', 'terry.reedy', 'mark.dickinson', 'ned.deily', 'stutzbach', 'nirinA', 'steven.daprano']
pr_nums = []
priority = 'normal'
resolution = None
stage = 'commit review'
status = 'closed'
superseder = None
type = 'enhancement'
url = 'https://bugs.python.org/issue3366'
versions = ['Python 2.7', 'Python 3.2']
```

    terryjreedy commented 15 years ago

    From the Py3000 list: these functions are in the C99 standard math library, but fallback code is needed for platforms whose implementations don't yet provide them.

    Dickinson: open and assign to me.

    Stutzbach: I suggest using the versions from newlib's libm. They contain extensive comments explaining the math and have a generous license, e.g.: http://sourceware.org/cgi-bin/cvsweb.cgi/~checkout~/src/newlib/libm/math/s_erf.c?rev=1.1.1.1&cvsroot=src

    5b87aa25-ea37-4f90-a8a2-e73363e1bc0a commented 15 years ago

    here is an attempt to make erf, erfc, lgamma and tgamma accessible from the math module. erf and erfc are crude translations of the code pointed out by Daniel Stutzbach. lgamma is also taken from sourceware.org, but doesn't use the reentrant call for the sign.

    i tested only on gcc-4.3.1/linux-2.6.26. i'll write a testsuite soon.

    some results:

    Python 3.0b2 (r30b2:65080, Jul 21 2008, 13:13:13) [GCC 4.3.1] on linux2

    >>> print(math.tgamma(0.5))
    1.77245385091
    >>> print(math.sqrt(math.pi))
    1.77245385091
    >>> print(math.tgamma(1.5))
    0.886226925453
    >>> print(math.sqrt(math.pi)/2)
    0.886226925453
    >>> print(math.sqrt(math.pi)*3/4)
    1.32934038818
    >>> print(math.tgamma(2.5))
    1.32934038818
    >>> for i in range(14):
        print(i, math.lgamma(i), math.tgamma(i))
    
    0 inf inf
    1 0.0 1.0
    2 0.0 1.0
    3 0.69314718056 2.0
    4 1.79175946923 6.0
    5 3.17805383035 24.0
    6 4.78749174278 120.0
    7 6.57925121201 720.0
    8 8.52516136107 5040.0
    9 10.6046029027 40320.0
    10 12.8018274801 362880.0
    11 15.1044125731 3628800.0
    12 17.5023078459 39916800.0
    13 19.9872144957 479001600.0
    
    >>> for i in range(-14,0):
        print(i/5, math.lgamma(i/5), math.tgamma(i/5))
    
    -2.8 0.129801291662 -1.13860211111
    -2.6 -0.118011632805 -0.888685714647
    -2.4 0.102583615968 -1.10802994703
    -2.2 0.790718673676 -2.20498051842
    -2.0 inf inf
    -1.8 1.15942070884 3.18808591111
    -1.6 0.837499812222 2.31058285808
    -1.4 0.978052353322 2.65927187288
    -1.2 1.57917603404 4.85095714052
    -1.0 inf -inf
    -0.8 1.74720737374 -5.73855464
    -0.6 1.30750344147 -3.69693257293
    -0.4 1.31452458994 -3.72298062203
    -0.2 1.76149759083 -5.82114856863
    
    >>> math.erf(INF)
    1.0
    >>> math.erf(-INF)
    -1.0
    >>> math.erfc(INF)
    0.0
    >>> math.erfc(-INF)
    2.0
    >>> math.erf(0.6174)
    0.61741074841139409

    5b87aa25-ea37-4f90-a8a2-e73363e1bc0a commented 15 years ago

    and this is the header. it is stolen from glibc.

    mdickinson commented 15 years ago

    Thanks for the patch! I probably won't get to this properly until after 2.6 final, but it won't get lost. It seems like there's pretty good support for adding these functions.

    By the way, there's already a setup in Python 2.6 (and 3.0) for substitutes for missing math functions: take a look at the file Python/pymath.c, which provides inverse hyperbolic functions for those platforms that need them, as well as the bits in configure.in that check for these functions.
    This is really the right place for the tgamma/lgamma/erf/erfc code.

    So a full patch for this should touch at least Python/pymath.c, Modules/mathmodule.c, configure.in, Lib/test/test_math.py, and Doc/library/math.rst. If you have time to fill in any of these pieces, it would be greatly appreciated.

    rhettinger commented 15 years ago

    Since we're not in a hurry for Py2.7 and 3.1, I would like to see this kicked around a bit on the newsgroup and in the numpy forums (personally, I would also post a pure Python equivalent to the ASPN cookbook for further commentary).

    There are several different approximations to choose from. Each of them has its own implications for speed and accuracy. IIRC, the one I used in test.test_random.gamma() was accompanied by a proof that its maximum error never exceeded a certain amount. I think there were some formulas that made guarantees only over a certain interval and others that had nice properties in the first and second derivatives (ones that don't have those properties can throw Newtonian solvers wildly off the mark).

    Let's let the community use its collective wisdom to select the best approach and not immediately commit ourselves to the one in this patch.

    At one point, Tim was reluctant to add any of these functions because it is non-trivial to do well and because it would open a can of worms about why Python gets a different answer (possibly not as close to the true value) than some other favorite tool (Matlab, Excel, numpy, an engineering calculator, etc.).

    FWIW, I changed the 2.6 version of test_random.gamma() to take advantage of msum() to increase its accuracy when some of the summands have nearly cancelling values.
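
    A technique along these lines is available in the standard library as math.fsum(), which tracks partial sums exactly. A quick illustration of the cancellation problem it addresses:

```python
import math

# math.fsum tracks partial sums exactly, so a small summand is not
# lost when two huge summands nearly cancel.
values = [1e100, 1.0, -1e100]

print(sum(values))        # 0.0 -- the 1.0 is absorbed into 1e100 and lost
print(math.fsum(values))  # 1.0 -- correctly rounded exact sum
```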

    cfc9f3e0-e33f-4ecd-9ddd-4123842d6c1e commented 15 years ago

    On Mon, Jul 21, 2008 at 5:34 PM, Raymond Hettinger <report@bugs.python.org> wrote:

    There are several different approximations to choose from. Each of them has their own implications for speed and accuracy.

    FWIW, the current patch dynamically selects one of a few different approximation methods based on the input value.

    +1 on kicking it around and comparing the output with that from other mathematics tools.

    rhettinger commented 15 years ago

    It would be nice if we knew the error bounds for each of the approximation methods. Do we know how the coefficients were generated?

    When switching from one method to another, it might be nice to have a range where the results slowly blend from one method to the other:

        if x < a:
            return f1(x)
        if x > b:
            return f2(x)
        t = (x - a) / (b - a)
        return (1-t)*f1(x) + t*f2(x)

    This minimizes discontinuities in the first and second derivatives.

    The lowword/highword macros look to be tightly tied to the internal processor representation for floats. It would be more portable and maintainable to replace that bit twiddling with something based on frexp().

    5b87aa25-ea37-4f90-a8a2-e73363e1bc0a commented 15 years ago

    Mark Dickinson:

    So a full patch for this should touch at least Python/pymath.c, Modules/mathmodule.c, configure.in, Lib/test/test_math.py, and Doc/library/math.rst.

    here are patches for Python/pymath.c, Modules/mathmodule.c, Lib/test/test_math.py, Doc/library/math.rst and also Include/pymath.h. i haven't touched configure.in as i have erf, erfc, lgamma and tgamma on my platform, so the patches can be tested. the doc may need better wording and test_math.py some additional tests.

    Raymond Hettinger:

    It would be nice if we knew the error bounds for each of the approximation methods. Do we know how the coefficients were generated?

    it would be really great if we were able to regenerate all these coefficients. this might be done by following the comments in the original code, but it's not that easy, and not certain one would obtain the same values.

    The lowword/highword macros look to be tightly tied to the internal processor representation for floats. It would be more portable and maintainable to replace that bit twiddling with something based on frexp().

    it seems possible to avoid the use of these macros. i'll try to find time to dig into these possibilities.

    rhettinger commented 15 years ago

    Why isn't tgamma() simply named gamma()? The t prefix does nothing for me except raise questions.

    cfc9f3e0-e33f-4ecd-9ddd-4123842d6c1e commented 15 years ago

    I think he just carried the names over from C, where:

    tgamma() is the "true gamma function"
    lgamma() is the log of the gamma function

    and gamma() might be tgamma() or lgamma() depending on which C library you use (sigh).

    I'm +1 on making gamma() be the true gamma function and not carrying over this brain-damage to Python.

    mdickinson commented 15 years ago

    I'm +1 on making gamma() be the true gamma function and not carrying over this brain-damage to Python.

    +1 from me, too.

    5b87aa25-ea37-4f90-a8a2-e73363e1bc0a commented 15 years ago

    ouch! i was asleep, at 3AM, when i edited the issue and didn't really know what i was doing. i just saw that i removed, instead of edited, the mathmodule.diff, and didn't check afterwards. so here it is again, with the url for the original code added.

    if you want to test the erf, erfc, lgamma and gamma please use this mathmodule.diff and the math_private.h header

    the other diffs are incomplete and don't work unless you have the erf and gamma functions implemented on your platform. i'll try to work on these files this weekend and will rename tgamma to gamma.

    rhettinger commented 15 years ago

    Can you also implement blending of approximations: (1-t)*f1(x) + t*f2(x)

    cfc9f3e0-e33f-4ecd-9ddd-4123842d6c1e commented 15 years ago

    On Fri, Jul 25, 2008 at 1:37 PM, Raymond Hettinger <report@bugs.python.org> wrote:

    Raymond Hettinger <rhettinger@users.sourceforge.net> added the comment:

    Can you also implement blending of approximations: (1-t)*f1(x) + t*f2(x)

    Is this necessary? Are the approximations significantly different near the transition points?

    (Haven't had a chance to download the patch and try it myself as a I'm getting on a plane in the morning--sorry)

    5b87aa25-ea37-4f90-a8a2-e73363e1bc0a commented 15 years ago

    i wrote a pure Python equivalent of the patch and will post it to the ASPN cookbook as suggested. i'm wondering if i have to upload the code here too, or whether just the link will suffice?

    Raymond Hettinger:

    Can you also implement blending of approximations: (1-t)*f1(x) + t*f2(x)

    i do this with the Python code for now, and my question is: how should one choose the values for the borders?

    with erf, for example, one has typically 5 domains of approximation and one can write something like:

        def erf(x):
            f = float(x)
            # note: a test like 'f == nan' can never be true, since nan
            # compares unequal to everything; use math.isnan instead
            if math.isinf(f):
                return 1.0 if f > 0.0 else -1.0
            elif math.isnan(f):
                return f
            elif abs(x) < 0.84375:
                return erf1(x)
            elif 0.84375 <= abs(x) < 1.25:
                return erf2(x)
            elif 1.25 <= abs(x) < 2.857142:
                return erf3(x)
            elif 2.857142 <= abs(x) < 6:
                return erf4(x)
            else:
                return erf5(x)

    implementing the blending of approximations, one may write:

        def blend(x, a, b, f1, f2):
            if x < a:
                return f1(x)
            if x > b:
                return f2(x)
            return (((b-x)*f1(x))-((a-x)*f2(x)))/(b-a)

    and the definition becomes:

        def erf(x):
            f = float(x)
            if abs(x) < 0.83:
                return erf1(x)
            elif 0.83 <= abs(x) < 0.85:
                return blend(x, 0.83, 0.85, erf1, erf2)
            elif 0.85 <= abs(x) < 1.24:
                return erf2(x)
            elif 1.24 <= abs(x) < 1.26:
                return blend(x, 1.24, 1.26, erf2, erf3)
            elif 1.26 <= abs(x) < 2.84:
                return erf3(x)
            elif 2.84 <= abs(x) < 2.86:
                return blend(x, 2.84, 2.86, erf3, erf4)
            elif 2.86 <= abs(x) < 5.99:
                return erf4(x)
            elif 5.99 <= abs(x) < 6.01:
                return blend(x, 5.99, 6.01, erf4, erf5)
            else:
                return erf5(x)

    but the choice of these values is somewhat arbitrary. or i completely misunderstood what you mean by "blending of approximations"!

    for the erf functions, blending of approximations, as i understand it, may be implemented easily since the functions are monotonic. for the gammas, this will be a bit more complicated, especially for negative values, where there are extrema at half-integers and discontinuities at integers. on the other hand, i'm wondering whether these coefficients were already chosen with the minimization of discontinuities taken into account.

    5b87aa25-ea37-4f90-a8a2-e73363e1bc0a commented 15 years ago

    pure Python codes are posted to ASPN:

    http://code.activestate.com/recipes/576391/
    http://code.activestate.com/recipes/576393/

    i also rewrote the patch so that it doesn't use the high_word/low_word macros anymore.

    mdickinson commented 15 years ago

    There have been requests to add other C99 libm functions to Python: expm1 is requested in bpo-3501, and log2 came up in the bpo-3724 discussion. I'm absorbing those issues into this one.

    With these two, and the gamma and error functions, we're pretty close to implementing all the functions in C99's math.h. Seems to me that we might as well aim for completeness and implement the whole lot.

    mdickinson commented 14 years ago

    Finally got around to looking at this properly.

    Here's a first attempt at a patch to add gamma, lgamma, erf and erfc to the math module. It uses the pymath.c.diff code from nirinA, and adds docs, tests, and the extra infrastructure needed for the build system.

    The code has gone in a new file called Modules/math_cextra.c. The old approach of sticking things in pymath.c really isn't right, since that ends up putting the extra code into the core Python executable instead of the math module.

    There are currently many test failures; some of these are undoubtedly due to the tests being too strict, but some of them show problems with the code for these functions. It should be possible to have erf and erfc produce results accurate to within a few ulps (<50, say) over the entire extended real line; getting gamma and lgamma accurate for positive inputs should also be perfectly possible. Negative arguments seem to be more difficult to get right.

    To test the code on non-Windows, build with something like:

    CC='gcc -DTEST_GAMMA' ./configure && make

    the '-DTEST_GAMMA' just forces the build to use nirinA's code; without this, the math module will use the libm functions if they're available.

    This is just work-in-progress; apart from the test failures, there's still a lot of polishing to do before this can go in.

    The patch is against the trunk.

    mdickinson commented 14 years ago

    Here's a more careful patch for just the gamma function. It's a fairly direct implementation of Lanczos' formula, with the choice of parameters (N=13, g=6.024680040776729583740234375) taken from the Boost library. In testing of random values and known difficult values, it's consistently giving errors of < 5 ulps across its domain. This is considerably better than the error from the exp(lgamma(x)) method, since the exp call alone can easily introduce errors of hundreds of ulps for larger values of x.
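
    For reference, the shape of Lanczos' formula can be sketched in pure Python. The parameter set below is the widely published g=7, 9-coefficient one, not the Boost-derived N=13, g=6.024680040776729583740234375 set used in the patch; this is only an illustrative sketch, not the patch's code:

```python
import math

# Widely published Lanczos parameters (g=7, 9 coefficients); the actual
# patch uses a different, more accurate Boost-derived parameter set.
_G = 7.0
_COEFS = [
    0.99999999999980993, 676.5203681218851, -1259.1392167224028,
    771.32342877765313, -176.61502916214059, 12.507343278686905,
    -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7,
]

def lanczos_gamma(x):
    """gamma(x) via Lanczos' approximation, for non-integer or positive x."""
    if x < 0.5:
        # Reflection formula: gamma(x) * gamma(1 - x) = pi / sin(pi * x)
        return math.pi / (math.sin(math.pi * x) * lanczos_gamma(1.0 - x))
    x -= 1.0
    acc = _COEFS[0]
    for i in range(1, len(_COEFS)):
        acc += _COEFS[i] / (x + i)
    t = x + _G + 0.5
    return math.sqrt(2.0 * math.pi) * t ** (x + 0.5) * math.exp(-t) * acc

print(lanczos_gamma(5.0))  # close to 24.0 (= 4!)
print(lanczos_gamma(0.5))  # close to sqrt(pi) = 1.7724538509055159...
```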

    I'd appreciate any feedback, especially if anyone has the chance to test on Windows: I'd like to know whether I got the build modifications right.

    This patch *doesn't* use the system gamma function, even when available.
    At least on OS X 10.5.8, the supplied gamma function has pretty terrible accuracy, and some severe numerical problems near negative integers. I don't imagine Linux is much better.

    Once this is working, I'll concentrate on lgamma. Then erf and erfc.

    mdickinson commented 14 years ago

    New patch for gamma , with some tweaks:

    On my machine, testing with approx. 10**7 random samples, this version achieves an accuracy of <= 10 ulps across the domain (comparing with correctly-rounded results generated by MPFR). Limiting the test to arguments in the range (-256.0, 1/256.0] + [1/256.0, 256.0) (with each float in that range equally likely), the error in ulps from 10**6 samples has mean -0.104 and standard deviation 1.230.

    I plan to check this in in a week or two. Feedback welcome! It would be especially useful to know whether this patch compiles correctly on Windows.

    cfc9f3e0-e33f-4ecd-9ddd-4123842d6c1e commented 14 years ago

    I'll test your patch on Windows. Are you working against the trunk or the py3k branch?

    mdickinson commented 14 years ago

    Thanks! The patch is against the trunk. (It doesn't quite apply cleanly to py3k, but the changes needed to make it do so should be minimal.)

    Hmm. Rereading my previous comment, I seem to have a blindness for negative signs:

    gamma(x) = -pi/sinpi(x)/x/gamma(x)

    should have been

    gamma(-x) = -pi/sinpi(x)/x/gamma(x)

    and

    (-256.0, 1/256.0] + [1/256.0, 256.0)

    should have been

    (-256.0, -1/256.0] + [1/256.0, 256.0)
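
    The corrected reflection formula is easy to sanity-check numerically with math.gamma (the function this issue ultimately added); x below is just an arbitrary non-integer test value:

```python
import math

# gamma(-x) = -pi / (sin(pi*x) * x * gamma(x)), for non-integer x > 0
x = 0.7
lhs = math.gamma(-x)
rhs = -math.pi / (math.sin(math.pi * x) * x * math.gamma(x))
print(lhs, rhs)  # the two agree to within a few ulps
```
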

    cfc9f3e0-e33f-4ecd-9ddd-4123842d6c1e commented 14 years ago

    I'm only set up to test the official Windows build setup under PCbuild. I'm not testing the legacy build setups under PC/VC6, PC/VS7.1, or PC/VS8.0.

    The patch against the trunk failed for PC/VC6/pythoncore.dsp. I don't need that to build, though.

    Your patch built fine, with one warning about a loss of precision on the following line in sinpi():

        n = round(2.0*y);

    I recommend explicitly casting the result of round() to int, to get rid of the warning and because explicit is better than implicit. :)

    I ran the following tests, all of which passed:

    PCbuild/python.exe Lib/test/regrtest.py -v test_math

    I appreciate your work on this patch. Let me know if there's anything else I can do.

    mdickinson commented 14 years ago

    Many thanks, Daniel.

    The patch against the trunk failed for PC/VC6/pythoncore.dsp. I don't need that to build, though.

    I've no idea why that would happen. A line-ending issue, perhaps? If it doesn't stop me committing the change, then perhaps it's not a problem.

    Your patch built fine, with one warning about a loss of precision on the following line in sinpi():

    n = round(2.0*y);

    I recommend explicitly casting the result of round() to int, to get rid of the warning and because explicit is better than implicit. :)

    Done. Thanks!

    mdickinson commented 14 years ago

    gamma5.patch. very minor changes from the last patch:

    mdickinson commented 14 years ago

    Gamma function added in r75117 (trunk), r75119 (py3k). I'm working on lgamma.

    mdickinson commented 14 years ago

    A first attempt for lgamma, using Lanczos' formula.

    In general, this gives very good accuracy. As the tests show, there's one big problem area, namely when gamma(x) is close to ±1, so that lgamma(x) is close to 0. This happens for x ~ 1.0, x ~ 2.0, and various negative values of x (mathematically, lgamma(x) hits zero twice in each interval (n-1, n) for n <= -2). In that case the accuracy is still good in absolute terms (around 1e-15 absolute error), but terrible in ulp terms.

    I think it's reasonable to allow an absolute error of a few times the machine epsilon in lgamma: for many uses, the lgamma result will be exponentiated at some point (often after adding or subtracting other lgamma results), and at that point this absolute error just becomes a small relative error.

    cfc9f3e0-e33f-4ecd-9ddd-4123842d6c1e commented 14 years ago

    I agree with your reasoning.

    Any chance of also getting the incomplete beta and gamma functions, needed to compute the CDF of many commonly used probability distributions? (Beta, Logarithmic, Poisson, Chi-Square, Gamma, etc.)

    ned-deily commented 14 years ago

    FYI, approximately 20 of the gamma test cases fail on PPC Macs. Attached are snippets from regrtest runs with trunk and with py3k, both on a G4 ppc (32-bit) running OS X 10.5. Identical failures for trunk (did not try py3k) were observed on a G3 ppc with OS X 10.4.

    tim-one commented 14 years ago

    FYI, mysterious numeric differences on PPC are often due to the C compiler generating code that uses the "fused multiply-add" HW instruction. In which case, find a way to turn that off :-)

    ned-deily commented 14 years ago

    For the record, these OS X installer build scripts (running on 10.5) result in these gcc options:

        gcc-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 -I/tmp/_py/libraries/usr/local/include -I. -IInclude -I/usr/local/include -I/private/tmp/_t/Include -I/private/tmp/_py/_bld/python -c /private/tmp/_t/Modules/mathmodule.c -o build/temp.macosx-10.3-fat-3.2/private/tmp/_t/Modules/mathmodule.o

    and:

    $ gcc-4.0 --version
    powerpc-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5493)

    mdickinson commented 14 years ago

    Hmm. It looks as though the gamma computation itself is fine; it's the accuracy check that's going wrong.

    I'm suspecting an endianness issue and/or a struct module bug.

    Ned, do you still get these failures with a direct single-architecture build on PPC? Or just for the universal build?

    tim-one commented 14 years ago

    Adding

    -mno-fused-madd

    would be worth trying. It usually fixes PPC bugs ;-)

    mdickinson commented 14 years ago

    It was a stupid Mark bug. I'd missed out a '<' from the struct.unpack call that was translating floats into integers, for the purposes of computing ulps difference. Fixed in r75454 (trunk), r75455 (py3k).

    Thanks for reporting this, Ned.
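
    For reference, the accuracy check maps each float to an integer such that adjacent floats map to adjacent integers, so a subtraction counts ulps. A minimal sketch of the technique (with the '<' byte-order prefix in place) might look like:

```python
import struct

def to_ulps(x):
    # Interpret the IEEE 754 bits of x as a signed integer, then fix up
    # negative values so that adjacent floats map to adjacent integers.
    # The '<' prefix pins both pack and unpack to little-endian byte
    # order and standard sizes; omitting it uses native order instead,
    # which was the bug described above.
    n = struct.unpack('<q', struct.pack('<d', x))[0]
    if n < 0:
        n = ~(n + 2**63)
    return n

def ulps_diff(a, b):
    return abs(to_ulps(a) - to_ulps(b))

print(ulps_diff(1.0, 1.0 + 2**-52))  # 1: these are adjacent doubles
```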

    mdickinson commented 14 years ago

    BTW, the gamma computation shouldn't be at all fragile, unless I've messed up somewhere: the Lanczos sum is evaluated as a rational function in which both numerator and denominator have only positive coefficients, so there's little danger of nasty cancellations boosting the relative error. I'd be surprised if use of fma instructions made more than 1 or 2 ulps difference to the result, but I'll take a look.

    tim-one commented 14 years ago

    Mark, you needn't bother: you found the smoking gun already! From your description, I agree it would be very surprising if FMA made a significant difference in the absence of catastrophic cancellation.

    ned-deily commented 14 years ago

    Verified that r75454 makes the gamma test errors go away on a PPC Mac.

    5b87aa25-ea37-4f90-a8a2-e73363e1bc0a commented 14 years ago

    having the gamma function, one certainly needs the other Euler constant (the Euler-Mascheroni constant): C = 0.5772....

    the C constant is taken from GSL

    5b87aa25-ea37-4f90-a8a2-e73363e1bc0a commented 14 years ago

    for expm1, we use the Taylor expansion near zero, but we have to specify what counts as a small value of x. here this means abs(x) < math.log(2.0). one could also use abs(x) < C

    5b87aa25-ea37-4f90-a8a2-e73363e1bc0a commented 14 years ago

    finally, for the error function, following the same kind of idea as with expm1, here is a pure Python definition

    these patches are against r27a1:76674

    if all these make sense, i'll add handling for NAN and infinity, a test suite...

    5b87aa25-ea37-4f90-a8a2-e73363e1bc0a commented 14 years ago

    there may have been some error when i sent this the first time: i cannot see the file, so i'm resending mathmodule.c.diff. sorry if you get it twice

    mdickinson commented 14 years ago

    nirinA: thanks for prodding this issue. Yes, it is still alive (just). :)

    About adding the Euler constant: I agree this should be added. Do you have time to expand your patch to include docs and a test or two?

    For expm1, I was planning to use the libm function when it exists (which I expect it will on most platforms), and just use a quick hack based on the identity

    expm1(x) = sinh(x/2)*exp(x/2)

    for platforms where it's missing.

    I'll look at the erf and erfc implementations.

    Daniel: re incomplete beta and gamma functions, I'd prefer to limit this particular issue to the C99 math functions. It's difficult to know where to draw the line, but it seems to me that these are really the domain of numpy/scipy. And they involve multiple input arguments, which makes them *darned hard* to write and test! Anyway, if you'd like to see those added to Python's math module, maybe this could be brought up on python-dev to see how much support there is.

    mdickinson commented 14 years ago

    expm1(x) = sinh(x/2)*exp(x/2)

    Should be 2*sinh(x/2)*exp(x/2).

    mdickinson commented 14 years ago

    lgamma patch committed to trunk (with looser tests) in r76755.

    mdickinson commented 14 years ago

    nirinA: thanks for the erf patch. It's fine as far as it goes; the main technical issue is that the series expansion only converges at a sensible rate for smallish x; for larger x something more is needed.

    Here's a prototype patch for the erf and erfc functions that uses a power series for small x (not quite the same as the series nirinA used; by pulling out a factor of exp(-x**2) you can make all coefficients in the remaining series positive, which is good for the numerics), and a continued fraction expansion for larger x. The erf and erfc functions are currently written in Python. I'll do some more testing (and possibly tweaking) of this patch and then translate the Python to C.
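
    The positive-coefficient series idea can be sketched in pure Python. This is my reconstruction (not the patch's code), assuming the identity erf(x) = 2/sqrt(pi) * exp(-x**2) * sum over k of 2**k * x**(2k+1) / (2k+1)!!, and it is suitable for smallish x only:

```python
import math

def erf_series(x):
    # erf(x) = 2/sqrt(pi) * exp(-x*x) * sum_{k>=0} 2**k * x**(2k+1) / (2k+1)!!
    # With exp(-x*x) pulled out, every term has the same sign as x, so
    # the summation involves no cancellation.
    acc = term = x
    k = 0
    while abs(term) > 1e-17 * abs(acc):
        k += 1
        term *= 2.0 * x * x / (2 * k + 1)
        acc += term
    return 2.0 / math.sqrt(math.pi) * math.exp(-x * x) * acc

print(erf_series(1.0))  # close to 0.8427007929497149
```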

    mdickinson commented 14 years ago

    Here's the C version.

    mdickinson commented 14 years ago

    Here's a patch to add expm1. Rather than putting the code in pymath.c, which is the wrong place (see issue bpo-7518), I've added a new file Modules/_math.c for this; log1p, atanh, etc. should also eventually be moved from pymath.c into this file.

    I'm fairly confident that the maths and numerics are okay, but less confident about my changes to the build structure; if anyone has a chance to test this patch, especially on Windows, I'd welcome feedback. I've tested it on OS X, and will test on Linux.

    mdickinson commented 14 years ago

    I committed the patch for expm1 in r76861 (trunk) and r76863 (py3k), with one small change: instead of using 2*exp(x/2)*sinh(x/2), expm1 is now computed using a method due to Kahan that involves only exp and log. This seems a little safer, and guards against problems on platforms with a naive implementation of sinh. (I *hope* no such platforms exist, but suspect that's wishful thinking. :-)

    Thanks to Eric Smith for testing the build system changes on Windows.
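
    Kahan's exp/log trick relies on the rounding error in u = exp(x) being compensated by dividing by log(u) of the same stored u. A rough reconstruction of the method (not the committed code) looks like:

```python
import math

def kahan_expm1(x):
    # exp(x) - 1 without catastrophic cancellation near x = 0,
    # using only exp and log (after Kahan).
    u = math.exp(x)
    if u == 1.0:
        # exp(x) rounded to 1: here exp(x) - 1 == x to double precision.
        return x
    if u - 1.0 == -1.0:
        # x so negative that the true result rounds to -1.
        return -1.0
    # For u near 1, u - 1.0 is computed exactly (Sterbenz's lemma), and
    # the rounding error in u cancels between (u - 1.0) and log(u).
    return (u - 1.0) * x / math.log(u)

print(kahan_expm1(1e-10))  # close to 1e-10; naive exp(1e-10) - 1 loses digits
```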

    mdickinson commented 14 years ago

    Added erf and erfc in r76879 (trunk), r76880 (py3k).

    mdickinson commented 14 years ago

    Oops. That's r76878 (trunk) and r76879 (py3k).