jgmbenoit opened this issue 2 years ago
Travis supports these. Could set up a Travis-based CI to debug.
I have added a `.travis.yml` config file, but apparently Travis is not willing to recognize my repo. All my other repos are visible in Travis, but not this one. @szhorvat, have you seen anything like this before?
Edit: nevermind, got it, apparently I had to "migrate" the repo to travis-ci.com first.
The build fails on s390x with Travis, but apparently only in release mode.
When this issue was opened, I attempted to compile it for x87, but even with SSE disabled, I got an error saying that this is not possible. If that's easily fixable, it might be worth a try, as it would make debugging simpler.
I can't test this easily for x87 as I am on Apple Silicon and `-mfpmath=387` fails there; GCC 11 from Homebrew says `unrecognized command-line option '-mfpmath=387'`, while Clang returns compiler errors like `SSE register return with SSE disabled`. What error messages did you see when compiling for x87?
This was my mistake. It doesn't work with Clang on macOS, but it does work with GCC. Of course this is Intel-specific so it won't work on your M1 machine.
The test suite passes.
For reference, I used:

```
CC=gcc-mp-11 cmake .. -DPLFIT_USE_SSE=OFF -DCMAKE_C_FLAGS=-mfpmath=387
```

where `gcc-mp-11` is GCC 11.2.0 from MacPorts.
The `ppc64le` build seems to work, but I just realized that this is probably identical to `ppc64el` on Debian, which also passes. Travis does not offer the "pure" `ppc64` or `powerpc` architectures. Still, there is a genuine failure on `s390x`, so that's what we should focus on, hoping that solving it will also solve the problem on the other two platforms.
@jgmbenoit Something weird is going on with the `hsl_sf_hzeta()` function on `s390x`. Take a look at this recent test log from Travis. I have added debug output (search for `DEBUG:`) that prints the components of the formula that calculates the log-likelihood of a discrete test data set. On my machine the correct output is:

```
DEBUG: xmin = 2.0000, alpha = 2.5800, result = 6010.4751, m = 5457
DEBUG: hsl_sf_lnhzeta(alpha, xmin) = -1.1639
```

The output from the test log on `s390x` is:

```
DEBUG: xmin = 2.0000, alpha = 2.5800, result = 6010.4751, m = 5457
DEBUG: hsl_sf_lnhzeta(alpha, xmin) = -0.2244
```
Note that the difference between the expected and the observed log-likelihood in the test is `m * (-1.1639 + 0.2244)`, so the math checks out; `hsl_sf_lnhzeta()` is returning an invalid value for `alpha=2.58` and `xmin=2`. However, the same alpha and xmin, tested in isolation in a newly added `test_hzeta.c` file, yield the correct result on `s390x` as well!

Do you have any ideas about what may be going on here? Or, alternatively, do you have a detailed test suite for `hsl_sf_lnhzeta()` that I could plug into the `plfit` tests to get more insights?
I do indeed have a test suite for the `zeta`-related functions. I confess that I only run them on my `amd64` boxes. The tests are against values obtained with Maple. The tests fit well in my HSL library source (my own Home Scientific Library, based on GSL). I can adapt them to the `plfit` source, but I need some time to do this. I will try to submit a patch by the end of the coming weekend.

My first guess is that one of the native C math functions used does not work as expected on this architecture. If so, my first move will be to check how this is managed in GSL (upstream and at the packaging level).

Note that, as a Debian Developer, I actually have access to these architectures. I will try to have a look and submit a fix by the end of this weekend. This sounds like a good challenge.
Thanks a lot for your help! Looking forward to the patched version.
I have reverted `master` to the pre-debugging state and branched off into a `debug/hzeta` branch, so if you want to investigate the debug statements yourself, please check out `debug/hzeta`.
The numerical tests for the `zeta` functions work. However, the issue persists. I could trace the issue back to the `inline` function `hsl_sf_hZeta0_zed`, which plays with `long double`.

Commenting out the `inline` does not solve the issue. It appears that the `long double` math functions give totally foolish values. Any hints?
`long double` is not available on every platform, is it? Or if it is, it may not work the same way. On Intel, it is 80 bits. I would avoid relying on `long double` when possible...
My understanding is that `gcc` ruins some tricky numeric code by optimizing it: `gcc` is either too smart or not smart enough. This code involves `long double` numbers and some arithmetic tricks to keep the last digits. The initial formula involves cancellations that are hard to manage when we compute in fixed precision. I also agree that `long double` computation must be avoided. So this forces me to revisit this key part of the computation: this is a numerical issue, not a coding issue. This return to the subject was profitable, since I can now see how to transform the formula into a more sympathetic form. I now think that I can manage the cancellations better and stay with `double` numbers. I am working on it, and it may take some time. Thanks to exotic architectures for pushing us toward better programming :)
> long double is not available on every platform, is it?

`long double` exists on every platform, but its characteristics are all over the place. It might be the same as `double`, it might be Intel x87 80-bit extended precision, it might be PowerPC "double double", or it might be IEEE quad precision. It may also be implemented in software even when regular floating point is implemented in hardware. So it really doesn't make sense to use it in portable code.
`gcc` on s390x has the build flags `-mlong-double-64` and `-mlong-double-128`. When building with `-mlong-double-64`, the unit tests pass. With `-mlong-double-128`:
```
4: Test command: /<<PKGBUILDDIR>>/_BUILD_SHARED/test/test_sampling
4: Test timeout computed to be: 10000000
1: ./test/test_discrete.c:39 : expected -9155.62809000, got -14282.71356713, difference: -5127.09
1/8 Test #1: test_discrete ....................***Failed    0.00 sec
./test/test_discrete.c:39 : expected -9155.62809000, got -14282.71356713, difference: -5127.09
test 5
    Start 5: test_underflow_handling
5: Test command: /<<PKGBUILDDIR>>/_BUILD_SHARED/test/test_underflow_handling
5: Test timeout computed to be: 10000000
3: ./test/test_real.c:41 : expected -245.14869000, got -313.14725000, difference: -67.9986
2/8 Test #3: test_real ........................***Failed    0.00 sec
./test/test_real.c:41 : expected -245.14869000, got -313.14725000, difference: -67.9986
```
Either the unit test for log-likelihood should be removed, or `long double` should not be used in the package.
> Either the unit test for log-likelihood should be removed or long double should not be used in the package.

The second option is the one to choose: `long double` should not be used in the package. And I am working on it.

> This return to the subject was profitable, since I can now see how to transform the formula into a more sympathetic form. I now think that I can manage the cancellations better and stay with `double` numbers.
With the help of my favourite computer algebra system, I could finally isolate and properly manage the faulty cancellations. I will now focus on implementing the analytical result. This is the easy part.
Thanks, looking forward to this!
Hello. This is to note that the issue was apparently fixed with this commit, which is already in version 0.9.5:
https://github.com/ntamas/plfit/commit/0559e4683ec72d7608560414d8d6797f83c74ea7
In fact, I logged into an s390x machine running Debian 12 (bookworm), checked that version 0.9.4 failed to build (because of the tests), then applied the above patch on top of version 0.9.4, and the tests started to pass.
For this reason, when we (Debian) packaged version 0.9.5, the package started to build OK without doing anything else:

https://buildd.debian.org/status/logs.php?pkg=plfit&arch=s390x
https://buildd.debian.org/status/logs.php?pkg=plfit&arch=powerpc
https://buildd.debian.org/status/logs.php?pkg=plfit&arch=ppc64
Thanks.
Thanks for the feedback! This is very valuable since we can't easily test on these platforms.
Getting rid of `long double` would still be preferable for the reasons stated above.
> Getting rid of long double would still be preferable for the reasons stated above.
This is absolutely true. The issue is a numerical one. I am still working on it. Since my last message here, I have made some progress. It has been far harder than I thought, but I have gotten somewhere. I am now trying to articulate the analysis. Thanks for your patience.
See the build status of the plfit Debian package for more details.