axkr closed this issue 3 years ago
The Carlson elliptic integrals cannot be computed analytically using simpler functions. They should either be considered first-class functions in their own right (as is often done for the gamma or erf functions, for example), or considered to be the results of numerical algorithms. This is in fact similar to the exponential function, which is really defined as the limit of an infinite series, but we tend to just put it in our set of base transcendental functions and never dig deeper when we use it in an analytical formula.
The duplication algorithms are the current standard way to compute these integrals, but they only converge towards the value, they never reach it, so an analytical representation could only be the nth step of the algorithm when the convergence criterion decides to stop the computation. The algorithm locates a neighborhood of the mean of the parameters small enough for a polynomial approximation of the integral within this small area to be as accurate as the prescribed tolerance, and then computes the integral at the initial parameters from this polynomial approximation (just like you could compute sin(α) from sin(α/2) and cos(α/2) and apply this several times until you get an angle small enough).
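This reduce-then-recover scheme can be sketched with a toy stdlib-only example. It uses the sine triple-angle identity sin(3x) = 3 sin(x) − 4 sin³(x) instead of the half-angle formulas (so only sine is needed); it is an illustration of the idea, not the Carlson algorithm itself:

```java
public class SinDuplication {

    /**
     * Toy illustration of an argument-reduction algorithm: reduce x until a
     * low-order polynomial is accurate enough, then undo the reduction with
     * the identity sin(3x) = 3 sin(x) - 4 sin^3(x).
     */
    static double sinByReduction(double x) {
        if (Math.abs(x) < 1.0e-6) {
            // polynomial approximation, valid only for tiny arguments
            return x - x * x * x / 6.0;
        }
        final double s = sinByReduction(x / 3.0);
        return 3.0 * s - 4.0 * s * s * s;
    }

    public static void main(String[] args) {
        System.out.println(sinByReduction(1.2345));
        System.out.println(Math.sin(1.2345));
    }
}
```

The recursion depth is only logarithmic in 1/tolerance, which mirrors why the duplication algorithms converge in so few iterations.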
In the original Carlson papers, the tolerance threshold is a user input, just like the integral parameters. It represents relative accuracy. In order to be able to look at these integrals as if they were ordinary functions (like gamma, erf, exponential or sine), and because the algorithms converge so quickly (you usually get 15 digits in 3 or 4 iterations and 200 digits in a dozen iterations), I decided to use ulp(1) as the threshold and not make it user-customizable, so the integrals are always computed as accurately as possible and the user is not aware there is a converging algorithm somewhere.
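For primitive doubles, ulp(1) is the machine epsilon. A minimal illustration (not the Hipparchus code) of stopping an iteration at that relative threshold, with a Newton square-root loop standing in for the duplication algorithm:

```java
public class UlpThreshold {

    /**
     * Newton iteration for sqrt(a), stopping when the relative update falls
     * below ulp(1), i.e. 2^-52 for primitive doubles.
     */
    static double sqrtToUlp(double a) {
        final double tol = Math.ulp(1.0); // 2^-52, about 2.22e-16
        double x = a;
        while (true) {
            final double next = 0.5 * (x + a / x);
            if (Math.abs(next - x) <= tol * next) {
                return next;
            }
            x = next;
        }
    }

    public static void main(String[] args) {
        System.out.println(Math.ulp(1.0));
        System.out.println(sqrtToUlp(2.0));
    }
}
```

With quadratic convergence, the loop above reaches full double precision in a handful of iterations, so the caller never notices the iterative nature of the computation.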
I will look into making the threshold user-configurable, but it may increase yet again the number of signatures, which is already large. I am working my way up in the elliptic stuff and use Carlson integrals to compute Legendre integrals, K, K', E, D, F, Π, in complete and incomplete cases, with four parameter types (real, field, complex, field complex), so I already have 32 signatures and I would prefer to avoid going to 64 signatures by allowing either a default tolerance or a user tolerance. I have an idea but am not sure it will work, mainly because we cannot know beforehand what the accuracy of a field would be (think about Dfp, which has arbitrary precision and was indeed used to compute these functions to very high accuracy).
Back to your needs, I don't know what the solution could be. From a theoretical point of view, analytical computation corresponds to 0 ulp, as ulp is only intended to quantify the representation error we make when we go from the analytical view to a numerical implementation. However, as Carlson integrals are the results of numerical computation with finite precision and convergence, we have to find some trade-off here. Do you think a user-provided (non-zero) tolerance would allow you to use these integrals?
Note also that this is work in progress. I am very happy that you already look at it so we can find a better design before publishing 2.0. I hope the stream of recent changes do not harm your work too much.
@maisonobe, @mtommila - just to be clear, I created an incomplete repository (https://github.com/axkr/ApfloatCalculusField) for my scenario, which uses apfloat.org as an arbitrary precision arithmetic library.
In which scenarios can I use the `CalculusFieldElement` interface? What methods are required? I thought it would be suitable in this case. This JUnit test creates a "Division by zero":
```
java.lang.ArithmeticException: Division by zero
	at org.apfloat.Apfloat.divide(Apfloat.java:838)
	at org.symja.experimental.ApfloatElement.divide(ApfloatElement.java:117)
	at org.symja.experimental.ApfloatElement.divide(ApfloatElement.java:1)
	at org.hipparchus.special.elliptic.carlson.RcFieldDuplication.convergenceCriterion(RcFieldDuplication.java:47)
	at org.hipparchus.special.elliptic.carlson.FieldDuplication.<init>(FieldDuplication.java:66)
	at org.hipparchus.special.elliptic.carlson.RcFieldDuplication.<init>(RcFieldDuplication.java:35)
	at org.hipparchus.special.elliptic.carlson.CarlsonEllipticIntegral.rC(CarlsonEllipticIntegral.java:102)
	at org.symja.experimental.TestHipparchus.testCarlson(TestHipparchus.java:16)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at junit.framework.TestCase.runTest(TestCase.java:168)
```
> Note also that this is work in progress. I am very happy that you already look at it so we can find a better design before publishing 2.0. I hope the stream of recent changes do not harm your work too much.
Of course I understand that new and advanced features can change rapidly in a Snapshot release. So, no problem.
I wonder if `ulp` for `Apfloat` should be implemented as `FastMath.pow(x.radix(), -x.precision()) * FastMath.abs(x)`.
Forget it, I see there is already an `ApfloatMath.ulp()` implementation.
I found why you get a division by zero.
In order to get the ultimate relative accuracy for the Carlson duplication algorithm, the Hipparchus implementation uses `a0.getField().getOne().ulp()`. This means that from the algebraic field (real, complex, automatic differentiation, Dfp or in your case Apfloat), we get the representation of the constant 1.0 and take its ulp. For primitive double numbers this is the machine epsilon (2^-52, i.e. about 2.2×10^-16), but for high-accuracy types like Dfp or Apfloat it is expected to be much smaller. Our assumption is that ulp is the difference between two successive numbers in the current encoding.
However, it seems that for apfloat the assumption for ulp is that if a number can be encoded exactly (like 0 or in fact any integer), then the ulp is set to 0. That would mean that ulp is not a difference between two encoded numbers, but a difference between an encoded number and the absolute real number it represents. Hence, as 0 (or 1) is represented exactly, its ulp is set to 0, even though the next number is some distance away. There is a logic here, but it breaks some algorithms that rely on ulp for convergence, including our implementation of Carlson duplication (it would also break several other algorithms like the Hermite rule factory, SymmLQ, and quite a few unit tests in both Hipparchus and Orekit that use ulp(1) as a test threshold).
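The double-precision semantics that these algorithms rely on can be checked directly with the stdlib:

```java
public class UlpSemantics {
    public static void main(String[] args) {
        // For primitive doubles, ulp(x) is the gap to the next representable
        // number: even exactly-representable values have a non-zero ulp.
        double one = 1.0;
        System.out.println(Math.ulp(one) == Math.nextUp(one) - one); // true
        System.out.println(Math.ulp(0.0) == Double.MIN_VALUE);       // true

        // Under the representation-error semantics described above for
        // apfloat, ulp of an exactly-encoded integer would be 0 instead,
        // which is what breaks ulp-based convergence checks.
    }
}
```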
I tried a first workaround by using `ApfloatMath.nextUp(x).subtract(x)`, but it also failed because `ApfloatMath.nextUp` in fact uses `ulp` and therefore adds 0 when x is an integer…
I tried a second workaround by implementing:

```java
public ApfloatElement ulp() {
    return valueOf(fApfloat.divide(new Apfloat(FastMath.pow(fApfloat.radix(), fApfloat.precision()), fApfloat.precision())));
}
```
but it failed too because `fApfloat.precision()` returns `INFINITE`, since 1 is an instance of `Aprational`.
In fact, as soon as Apfloat detects something it considers exact, you lose the information about the accuracy you started with (i.e. 25 digits in your test case) and it is replaced with infinity.
I don't see any way to solve this at the Hipparchus level, as long as `ApfloatMath.ulp(x)` does not represent differences between successive numbers, which is what a number of algorithms expect. You may try to make a feature request on apfloat for either a change in the semantics of `ulp` or an additional method in `ApfloatMath` that would not return 0, and a way to not lose the user accuracy settings for values like 0 or small integers.
@maisonobe can a method `Field#getUlp()` be a solution? Within a given thread the precision must be fixed and cannot vary, I think. This could for example be implemented with a `ThreadLocal` map?
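A minimal stdlib sketch of such a per-thread precision context; the class name, method names and the 30-digit default are made up for illustration and are not part of Hipparchus or apfloat:

```java
public class PrecisionContext {

    // Each thread carries its own precision (in decimal digits).
    // The 30-digit default is an arbitrary choice for this sketch.
    private static final ThreadLocal<Long> PRECISION =
            ThreadLocal.withInitial(() -> 30L);

    public static void setPrecision(long digits) {
        PRECISION.set(digits);
    }

    public static long getPrecision() {
        return PRECISION.get();
    }
}
```

A field factory could then read `getPrecision()` when constructing elements, so each thread computes at its own fixed precision without the precision leaking between threads.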
Yes, I suppose you need to have some kind of a "context" which specifies the precision of the numbers that are used in the computation.
If you use Apfloat integers (like 0 or 1) then yes, they will have "infinite" precision by default. However, for all other values than zero it's possible to specify the number's precision, and you can set it to any value you like. It's clear that for e.g. primitive doubles the ulp is 2^-52 times the number, but when you say that for Apfloat the ulp "is expected to be much smaller" then there is perhaps a fundamental misunderstanding here. The precision (or ulp) is not fixed; it must be specified, and then it's as small as you want. Well, there are defaults for the precision of course, but as you found out, they are not suitable in this case.
If you only need the constant 1, then I think the problem can in general be solved by just setting the precision of the constant to the desired precision.
If you also need zero then things can get a bit trickier, but in that case maybe you could use either the helper class `FixedPrecisionApcomplexHelper` or `FixedPrecisionApfloatHelper`, depending on whether you need complex numbers or just real numbers. For these helper classes you can specify the precision, and then all mathematical operations run through them return the result to the specified precision. However, `ulp()` of zero would still return zero.
It should also be noted that in the apfloat library the precision does not mean the number of significant figures to the right of the decimal point, but just the number of significant digits overall.
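This is the same convention as the stdlib's `java.math.MathContext`, which can serve as an illustration of "significant digits overall":

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class SignificantDigits {
    public static void main(String[] args) {
        // Precision counts significant digits overall, not digits after
        // the decimal point.
        BigDecimal x = new BigDecimal("123.456");
        System.out.println(x.round(new MathContext(4))); // 123.5
        System.out.println(x.round(new MathContext(2))); // 1.2E+2
    }
}
```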
I'm not familiar with the algorithms used, but in pseudo-code you could use constructs such as:
```java
long precision = X;
Apfloat one = new Apfloat(1, precision);
Apfloat ulp = ApfloatMath.ulp(one);
```
or
```java
long precision = X;
FixedPrecisionApfloatHelper helper = new FixedPrecisionApfloatHelper(precision);
Apfloat ulp = helper.ulp(new Apfloat(1));
```
Quick and dirty test in commit works as you've described it.
So I will refactor my app so that if a precision is set in a thread, it can create a new `ApfloatField` (or, in the Symja app, the `ExprField`) object.
Thank you very much for your support.
@mtommila @maisonobe What is needed to improve the apfloat results for greater precisions?
See:
Should the algorithms be directly implemented in apfloat?
If I understood correctly, the Hipparchus library is kind of like JScience, at least for the part that you can implement your own `Field` implementation and then the Hipparchus library provides some generic algorithms that operate on any implementation of the `Field` interface.
The apfloat library has "Field" implementations for JScience, for example:
- http://www.apfloat.org/apfloat_java/docs/org/apfloat/jscience/FixedPrecisionApcomplexField.html
- http://www.apfloat.org/apfloat_java/docs/org/apfloat/jscience/FixedPrecisionApfloatField.html
The two above use a fixed precision (that can be set to an arbitrary value). There are also arbitrary-precision versions of "Field" for apfloat JScience, but something like that might not be very useful if the Hipparchus library does not provide particular support for a `Field` where also the precision of numbers can be set to an arbitrary value.
Thus, what might be useful is to create an implementation of a Hipparchus `Field` that is backed by an Apfloat or Apcomplex, with a fixed precision. This can be done easily using a `FixedPrecisionApfloatHelper` or `FixedPrecisionApcomplexHelper`.
I'm not familiar with the Hipparchus library, so if the algorithms and the library in general limit the precision to that of a `double` (approx. 16 decimal digits), then of course such a `Field` implementation might not be useful at all.
I also noticed that there is an interface `CalculusFieldElement` in Hipparchus, which has lots of additional methods, but it would seem like most of them are also in `FixedPrecisionApfloatHelper` and `FixedPrecisionApcomplexHelper`, or could be easily implemented.
And indeed I can see that @axkr has already done an implementation of `CalculusFieldElement` for Apcomplex. However, I would perhaps implement it somewhat differently, e.g.:

- move the `FixedPrecisionApcomplexHelper` (and thus also the precision) to the `Field` implementation
- the `ApcomplexElement` should reference the `ApcomplexField` (and use the helper through that)
- don't use `ApcomplexMath` directly for anything, instead use the `FixedPrecisionApcomplexHelper` from the field

If I understood the purpose of the Hipparchus library correctly then no, algorithms should not be directly implemented in apfloat...?
> If I understood correctly, the Hipparchus library is kind of like JScience, at least for the part that you can implement your own `Field` implementation and then the Hipparchus library provides some generic algorithms that operate on any implementation of the `Field` interface.
Yes.
> but something like that might not be very useful if the Hipparchus library does not provide particular support for a `Field` where also the precision of numbers can be set to an arbitrary value.
There are no limitations there, and in fact we also check the algorithms with our own `Dfp` (Decimal Floating Point) class, which is also arbitrary precision (but much less efficient than Apfloat)! This is the reason why we need to get the ulp from the field implementation and not the primitive double ulp. Fields are used for several different purposes. One is increased precision (with `Dfp` or Apfloat), another is automatic differentiation (with one or several variables, at any derivation order), still another is `Tuple` computation (but there are huge limitations there), and `Complex`, and we are also thinking about interval arithmetic.
> If I understood the purpose of the Hipparchus library correctly then no, algorithms should not be directly implemented in apfloat...?
It's up to you. The Apache License and MIT license are mostly compatible, so if you want to grab some functions from Hipparchus into Apfloat, you are welcome to do so. The higher-level algorithms like Ordinary Differential Equations, statistics, geometry, optimization, linear algebra... are probably out of scope for Apfloat, but the special functions like elliptic functions may be of interest to you (though the implementation is not stabilized yet). I would also like to add arbitrary precision Gamma and Bessel functions, but have not started yet.
In fact, I started developing mathematical libraries with Mantissa, and JScience grabbed it at some point. Then I merged Mantissa into Apache Commons Math and continued development there, before the development team split and the Hipparchus project was started. The `Field` part, however, was created during the Apache time, so after Mantissa was merged, and therefore also after JScience grabbed it, so I guess they developed it independently.
@mtommila I implemented your suggestion to use `FixedPrecisionApfloatHelper`. Only for `ApfloatMath.round` I didn't find an equivalent operation. Is this the correct way?
Does someone have a good example (for example the inverse of a matrix) which can be used as a JUnit test case to test the higher precision? My current JUnit test cases are useless for that purpose:
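In the absence of a matrix example, one simple precision-sensitive check is a quantity that vanishes at double precision but survives at higher precision. This is sketched here with stdlib `BigDecimal`; the same pattern could be ported to an `ApfloatElement`-based JUnit test:

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class PrecisionSensitiveTest {
    public static void main(String[] args) {
        // (1 + 1e-20) - 1 is invisible at double precision, because 1e-20
        // is far below the ~2.2e-16 double epsilon.
        double d = (1.0 + 1e-20) - 1.0;
        System.out.println(d); // 0.0

        // At 30 significant digits the tiny term survives.
        MathContext mc = new MathContext(30);
        BigDecimal b = BigDecimal.ONE.add(new BigDecimal("1e-20"), mc)
                                     .subtract(BigDecimal.ONE, mc);
        System.out.println(b); // 1E-20
    }
}
```

A test that asserts the result is non-zero only passes when the field really carries the extra digits, so it would catch a precision regression.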
At least the `rint` implementation does not look quite right; it should probably use the scale of the number for rounding if it's > 0, and if not, then the result is 0, -1 or 1 depending on the value of the number.
Something like:
```java
public static Apfloat apfloatRint(Apfloat fApfloat) {
    if (fApfloat.scale() > 0) {
        return ApfloatMath.round(fApfloat, fApfloat.scale(), RoundingMode.HALF_EVEN);
    }
    if (ApfloatMath.abs(fApfloat).compareTo(new Apfloat("0.5")) <= 0) {
        return Apfloat.ZERO;
    }
    return ApfloatMath.copySign(Apfloat.ONE, fApfloat);
}
```
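For reference, assuming `rint` is meant to follow the half-even convention that the `RoundingMode.HALF_EVEN` above suggests, its expected behavior matches `java.lang.Math.rint` for primitive doubles:

```java
public class RintReference {
    public static void main(String[] args) {
        // Math.rint rounds half to even: ties go to the nearest even integer.
        System.out.println(Math.rint(0.5));  // 0.0
        System.out.println(Math.rint(1.5));  // 2.0
        System.out.println(Math.rint(2.5));  // 2.0
        System.out.println(Math.rint(-0.5)); // -0.0
    }
}
```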
Can we consider the issue closed now?
Yes it can be closed
What is the correct use of the `ulp` method? For symbolic "exact number" classes I would like to return `0`. Is this correct?

In the following case I'm using an instance of the "not exact number" apfloat.org library. Can the `CalculusFieldElement#ulp()` method from type `T` be used in the following case? Otherwise I get division by 0 errors. Should `convergenceCriterion()` handle the `0` case?