Let $S_n = \sum_{i=1}^{n} a_i$ be a sequence of partial sums. This repository contains implementations of the following series transformations, each of which generates a new sequence $T_n$:
Aitken's transformation (or delta-squared process):
Complexity: esum O($2n\log n$), acelsum O($n$).
$$T_n = \frac{S_{n-1} S_{n+1} - S_n^2}{S_{n+1} - 2 S_n + S_{n-1}}.$$
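As a minimal sketch (not the package's internal implementation), the formula can be applied directly to a list of partial sums; the helper name `aitken` and the use of plain floats are illustrative assumptions:

```python
def aitken(S):
    """Apply T_n = (S_{n-1} S_{n+1} - S_n^2) / (S_{n+1} - 2 S_n + S_{n-1})
    to a list of partial sums S and return the transformed sequence."""
    T = []
    for n in range(1, len(S) - 1):
        denom = S[n + 1] - 2 * S[n] + S[n - 1]
        T.append((S[n - 1] * S[n + 1] - S[n] ** 2) / denom)
    return T

# Partial sums of the Basel problem, sum of 1/i^2
S, total = [], 0.0
for i in range(1, 12):
    total += 1 / i ** 2
    S.append(total)
print(S[-1], aitken(S)[-1])  # compare both against pi^2/6
```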
Richardson's transformation (modified, with a given $p$):
Complexity: esum O($2(\log n)^2$), acelsum O($\log n$).
$$T_n = S_{2n} + \frac{S_{2n} - S_n}{2^p - 1}.$$
Here, we use $p = 1$ for simplicity.
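A minimal sketch of this rule over a list of already-computed partial sums (the helper name `richardson` is ours, not the package API):

```python
def richardson(S, p=1):
    """Return T_n = S_{2n} + (S_{2n} - S_n) / (2**p - 1) for every n
    such that S_{2n} is available in the list S."""
    return [S[2 * n] + (S[2 * n] - S[n]) / (2 ** p - 1)
            for n in range((len(S) + 1) // 2)]
```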
The epsilon algorithm:
Complexity: esum O($2n\log n$), acelsum O($n$).
Let the auxiliary sequence $\varepsilon_k^j$ be defined by:
$$\varepsilon_{-1}^{j} = 0\ \text{and}\ \varepsilon_{0}^{j} = S_j,$$
and inductively:
$$\varepsilon_{k+1}^{j} = \varepsilon_{k-1}^{j+1} + \left[\varepsilon_{k}^{j+1} - \varepsilon_{k}^{j}\right]^{-1}.$$
Then, $T_n = \varepsilon_{n-1}^{2}$ (the odd-order entries are only intermediate steps).
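A minimal sketch of the recursion above, computed column by column over a list of partial sums; only even-order columns are kept as accelerated values. The function name and the absence of a guard against zero differences are simplifying assumptions:

```python
def epsilon_algorithm(S):
    """Build the epsilon table from the partial sums S and return the
    deepest even-order entry reached."""
    prev = [0.0] * (len(S) + 1)   # column eps_{-1}^j = 0
    curr = list(S)                # column eps_{0}^j  = S_j
    best, k = S[-1], 0
    while len(curr) > 1:
        # eps_{k+1}^j = eps_{k-1}^{j+1} + 1 / (eps_k^{j+1} - eps_k^j)
        nxt = [prev[j + 1] + 1.0 / (curr[j + 1] - curr[j])
               for j in range(len(curr) - 1)]
        prev, curr, k = curr, nxt, k + 1
        if k % 2 == 0:            # odd columns are only intermediate steps
            best = curr[0]
    return best
```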
The G transformation:
Complexity: esum O($4n\log n$), acelsum O($2n$).
Let two auxiliary sequences $s_k^{(n)}$ and $r_k^{(n)}$ be defined by:
$$s^{(n)}_0 = 1,\ r^{(n)}_1 = x_n,\ n=0,1,\ldots,$$
inductively:
$$s^{(n)}_{k+1} = s^{(n+1)}_{k} \left( \frac{r^{(n+1)}_{k+1}}{r^{(n)}_{k+1}} - 1 \right),\ k,n = 0,1,\ldots$$
and
$$r^{(n)}_{k+1} = r^{(n+1)}_{k} \left( \frac{s^{(n+1)}_{k}}{s^{(n)}_{k}} - 1 \right),\ k=1,2,\ldots;\ n=0,1,\ldots$$
Then, $T_n = S_n - \frac{S_{n+1} - S_{n}}{r^{(n+1)}_{1} - r^{(n)}_{1}} r^{(n)}_{1}$.
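A minimal sketch of this final formula, taking $r_1^{(n)} = x_n$ to be the $n$-th series term; only the first-order step is shown, and the function name is illustrative:

```python
def g_transform(S, x):
    """S: partial sums, x: series terms aligned so that x[n] = x_n.
    Returns T_n = S_n - (S_{n+1} - S_n) / (r_1^{(n+1)} - r_1^{(n)}) * r_1^{(n)}
    with r_1^{(n)} = x[n]."""
    return [S[n] - (S[n + 1] - S[n]) / (x[n + 1] - x[n]) * x[n]
            for n in range(len(S) - 1)]
```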
Levin's transformation:
Complexity: esum O($4n\log n$), acelsum O($2n$).
This method is defined by
$$W_n^{(k)} = \frac{M_n^{(k)}}{N_n^{(k)}}$$
where
$$M_n^{(0)} = \frac{S_n}{g(n)},$$
$$M_{n}^{(k+1)} = \frac{M_{n+1}^{(k)} - M_{n}^{(k)}}{a_{n+k}^{-1} - a_{n+1}^{-1}},$$
and
$$N_n^{(0)} = \frac{1}{g(n)},$$
$$N_{n}^{(k+1)} = \frac{N_{n+1}^{(k)} - N_{n}^{(k)}}{a_{n+k}^{-1} - a_{n+1}^{-1}}.$$
There are several classic choices for the auxiliary function $g(n)$.
Then, $T_n = \frac{M_n^{(1)}}{N_n^{(1)}}$.
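A minimal sketch of this first step ($k = 0$) over a list of partial sums; the choice $g(n) = a_n$ and the function name are illustrative assumptions, not the package's defaults:

```python
def levin_step(S, a, g=None):
    """S: partial sums, a: series terms aligned so that a[n] produces S[n].
    Returns the sequence T_n = M_n^(1) / N_n^(1)."""
    if g is None:
        g = lambda n: a[n]                    # assumed auxiliary function
    M0 = [S[n] / g(n) for n in range(len(S))]
    N0 = [1.0 / g(n) for n in range(len(S))]
    T = []
    for n in range(len(S) - 1):
        denom = 1.0 / a[n] - 1.0 / a[n + 1]   # a_{n+k}^{-1} - a_{n+1}^{-1} with k = 0
        M1 = (M0[n + 1] - M0[n]) / denom
        N1 = (N0[n + 1] - N0[n]) / denom
        T.append(M1 / N1)
    return T
```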
Make sure you have the mpmath library installed:
pip install mpmath
To install the package, run the following command:
pip install extrapolation
The transformations above are implemented in the package and are used through the esum and acelsum functions.
The esum function receives as input the series term function, the name of the transformation, the target error, a logarithm flag, and the working precision.
It determines the smallest value of n for which the difference between the last transformed partial sums becomes smaller than the specified error, and returns the transformed series. The following is an example:
from extrapolation import esum
import math
# Test the Basel problem without transformation ('None') and with the Richardson transformation
n0, no_accelerated = esum(lambda x: 1/x**2, 'None', error=1e-12, logarithm=False, precision=100)
n1, accelerated = esum(lambda x: 1/x**2, 'Richardson', error=1e-12, logarithm=False, precision=100)
# Comparison
print(f"True value: {math.pi ** 2 / 6}")
print(f"Without acceleration: {no_accelerated[-1]}, with {n0} iterations")
print(f"With acceleration: {accelerated[-1]}, with {n1} iterations")
Out:
True value: 1.6449340668482264
Without acceleration: 1.6449330668487264267523920296, with 1000000 iterations
With acceleration: 1.6449340611255960068665838135, with 22896 iterations
There is also the acelsum function, which receives as input the series term function, the name of the transformation, the number of terms n, the logarithm flag, and the precision.
This function computes the transformed partial sums up to the given number of terms, returning the result either in log-scale or in normal scale. The following is an example:
from extrapolation import acelsum
import math
# Test the Basel problem without transformation ('None') and with the Richardson transformation
no_accelerated = acelsum(lambda x: 1/x**2, 'None', n=1000, logarithm=False, precision=100)
accelerated = acelsum(lambda x: 1/x**2, 'Richardson', n=1000, logarithm=False, precision=100)
# Comparison
print(f"True value: {math.pi ** 2 / 6}")
print(f"Without acceleration: {no_accelerated[-1]}")
print(f"With acceleration: {accelerated[-1]}")
Out:
True value: 1.6449340668482264
Without acceleration: 1.6439345666815597935850701245
With acceleration: 1.6449310678482254269248263997