km-git-acc / dbn_upper_bound

Computational effort to upper bound the de Bruijn-Newman constant as part of a Polymath project

Next steps after conditional DBN <= 0.11 #113

Open km-git-acc opened 5 years ago

km-git-acc commented 5 years ago

@rudolph-git-acc Opening the new thread. Agreed, dbn < 0.1 is a good place to stop. I think the tloop part could potentially also be passed through Boinc. Will try it out on a smaller X to see how it goes (essentially, apart from the t1, t2, y parameters (appropriately chosen for each job), we would also pass one additional parameter to each Boinc job: the web address of the relevant storedsums file).

I will check if there are ready-made scripts for RH verification to help in a detailed understanding of the process involved. The one at https://www.wstein.org/simuw/misc/zeta_grid.html seems to be a dead link. Is Gourdon-Demichel's code available?

km-git-acc commented 5 years ago

@rudolph-git-acc Thanks. Will submit any new jobs using the latest script.

The N2 function below for root counting is working in gp (although only for small values of T1 and T2), and provides a speed advantage as well.

\\ Riemann-Siegel theta function and Hardy Z-function
tht(T) = (-I/2)*(log(gamma(1/4 + I*T/2))-log(gamma(1/4 - I*T/2))) - log(Pi)*T/2;
Z(T) = exp(I*tht(T))*zeta(1/2 + I*T);

\\ contour integrals of zeta'/zeta: horizontal segment from 100+iT to 1/2+iT,
\\ vertical segment from 100 to 100+iT, and the vertical piece between heights T1 and T2
horiz(T) = intnum(s=100+I*T,1/2+I*T,zeta'(s)/zeta(s));
verti(T) = intnum(s=100,100+I*T,zeta'(s)/zeta(s));
delta_verti(T1,T2) = intnum(s=100+I*T1,100+I*T2,zeta'(s)/zeta(s));

\\ zero counts via the argument principle: N(T), and two ways of counting
\\ the zeros with ordinate between T1 and T2
N(T) = tht(T)/Pi + imag(verti(T)+horiz(T))/Pi;
N1(T1,T2) = (1/Pi)*(tht(T2) - tht(T1) + imag(verti(T2) + horiz(T2)) - imag(verti(T1) + horiz(T1)));
N2(T1,T2) = (1/Pi)*(tht(T2) - tht(T1) + imag(horiz(T2) - horiz(T1) + delta_verti(T1,T2)));

\\ timing comparison of the three counts
stats(T1,T2) = {
st=getwalltime();print(N(T2)-N(T1));en=getwalltime();print(en-st);
st=getwalltime();print(N1(T1,T2));en=getwalltime();print(en-st);
st=getwalltime();print(N2(T1,T2));en=getwalltime();print(en-st);
}

The behavior of zeta alone in the negative s plane may be chaotic, which could be causing the infinity errors. The formula used by Fredrik goes through the Hardy Z function, https://en.wikipedia.org/wiki/Z_function, which is conjugate-symmetric and more stable.
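
As a quick sanity check of that symmetry, a minimal sketch reusing the Z() defined above: for real T the Hardy Z-function should come out numerically real, which is what makes it better behaved here.

Z_resid(T) = abs(imag(Z(T)));  \\ expect something at rounding-error level, e.g. Z_resid(100)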

rudolph-git-acc commented 5 years ago

@km-git-acc

Great. I will try that in ARB immediately!

Have now also created and uploaded a fully matrix-free Tloop script (Tloopnomat) that keeps memory usage flat (all matrices have been removed). It still allows the detailed print (e.g. to verify ABB_eff accuracy); however, the output will not be in the correct sequence of mesh points, so this data should be used with care. This script should not encounter any Boinc memory issues.

EDIT 1: Have replicated the script in ARB and have a few observations:

EDIT 2: Did a Tloop test run on the 9x10e21 case and the first rectangle was produced in ~4 minutes (on a 4-thread machine). The first 'num' value is 11148600, so roughly a factor 3 higher than the 2x10e20 run. I think your guess of 60k rectangles is in the right ballpark. I will be travelling abroad next week and plan to put the 64-thread 'beast' on this task. It should be doable in 7-8 days. Have also selected one value from the detailed output to check accuracy against the real ABBeff (it will probably run all night).

km-git-acc commented 5 years ago

@rudolph-git-acc Amazing. I had thought eliminating vectorization would slow things down, but that doesn't seem to be the case. Will submit the first half of the grid jobs with this. It will also be interesting to see how the output differs between the local machine and the grid. To show that the detailed output is correct despite the print sequence, here is an example:

vectorized method
--------------------------------
Rectangle(1) : 0.25, 20, 6, 0.0000000000, 1.3780326595315174257, 20
0.25, 0.2, 60000155019, 1.71122848892101756279232171662 + 0.0571419836195889948644500370620*I         0
0.25, 0.36, 60000155019, 1.63513754913417685995968082866 - 0.0916584261286675476930795478425*I        1
0.25, 0.52, 60000155019, 1.56831524396056539837033571308 - 0.101846809734581515839533672987*I         2
0.25, 0.68, 60000155019, 1.51022668159175040572896170684 - 0.0906272465770796498492191477996*I        3
0.25, 0.84, 60000155019, 1.45985920818278585950986955673 - 0.0781854062099169632234540616749*I        4
0.25, 1, 60000155019, 1.41603492282122645614940483947 - 0.0675266444231709396618909136964*I           5
0.25, 1, 60000155019.2, 1.42376925349847571121667889395 - 0.0152083341036381748962136561197*I         6
0.25, 1, 60000155019.4, 1.42237082706401380855278627492 + 0.0381968735592510024939265704026*I         7
0.25, 1, 60000155019.6, 1.41156878697027044797176385280 + 0.0903631100500691411181646114823*I         8
0.25, 1, 60000155019.8, 1.39225479140069997770917114519 + 0.138905086712818194196056661057*I          9
0.25, 1, 60000155020, 1.36591866676164819409376388070 + 0.182318969192973711095494229251*I            10
0.25, 0.84, 60000155020, 1.40020796618818931386448518627 + 0.207963137975477478882040569487*I         11
0.25, 0.68, 60000155020, 1.43908625442001305545977999593 + 0.238405468215662012009669573470*I         12
0.25, 0.52, 60000155020, 1.48616667643162659196445518806 + 0.273820924642714850832986999730*I         13
0.25, 0.36, 60000155020, 1.56022372577383925678298046514 + 0.310270097565195618716281440388*I         14
0.25, 0.2, 60000155020, 1.76713396321070036311184927439 + 0.322324608577391970707236343351*I          15
0.25, 0.2, 60000155019.8, 1.49352027942829916988027302315 + 0.167326995974741298721809057966*I        16
0.25, 0.2, 60000155019.6, 1.68575072430787983091336722222 + 0.415865286835321583885467148208*I        17
0.25, 0.2, 60000155019.4, 1.95034975370338820208145019284 - 0.0120458372314892329478820083194*I       18
0.25, 0.2, 60000155019.2, 1.56276661916335907498222539712 - 0.168363772640894874803610384499*I        19

in-situ method
-----------------------------------------------
Rectangle(1) : 0.25, 20, 6, 0.0000000000, 1.3780326595315174257, 20
0.25, 0.2, 60000155019, 1.71122848892101756279232171662 + 0.0571419836195889948644500370620*I         0    fs1    m2
0.25, 1, 60000155019, 1.41603492282122645614940483947 - 0.0675266444231709396618909136964*I           5    fs2    m4
0.25, 0.36, 60000155020, 1.56022372577383925678298046514 + 0.310270097565195618716281440388*I         14   fs3    m1
0.25, 0.2, 60000155019.2, 1.56276661916335907498222539712 - 0.168363772640894874803610384499*I        19   fs4    m2
0.25, 0.36, 60000155019, 1.63513754913417685995968082866 - 0.0916584261286675476930795478425*I        1
0.25, 1, 60000155019.2, 1.42376925349847571121667889395 - 0.0152083341036381748962136561197*I         6
0.25, 0.52, 60000155020, 1.48616667643162659196445518806 + 0.273820924642714850832986999730*I         13
0.25, 0.2, 60000155019.4, 1.95034975370338820208145019284 - 0.0120458372314892329478820083194*I       18
0.25, 0.52, 60000155019, 1.56831524396056539837033571308 - 0.101846809734581515839533672987*I         2
0.25, 1, 60000155019.4, 1.42237082706401380855278627492 + 0.0381968735592510024939265704026*I         7
0.25, 0.68, 60000155020, 1.43908625442001305545977999593 + 0.238405468215662012009669573470*I         12
0.25, 0.2, 60000155019.6, 1.68575072430787983091336722222 + 0.415865286835321583885467148208*I        17
0.25, 0.68, 60000155019, 1.51022668159175040572896170684 - 0.0906272465770796498492191477996*I        3
0.25, 1, 60000155019.6, 1.41156878697027044797176385280 + 0.0903631100500691411181646114823*I         8
0.25, 0.84, 60000155020, 1.40020796618818931386448518627 + 0.207963137975477478882040569487*I         11
0.25, 0.2, 60000155019.8, 1.49352027942829916988027302315 + 0.167326995974741298721809057966*I        16
0.25, 0.84, 60000155019, 1.45985920818278585950986955673 - 0.0781854062099169632234540616749*I        4    ep1    m4
0.25, 1, 60000155019.8, 1.39225479140069997770917114519 + 0.138905086712818194196056661057*I          9    ep2    m3
0.25, 1, 60000155020, 1.36591866676164819409376388070 + 0.182318969192973711095494229251*I            10   ep3    m3
0.25, 0.2, 60000155020, 1.76713396321070036311184927439 + 0.322324608577391970707236343351*I          15   ep4    m1

It's definitely concerning that N2(T1,T2) is underperforming. Possibly the horizontal segments take more time to process than we thought, and now there are two of them instead of one. It will be interesting to dissect in which integrals we are gaining and losing time. Another idea is to lower eps to 1 instead of 99 (those were the contours normally used in the old papers from the 1930s, although T used to be much smaller then), but the stability of the calculation will have to be checked.
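
To make that experiment easy to run, here is a minimal sketch that only generalises the hard-coded right edge (the 100 in horiz/verti above) into a parameter eps, assuming that is roughly what eps refers to; the same N2 count can then be timed at eps = 99, eps = 1, and so on:

horiz_e(T, eps) = intnum(s = eps + I*T, 1/2 + I*T, zeta'(s)/zeta(s));
delta_verti_e(T1, T2, eps) = intnum(s = eps + I*T1, eps + I*T2, zeta'(s)/zeta(s));
N2_e(T1, T2, eps) = (1/Pi)*(tht(T2) - tht(T1) + imag(horiz_e(T2, eps) - horiz_e(T1, eps) + delta_verti_e(T1, T2, eps)));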

rudolph-git-acc commented 5 years ago

@km-git-acc

Great. With some effort I could also fix the sequence in the detailed print, however at 10e21 that would require a matrix of a couple of gigabytes. Probably the easiest way to 'fix' this is to just print the sequence number (which I have) in front of each output line (no memory impact). An informed user could then easily copy the output and sort it by sequence number. Easy change to make, but will do later.

Have done some further testing on the root counting and found something interesting. Still can't get N(T2)-N(T1) faster than N(0..T2); however, I noted that it is the horizontal integral that causes the slowdown. Then I experimented with different T2s and found that the horizontal integral is very sensitive to the precise location. I guess the slowdown has something to do with the amount of oscillation in zeta'/zeta along that horizontal line. Or better: maybe it is related to the vicinity of a zero on the critical line. Since zeros get denser at higher x, it becomes more sensitive to the choice of T2. This could be an important factor in why Fredrik's script doesn't scale well beyond 10^9; we just have to develop a good 'horizontal integral' T2-location finder (similar to the Barrier). With your 'wrinkled' picture in mind, my gut feel is that we need to 'ride' the peak of a horizontal wrinkle and avoid integrating through the 'troughs'.

Here is an example; do you see the same effect in pari/gp?

DDDs-iMac:debug DDD$ ./rootcountcontour 0 1000 32
Vertical:
cpu/wall(s): 1.84467e+10 0.008
Horizontal:
cpu/wall(s): 1.84467e+10 0.511
[649.00000 +/- 1.57e-6]

DDDs-iMac:debug DDD$ ./rootcountcontour 0 1001 32
Vertical:
cpu/wall(s): 1.84467e+10 0.001
Horizontal:
cpu/wall(s): 1.84467e+10 0.381
[649.00000 +/- 1.32e-6]

DDDs-iMac:debug DDD$ ./rootcountcontour 0 1002 32
Vertical:
cpu/wall(s): 1.84467e+10 0.001
Horizontal:
cpu/wall(s): 1.84467e+10 0.508
[650.00000 +/- 1.56e-6]

DDDs-iMac:debug DDD$ ./rootcountcontour 0 1003 32
Vertical:
cpu/wall(s): 1.84467e+10 0.001
Horizontal:
cpu/wall(s): 1.84467e+10 0.666
[651.00000 +/- 1.56e-6]

DDDs-iMac:debug DDD$ ./rootcountcontour 0 1004 32
Vertical:
cpu/wall(s): 1.84467e+10 0.001
Horizontal:
cpu/wall(s): 1.84467e+10 0.269
[652.00000 +/- 1.31e-6]
km-git-acc commented 5 years ago

@rudolph-git-acc

Yeah, having the sequence number obviates the need to have a sorted output.

Regarding the integral, gp is throwing bad results for any decent-sized T, so I haven't been able to replicate the results. But the general idea seems right: if zeta starts from a trough, the horizontal segment should be more oscillatory, while if it starts from a peak, the segment should be much smoother, so numerical integration should find the latter easier.

Interestingly, if one is willing to give up a little accuracy in the N(T) estimate, one can use analytic estimates (bounds on S(T) come from the literature; the bound used below is from a 1941 paper, so it must have improved somewhat since then): https://www.aimath.org/WWN/rh/articles/html/71a/

S_bound(T) = 0.137*log(T) + 0.443*log(log(T))+1.588;
N(T) = {main_est = (T/2/Pi)*log(T/2/Pi/exp(1))+7/8; err = S_bound(T); return([floor(main_est-err),ceil(main_est+err)]);}

for(i=3,20,print([i,N(10^i)]))
[3, [645, 653]]
[4, [10139, 10147]]
[5, [138064, 138073]]
[6, [1747140, 1747151]]
[7, [21136120, 21136131]]
[8, [248008018, 248008030]]
[9, [2846548027, 2846548039]]
[10, [32130158308, 32130158321]]
[11, [357948363077, 357948363091]]
[12, [3945951430264, 3945951430279]]
[13, [43124192297094, 43124192297110]]
[14, [467888702914975, 467888702914991]]
[15, [5045354828589526, 5045354828589543]]
[16, [54118226280292469, 54118226280292486]]
[17, [577829042746896145, 577829042746896164]]
[18, [6144758226908675398, 6144758226908675417]]
[19, [65112260263483892764, 65112260263483892784]]
[20, [687769382578810314764, 687769382578810314785]]

In general the actual answer is close to the middle of the range.

While just establishing such a range cannot be directly used for RH verification (the upper bound has to match the sign-change count for that), it seems a lot of intricate research has been done to use just the analytic estimates. This paper is a good read in that direction: https://www.jstor.org/stable/2006473. Theorem 3.2 is key, and then verification boils down to finding the required number of sign changes in Gram blocks, with the minor restriction that verification can be done only for certain T values, not for arbitrary ones. It also now seems the work done by Odlyzko, Gourdon-Demichel and others is an extension of this method.
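
For concreteness, here is a rough gp sketch of the Gram point / Gram's law machinery (reusing tht() and Z() from the snippet earlier in the thread, and only usable at small heights for the reasons noted there): the n-th Gram point g_n is the root of tht(t) = n*Pi, Gram's law expects (-1)^n * Z(g_n) > 0, and Gram blocks / Rosser's rule deal with the runs where this fails.

gram(n) = solve(t = 17, 5000, tht(t) - n*Pi);  \\ bracket chosen for small n only
gram_law_ok(n) = real((-1)^n * Z(gram(n))) > 0;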

So the contour integral approach will be useful if we can find a way to contour integrate within the strip as well and not depend on the sign change counts, otherwise we can just rely on the latter.

EDIT: Also, consecutive T ranges share a horizontal segment, and given that the horizontal segments take the majority of the time, instead of assigning an entire T range to a client machine, we can just ask the machine to calculate individual horizontal (and vertical) segments and then use the results later as necessary. In fact, even for an individual horizontal segment, we can partition it into multiple sub-segments and just ask the client machines to calculate those (see the sketch below). It seems sub-segments in the strip or closer to the strip should take more time, so they should be longer than the remaining ones.
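
A minimal sketch of that partitioning (the cut points here are made up purely for illustration; an actual scheme would tune them as discussed above, and each piece could go to a different client). By additivity, horiz_split(T) should agree with horiz(T) from the earlier snippet:

horiz_piece(T, a, b) = intnum(s = b + I*T, a + I*T, zeta'(s)/zeta(s));
cuts = [1/2, 1, 2, 5, 10, 25, 50, 100];  \\ hypothetical cut points between 1/2 and 100
horiz_split(T) = sum(k = 1, #cuts - 1, horiz_piece(T, cuts[k], cuts[k+1]));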

rudolph-git-acc commented 5 years ago

@km-git-acc

Nice! Amazing to see how close the interval actually gets. It is a pity the RH can't be proved this way.

Did some further experiments and found that when making the eps 999, the vertical integral becomes so fast that it easily reaches T=9e21 in a fraction of a second. So we won't need to split up work on the vertical at this eps. The horizontal obviously gets longer as well; however, it does appear to be scalable. What I noted, however, is that the closer you get to 1/2, the tougher the calculations become (like the 'close to t=0' case in the Tloop). The range from 1000 back to 100 seems doable, but even going a small step from 100 to 99.9 at T=10e11 takes ages. I probably should try to put T at a 'peak' value in the middle between two zeros and see if that accelerates things (we actually need the opposite of a Lehmer pair for finding this spot).

The increasingly tight 'jungle' close to the critical line seems to become exponentially more difficult to navigate through for the horizontal integral. The only way out of this is to follow the path over the tops of the adjacent mountains.

EDIT: Tried to start at the mid point between two consecutive zeros, but did not see a speed up. I recall there was a vestige of the Euler product still visible in the critical strip. Could this help to find a relatively high value of zeta(s) at high Ts in a similar manner as we did for the Barrier location?
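
On the Euler product idea, here is a hedged sketch of such a heuristic (this is not the project's actual Barrier location finder; euler_score, the prime cutoff 100 and the scan step are made up for illustration). It scores a candidate T by the size of the partial Euler product over small primes at 1/2 + I*T and keeps the largest score in a window:

euler_score(T, pmax = 100) = abs(prodeuler(p = 2, pmax, 1/(1 - p^(-1/2 - I*T))));
best_T(Tlo, Thi, step) = {
  my(best = Tlo, bestval = 0, v);
  forstep(T = Tlo, Thi, step,
    v = euler_score(T);
    if(v > bestval, bestval = v; best = T));
  return([best, bestval]);
}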

km-git-acc commented 5 years ago

@rudolph-git-acc Yeah, the location finder should be fully reusable. Only that we are now concerned about the behavior at y=0 instead of y=1 so that may have to be tweaked. Also the T value will be half of what we get from the script. The peak heights should be large at y=0 (say 50 or even 70) instead of the 4.x we see at y=1. Apart from that, the current location finder may often give a sharp mountain (width 1/2) since we are using a barrier width of 1. The barrier width may have to be increased to search for tall wide mountains. According to the GUE hypothesis, wide gaps between zeros (as a % of the average spacing) should always exist even if relatively rare.

Taken to the logical extreme, at eps=infinity, there should be no argument change in the vertical segment (removing the need to calculate that), and only the horizontal segments count.

rudolph-git-acc commented 5 years ago

@km-git-acc

The 2x10e20 T-loop has successfully completed in 2 days and a bit :-) Will kick off the 9x10e21 T-loop tomorrow evening. Have uploaded the T-loops completed so far.

EDIT: The adjusted Barrier optimiser definitely works. I found for instance

60000089065.5, 19.01604617, 85.19779040, 62.64130793, 4.098823856, 5.401346214, 4.931624638

which gives: T=30000044532.75 and this indeed resides nicely in the middle of this "anti-Lehmer gap" of size ~ 1:

Root nr         Im(Root)
101636120190    30000044532.39737577611720298124189639822287
101636120191    30000044533.36681062910682734301970721676805

Also measured the impact on timings and the effect is clearly there. Testing only the horizontal integral from 1 + T*i to 2 + T*i gives:

DDDs-iMac:debug DDD$ ./rootcountcontour 30000054532.75 48 (+10000)
Horizontal:
cpu/wall(s): 1.84467e+10 231.312
[101636155657.29 +/- 7.09e-3]

DDDs-iMac:debug DDD$ ./rootcountcontour 30000044532.75 48 (optimised location)
Horizontal:
cpu/wall(s): 1.84467e+10 201.665
[101636120186.79 +/- 7.50e-3]

DDDs-iMac:debug DDD$ ./rootcountcontour 30000034532.75 48 (-10000)
Horizontal:
cpu/wall(s): 1.84467e+10 280.224
[101636084716.85 +/- 2.76e-3]

And by using the 'skew' to the right (the 85.vvv and 62.vvv numbers), I got an even better result:

DDDs-iMac:debug DDD$ ./rootcountcontour 30000044533 48
Horizontal:
cpu/wall(s): 1.84467e+10 127.75
[101636120190.71 +/- 7.02e-3]

Still feel there is something strange going on with the integral itself. Sometimes it is extremely fast, e.g. from 1000 + T*i to 200 + T*i, but then it completely stalls from 1000 + T*i to 113 + T*i (it is still extremely fast at 114), and then when I try 120 + T*i to 102 + T*i it is extremely fast again. Something else seems to be at play here and I need to do some further digging.

EDIT: Did get some more timings on the horizontal run at 30000044533 with 48 digits and x + T*i. The data looks pretty counterintuitive:

x range : time
0.5 .. 0.75 : 253 secs
0.75 .. 1 : 125 secs
1 .. 2 : 102 secs
3 .. 4 : 32 secs
approximately the same time for all +1 steps until:
14 .. 15 : takes ages
all +1 steps from 14..106 take ages (only managed to complete 99 to 100 and it took ~17000 secs)
106 .. 1000 : +1 steps in 0.001 secs (but note that when I try integrating the range 106-110 it takes ages again)

What could it be that causes the horizontal line to become so complex that integrating four steps of 1 from 106-110 takes 0.004 secs, while integrating one step from 106 to 110 takes forever? Also tried smaller steps at lower x, e.g. 15 .. 15.001, but that still takes ages.

So, the main performance issues are not induced by the 'jungle' near the critical line, but seem to reside in the range (14..103). There seems to be a delicate trade-off to be made between the vertical and horizontal integrals, but I am not sure whether the 'amount of complexity to be integrated' can actually be optimised or whether it just stays stable for all choices. We clearly need to cut the horizontal line into smaller pieces, but which logic should we apply? Thoughts?
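
One possible cutting rule, as a hedged sketch (the time budget tmax in seconds, the minimal piece width 1/4 and the midpoint splitting are all made-up knobs; it leans on gp's alarm(), which, if I recall its behaviour correctly, returns a t_ERROR object when the enclosed computation does not finish in time):

horiz_adapt(T, a, b, tmax) = {
  if(b - a < 1/4,  \\ don't subdivide forever: just pay the cost on tiny pieces
    return(intnum(s = b + I*T, a + I*T, zeta'(s)/zeta(s))));
  my(v = alarm(tmax, intnum(s = b + I*T, a + I*T, zeta'(s)/zeta(s))));
  if(type(v) != "t_ERROR", return(v));
  \\ timed out: cut at the midpoint and try the two halves separately
  return(horiz_adapt(T, a, (a+b)/2, tmax) + horiz_adapt(T, (a+b)/2, b, tmax));
}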

km-git-acc commented 5 years ago

@rudolph-git-acc Interesting that the result at 30000044533 is exact but at ...2.75 is slightly off.

The behavior in 14..103 is indeed bizarre, since there should be nothing different about this region compared to other such regions. The key thing about numerical integration in Arb, different from other libraries, is the adaptive subdivision, which gives correct results but can lead to pathological splitting if the algorithm is not satisfied with what it sees. Maybe plotting that region can lead to some clues. Or a blunt approach could be to restrict eps below 13 and depend more on the vertical segment.
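
In case it helps with the plotting idea, a tiny sketch (only practical at much smaller T, where plain gp zeta is still usable, and assuming a plot-capable gp build):

plot_ratio(T, a, b) = ploth(x = a, b, [real(zeta'(x + I*T)/zeta(x + I*T)), imag(zeta'(x + I*T)/zeta(x + I*T))]);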

Also, for the 9x10e21 tloop, around 50% of the jobs are complete.

rudolph-git-acc commented 5 years ago

@km-git-acc

Have started the run of 9x10e21. Will let it 'calmly' do its work during the next 7 days :)

The issue doesn't seem to originate from the integral, but clearly has something to do with the evaluation of zeta'(s)/zeta(s). When I try to print individual values of the latter, it does stall at a certain point. This is the code used to calculate zeta'(s)/zeta(s) (it uses a Taylor expansion with 2 terms). Maybe things go wrong when we are getting close to a zero of zeta'(s) (these all seem to reside to the right of the critical line and grow forever).

/* f(z) = zeta'(z) / zeta(z) */
int
f_zeta_frac(acb_ptr res, const acb_t z, void * param, slong order, slong prec)
{
    acb_struct t[2];

    if (order > 1)
        flint_abort();  /* Would be needed for Taylor method. */

    acb_init(t);
    acb_init(t + 1);

    acb_dirichlet_zeta_jet(t, z, 0, 2, prec);
    acb_div(res, t + 1, t, prec);

    acb_clear(t);
    acb_clear(t + 1);

    return 0;
}

What is even more surprising is that it stalls at exactly 14.80000000. Nothing special in the data. This almost feels like a fixed parameter has been used somewhere:

14.7999999900 : [-2.1917533947e-5 +/- 4.93e-16] + [1.071106271e-5 +/- 8.36e-15]*I
14.7999999910 : [-2.1917533932e-5 +/- 2.66e-16] + [1.07110627e-5 +/- 1.13e-14]*I
14.7999999920 : [-2.1917533917e-5 +/- 2.03e-16] + [1.07110627e-5 +/- 1.04e-14]*I
14.7999999930 : [-2.1917533902e-5 +/- 4.30e-16] + [1.07110627e-5 +/- 2.10e-14]*I
14.7999999940 : [-2.1917533886e-5 +/- 5.82e-16] + [1.07110627e-5 +/- 2.46e-14]*I
14.7999999950 : [-2.1917533871e-5 +/- 3.55e-16] + [1.07110627e-5 +/- 3.17e-14]*I
14.7999999960 : [-2.1917533856e-5 +/- 1.27e-16] + [1.07110627e-5 +/- 4.23e-14]*I
14.7999999970 : [-2.1917533841e-5 +/- 3.42e-16] + [1.07110627e-5 +/- 4.94e-14]*I
14.7999999980 : [-2.1917533826e-5 +/- 5.70e-16] + [1.071106265e-5 +/- 7.97e-15]*I
14.7999999990 : [-2.1917533810e-5 +/- 4.43e-16] + [1.07110626e-5 +/- 5.09e-14]*I
14.8000000000 : [-2.1917533795e-5 +/- 2.15e-16] + [1.07110626e-5 +/- 4.38e-14]*I
km-git-acc commented 5 years ago

@rudolph-git-acc

Have been travelling a bit myself, so somewhat slow in responding. Contour integrals can indeed misbehave or go sluggish if there is a zero on the contour or close to it, in which case we would have to avoid regions where there are too many of them. If it turns out to be Arb-specific behavior, Fredrik may be able to throw more light on this.

EDIT: Also, around 80% of the jobs for the tloop seem to be complete.

rudolph-git-acc commented 5 years ago

@km-git-acc

Have tried a few other ARB functions and the Riemann-Siegel approximation flows nicely through the 14.8, however it then doesn't integrate well... Will collect some more data and ask Fredrik directly about the best approach for this.

Progress on Boinc at 80% is impressive! Guess it is almost done by now?

km-git-acc commented 5 years ago

@rudolph-git-acc

It's at 87% now, and I think the tail behavior has started. Not working on my home machine right now, but I will create a list of the pending jobs by tomorrow, and then we can tackle those manually like earlier. Can't believe we are now quite close to the 0.1 mark!!

Riemann-Siegel may turn out to be quite slow at large T values, since the number of summands will be of the order of sqrt(T). Within integration this behavior will get amplified. I will check a bit more on zeta mollifiers to see if something is workable.
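
For scale, just restating the sqrt(T) claim as a one-liner:

rs_terms(T) = floor(sqrt(T/(2*Pi)));  \\ length of the Riemann-Siegel main sum; about 7*10^4 terms at T = 3*10^10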

km-git-acc commented 5 years ago

@rudolph-git-acc These are the t ranges still left (948 of them)

0.021304    0.021436
0.020101    0.020196
0.01992 0.02001
0.017052    0.017092
0.017013    0.017052
0.016975    0.017013
0.016937    0.016975
0.016899    0.016937
0.016862    0.016899
0.016824    0.016862
0.016787    0.016824
0.01675 0.016787
0.016715    0.01675
0.013288    0.013302
0.013194    0.013207
0.013142    0.013155
0.012976    0.012988
0.012721    0.012733
0.01034 0.010346
0.010279    0.010285
0.010255    0.010261
0.010219    0.010225
0.0098135   0.0098187
0.0096997   0.0097048
0.0091665   0.0091708
0.0088908   0.0088949
0.0088866   0.0088908
0.0087998   0.0088039
0.0086943   0.0086979
0.0086688   0.0086725
0.0086435   0.0086471
0.0086074   0.008611
0.0085039   0.0085074
0.0084933   0.0084968
0.0084474   0.0084509
0.0084194   0.0084229
0.0083324   0.0083358
0.0083289   0.0083324
0.0083048   0.0083082
0.0082875   0.008291
0.0081111   0.0081144
0.0077726   0.0077755
0.0077471   0.00775
0.0076936   0.0076964
0.0076432   0.007646
0.0075932   0.007596
0.0075134   0.0075162
0.0074643   0.0074671
0.0074372   0.0074399
0.0073324   0.0073351
0.0072003   0.0072029
0.0070835   0.0070861
0.0070681   0.0070706
0.0070527   0.0070552
0.0070424   0.007045
0.0070347   0.0070373
0.0069752   0.0069774
0.00692 0.0069222
0.0068281   0.0068303
0.0068021   0.0068043
0.0066841   0.0066862
0.0066628   0.0066649
0.0066565   0.0066586
0.0066374   0.0066395
0.0065973   0.0065994
0.006545    0.0065471
0.0065387   0.0065408
0.0063739   0.0063759
0.0063718   0.0063739
0.0063657   0.0063678
0.0062207   0.0062227
0.0062128   0.0062148
0.0061949   0.0061969
0.0060436   0.0060456
0.0060049   0.0060068
0.0059991   0.006001
0.0059894   0.0059914
0.0059817   0.0059836
0.0059779   0.0059798
0.0059452   0.0059471
0.0059395   0.0059414
0.0059356   0.0059375
0.0059318   0.0059337
0.0059299   0.0059318
0.005928    0.0059299
0.0059261   0.005928
0.0059242   0.0059261
0.0058714   0.005873
0.0057729   0.0057745
0.0057218   0.0057234
0.0056743   0.0056758
0.0056537   0.0056553
0.0056051   0.0056066
0.0055288   0.0055303
0.0055272   0.0055288
0.0055226   0.0055241
0.0054289   0.0054304
0.0053303   0.0053318
0.0052599   0.0052614
0.005257    0.0052584
0.0052555   0.005257
0.005254    0.0052555
0.0052525   0.005254
0.005245    0.0052465
0.0052391   0.0052406
0.0051726   0.005174
0.0051711   0.0051726
0.0051696   0.0051711
0.0051681   0.0051696
0.0051417   0.0051432
0.0051168   0.0051183
0.0051154   0.0051168
0.0051125   0.0051139
0.005111    0.0051125
0.0051095   0.005111
0.0051066   0.0051081
0.0051052   0.0051066
0.0051037   0.0051052
0.0051023   0.0051037
0.0050761   0.0050775
0.0050746   0.0050761
0.0050732   0.0050746
0.0050717   0.0050732
0.0050703   0.0050717
0.0050688   0.0050703
0.0050674   0.0050688
0.0050096   0.005011
0.0050082   0.0050096
0.0050067   0.0050082
0.0049026   0.004904
0.0049011   0.0049026
0.0048828   0.0048842
0.0048715   0.0048729
0.0048686   0.0048701
0.0048658   0.0048672
0.0048644   0.0048658
0.0048616   0.004863
0.0048602   0.0048616
0.0048363   0.0048377
0.0047748   0.0047762
0.0047291   0.0047305
0.0046387   0.0046398
0.0046262   0.0046273
0.004616    0.0046171
0.0046137   0.0046148
0.0046126   0.0046137
0.0046058   0.0046069
0.0045956   0.0045967
0.0045945   0.0045956
0.0045922   0.0045933
0.00459 0.0045911
0.0045866   0.0045877
0.0045843   0.0045854
0.0045809   0.0045821
0.0045798   0.0045809
0.0045764   0.0045775
0.0045753   0.0045764
0.0045719   0.004573
0.0045663   0.0045674
0.0045416   0.0045427
0.0045293   0.0045304
0.0045281   0.0045293
0.0045203   0.0045214
0.0045158   0.0045169
0.0045125   0.0045136
0.0045102   0.0045114
0.0045091   0.0045102
0.0045069   0.004508
0.0045058   0.0045069
0.0045047   0.0045058
0.0045036   0.0045047
0.0045002   0.0045013
0.0044991   0.0045002
0.0044924   0.0044935
0.0044913   0.0044924
0.0044868   0.004488
0.0044857   0.0044868
0.0044846   0.0044857
0.0044835   0.0044846
0.0044779   0.0044791
0.0044746   0.0044757
0.0044713   0.0044724
0.0044702   0.0044713
0.0044691   0.0044702
0.004468    0.0044691
0.0044668   0.004468
0.0044657   0.0044668
0.0044646   0.0044657
0.0044635   0.0044646
0.0044602   0.0044613
0.0044591   0.0044602
0.0044569   0.004458
0.0044558   0.0044569
0.0044546   0.0044558
0.0044524   0.0044535
0.0044513   0.0044524
0.0044502   0.0044513
0.0044491   0.0044502
0.004448    0.0044491
0.0044469   0.004448
0.0044458   0.0044469
0.0044447   0.0044458
0.0044436   0.0044447
0.0044425   0.0044436
0.0044403   0.0044414
0.0044392   0.0044403
0.0044381   0.0044392
0.0044182   0.0044193
0.0044072   0.0044083
0.004405    0.0044061
0.0043984   0.0043995
0.0043764   0.0043775
0.0043732   0.0043743
0.0043611   0.0043622
0.0043535   0.0043546
0.0043437   0.0043447
0.0043404   0.0043415
0.0043132   0.0043143
0.004311    0.0043121
0.0042937   0.0042948
0.0042656   0.0042667
0.0042634   0.0042645
0.0042624   0.0042634
0.0042322   0.0042333
0.004229    0.0042301
0.0042269   0.004228
0.0042215   0.0042226
0.0042172   0.0042183
0.0042151   0.0042162
0.0041873   0.0041884
0.0041799   0.0041809
0.0041682   0.0041692
0.004166    0.0041671
0.0041533   0.0041543
0.0041469   0.004148
0.0041342   0.0041353
0.0041226   0.0041236
0.0041173   0.0041183
0.0041089   0.0041099
0.0041025   0.0041036
0.0041015   0.0041025
0.0041004   0.0041015
0.0040878   0.0040888
0.0040857   0.0040867
0.0040783   0.0040794
0.0040752   0.0040762
0.0040386   0.0040396
0.0040365   0.0040375
0.0040333   0.0040344
0.0040312   0.0040323
0.0040271   0.0040281
0.004025    0.004026
0.0040135   0.0040146
0.0040104   0.0040115
0.0040083   0.0040094
0.0040073   0.0040083
0.0040063   0.0040073
0.0040011   0.0040021
0.004   0.0040011
0.003999    0.004
0.003998    0.003999
0.0039897   0.0039907
0.0039855   0.0039865
0.0039845   0.0039855
0.0039834   0.0039845
0.0039824   0.0039834
0.0039814   0.0039824
0.0039803   0.0039814
0.0039793   0.0039803
0.0039783   0.0039793
0.0039741   0.0039752
0.0039731   0.0039741
0.0039721   0.0039731
0.0039617   0.0039628
0.0039597   0.0039607
0.0039586   0.0039597
0.0039576   0.0039586
0.0039555   0.0039566
0.0039545   0.0039555
0.0039535   0.0039545
0.0039524   0.0039535
0.0039504   0.0039514
0.0039493   0.0039504
0.0039483   0.0039493
0.0039462   0.0039473
0.0039452   0.0039462
0.0039442   0.0039452
0.0039432   0.0039442
0.0039421   0.0039432
0.003939    0.0039401
0.003938    0.003939
0.003937    0.003938
0.0039339   0.0039349
0.0039329   0.0039339
0.0039216   0.0039226
0.0038806   0.0038816
0.0038775   0.0038786
0.0038765   0.0038775
0.0038745   0.0038755
0.0038612   0.0038623
0.00385 0.0038511
0.0037762   0.0037772
0.003534    0.003535
0.0035078   0.0035088
0.0034991   0.0035001
0.0034174   0.0034184
0.0032905   0.0032914
0.0032886   0.0032896
0.0032167   0.0032176
0.0030229   0.0030236
0.0030178   0.0030185
0.0030171   0.0030178
0.0029868   0.0029875
0.0029746   0.0029753
0.0029724   0.0029731
0.0029302   0.0029309
0.0029039   0.0029046
0.0029017   0.0029024
0.0028493   0.00285
0.0028381   0.0028388
0.0028367   0.0028374
0.0028345   0.0028352
0.0028331   0.0028338
0.0028324   0.0028331
0.002831    0.0028317
0.0028296   0.0028303
0.0028289   0.0028296
0.0028282   0.0028289
0.0028268   0.0028275
0.0028261   0.0028268
0.0028099   0.0028107
0.0028057   0.0028064
0.0028022   0.0028029
0.0028015   0.0028022
0.0027987   0.0027994
0.002798    0.0027987
0.0027973   0.002798
0.0027959   0.0027966
0.0027952   0.0027959
0.0027945   0.0027952
0.0027938   0.0027945
0.0027931   0.0027938
0.0027924   0.0027931
0.0027917   0.0027924
0.002791    0.0027917
0.0027903   0.002791
0.0027896   0.0027903
0.0027889   0.0027896
0.0027882   0.0027889
0.0027875   0.0027882
0.0027861   0.0027868
0.0027854   0.0027861
0.0027847   0.0027854
0.002784    0.0027847
0.0027833   0.002784
0.0027826   0.0027833
0.0027819   0.0027826
0.0027813   0.0027819
0.0027799   0.0027806
0.0027764   0.0027771
0.0027757   0.0027764
0.002775    0.0027757
0.0027743   0.002775
0.0027729   0.0027736
0.0027722   0.0027729
0.0027715   0.0027722
0.0027687   0.0027694
0.0027513   0.002752
0.002727    0.0027277
0.0027235   0.0027242
0.0026972   0.0026979
0.0026135   0.0026142
0.002572    0.0025727
0.0025714   0.002572
0.0025707   0.0025714
0.0025632   0.0025639
0.0025544   0.0025551
0.0025531   0.0025538
0.0025524   0.0025531
0.002547    0.0025477
0.0025463   0.002547
0.002518    0.0025187
0.0025133   0.002514
0.0025126   0.0025133
0.0025072   0.0025079
0.0024965   0.0024972
0.0024878   0.0024885
0.0024864   0.0024871
0.0024731   0.0024737
0.002445    0.0024457
0.0024377   0.0024384
0.0024145   0.0024151
0.0024006   0.0024012
0.0023959   0.0023966
0.0023873   0.002388
0.0023867   0.0023873
0.002386    0.0023867
0.0023788   0.0023794
0.0023735   0.0023741
0.0023715   0.0023722
0.0023702   0.0023708
0.0023623   0.0023629
0.0023557   0.0023564
0.002334    0.0023347
0.002315    0.0023157
0.0023137   0.0023144
0.0023085   0.0023092
0.0022798   0.0022805
0.0022792   0.0022798
0.0022785   0.0022792
0.0022636   0.0022642
0.0022584   0.002259
0.0022526   0.0022532
0.0022306   0.0022312
0.0022061   0.0022067
0.0021951   0.0021958
0.002181    0.0021816
0.0021701   0.0021708
0.0021682   0.0021688
0.0021676   0.0021682
0.002156    0.0021567
0.0021484   0.002149
0.0021133   0.002114
0.0020779   0.0020785
0.0020646   0.0020652
0.0020275   0.0020281
0.0019662   0.0019669
0.001947    0.0019476
0.0019451   0.0019457
0.0019426   0.0019433
0.0019414   0.001942
0.0019234   0.0019241
0.0019154   0.001916
0.0019092   0.0019099
0.0019031   0.0019037
0.0018877   0.0018883
0.0018791   0.0018797
0.0018692   0.0018698
0.0018668   0.0018674
0.0018662   0.0018668
0.0018472   0.0018478
0.0018441   0.0018447
0.0018405   0.0018411
0.0018295   0.0018301
0.0018161   0.0018167
0.0017911   0.0017917
0.0017488   0.0017494
0.0017482   0.0017488
0.0017403   0.0017409
0.0017385   0.0017391
0.0017103   0.0017109
0.0016875   0.0016881
0.0016809   0.0016815
0.001678    0.0016786
0.0016768   0.0016774
0.0016529   0.0016535
0.0016464   0.001647
0.001609    0.0016096
0.0015901   0.0015907
0.0015842   0.0015848
0.0015778   0.0015783
0.0015742   0.0015748
0.0015736   0.0015742
0.0015695   0.0015701
0.0015631   0.0015636
0.0015613   0.0015619
0.0015437   0.0015443
0.0015367   0.0015373
0.0015343   0.0015349
0.0015314   0.001532
0.0015302   0.0015308
0.0015291   0.0015296
0.0015197   0.0015203
0.001518    0.0015185
0.0015139   0.0015145
0.0014947   0.0014952
0.0014819   0.0014824
0.0014604   0.001461
0.0014517   0.0014523
0.0014494   0.00145
0.0014488   0.0014494
0.0014483   0.0014488
0.0014477   0.0014483
0.0014396   0.0014402
0.0014315   0.0014321
0.0014096   0.0014102
0.001393    0.0013936
0.0013775   0.0013781
0.0013575   0.0013581
0.0013205   0.0013211
0.0013126   0.0013132
0.0013103   0.0013109
0.0013058   0.0013064
0.0013052   0.0013058
0.0012979   0.0012984
0.0012578   0.0012584
0.0012528   0.0012533
0.0012365   0.001237
0.0011501   0.0011506
0.0010444   0.0010448
0.0010432   0.0010436
0.00091867  0.00091908
0.00090341  0.00090381
0.00083117  0.00083157
0.00082641  0.00082681
0.00080937  0.00080977
0.00080897  0.00080937
0.00080699  0.00080739
0.00079988  0.00080027
0.00078487  0.00078526
0.0007636   0.00076399
0.00074867  0.00074907
0.00074789  0.00074828
0.00073299  0.00073338
0.00072399  0.00072438
0.00072321  0.0007236
0.00072126  0.00072165
0.00071774  0.00071813
0.00071539  0.00071579
0.00071305  0.00071344
0.00070993  0.00071032
0.00070447  0.00070486
0.00070252  0.00070291
0.00070213  0.00070252
0.00070174  0.00070213
0.00069862  0.00069901
0.00069005  0.00069044
0.00068811  0.0006885
0.00068577  0.00068616
0.00068188  0.00068227
0.00068072  0.00068111
0.00067489  0.00067528
0.0006745   0.00067489
0.00066868  0.00066907
0.00066558  0.00066596
0.00066519  0.00066558
0.00066092  0.00066131
0.00066054  0.00066092
0.0006586   0.00065899
0.00065821  0.0006586
0.00065705  0.00065744
0.00065666  0.00065705
0.00065628  0.00065666
0.00065434  0.00065473
0.00064737  0.00064776
0.00064505  0.00064544
0.00064428  0.00064467
0.00064389  0.00064428
0.00064196  0.00064235
0.00063809  0.00063848
0.00063385  0.00063423
0.00063346  0.00063385
0.00063153  0.00063192
0.00063114  0.00063153
0.00063037  0.00063076
0.0006296   0.00062999
0.00062729  0.00062767
0.00062266  0.00062304
0.00062189  0.00062227
0.00061688  0.00061726
0.00061187  0.00061226
0.00061149  0.00061187
0.00060957  0.00060995
0.0006088   0.00060918
0.00060264  0.00060303
0.00059381  0.00059419
0.00059151  0.00059189
0.00058767  0.00058805
0.00058652  0.0005869
0.00058575  0.00058613
0.00057962  0.00058
0.00057847  0.00057885
0.0005777   0.00057809
0.00057694  0.00057732
0.00057617  0.00057655
0.00057541  0.00057579
0.00057426  0.00057464
0.00057311  0.00057349
0.00057273  0.00057311
0.00057158  0.00057196
0.00057119  0.00057158
0.00056928  0.00056966
0.0005689   0.00056928
0.00056852  0.0005689
0.00056775  0.00056813
0.00056737  0.00056775
0.0005666   0.00056699
0.00056622  0.0005666
0.00056546  0.00056584
0.00056469  0.00056508
0.00056431  0.00056469
0.00056355  0.00056393
0.00056278  0.00056316
0.00056087  0.00056125
0.00056011  0.00056049
0.00055934  0.00055972
0.00055896  0.00055934
0.00055858  0.00055896
0.00055781  0.0005582
0.00055705  0.00055743
0.00055629  0.00055667
0.00055552  0.00055591
0.00055476  0.00055514
0.00055438  0.00055476
0.00055361  0.000554
0.00055323  0.00055361
0.00055247  0.00055285
0.00055171  0.00055209
0.00055132  0.00055171
0.00054904  0.00054942
0.00054865  0.00054904
0.00054789  0.00054827
0.00054751  0.00054789
0.00054713  0.00054751
0.00054675  0.00054713
0.0005456   0.00054598
0.00054522  0.0005456
0.00054446  0.00054484
0.00054408  0.00054446
0.0005437   0.00054408
0.00054332  0.0005437
0.00054294  0.00054332
0.00054255  0.00054294
0.00054217  0.00054255
0.00054179  0.00054217
0.00054141  0.00054179
0.00054065  0.00054103
0.00053989  0.00054027
0.00053912  0.00053951
0.00053874  0.00053912
0.00053836  0.00053874
0.00053798  0.00053836
0.0005376   0.00053798
0.00053722  0.0005376
0.00053684  0.00053722
0.00053646  0.00053684
0.00053608  0.00053646
0.0005357   0.00053608
0.00053532  0.0005357
0.00053494  0.00053532
0.00053456  0.00053494
0.00053417  0.00053456
0.00053379  0.00053417
0.00053341  0.00053379
0.00053303  0.00053341
0.00053265  0.00053303
0.00053227  0.00053265
0.00053189  0.00053227
0.00053151  0.00053189
0.00053113  0.00053151
0.00053075  0.00053113
0.00052999  0.00053037
0.00052961  0.00052999
0.00052923  0.00052961
0.00052885  0.00052923
0.00052847  0.00052885
0.00052809  0.00052847
0.00052771  0.00052809
0.00052733  0.00052771
0.00052695  0.00052733
0.00052657  0.00052695
0.00052619  0.00052657
0.00052581  0.00052619
0.00052543  0.00052581
0.00052504  0.00052543
0.00052466  0.00052504
0.00052428  0.00052466
0.0005239   0.00052428
0.00052352  0.0005239
0.00052314  0.00052352
0.00052276  0.00052314
0.00052238  0.00052276
0.000522    0.00052238
0.00052162  0.000522
0.00052124  0.00052162
0.00052086  0.00052124
0.00052048  0.00052086
0.00051934  0.00051972
0.00051821  0.00051859
0.00051783  0.00051821
0.00051745  0.00051783
0.00051631  0.00051669
0.00051593  0.00051631
0.00051555  0.00051593
0.00051517  0.00051555
0.00051479  0.00051517
0.00051441  0.00051479
0.00051403  0.00051441
0.00051327  0.00051365
0.00051289  0.00051327
0.00051251  0.00051289
0.00051213  0.00051251
0.00051175  0.00051213
0.00051137  0.00051175
0.00051099  0.00051137
0.00051061  0.00051099
0.00051023  0.00051061
0.00050986  0.00051023
0.00050948  0.00050986
0.0005091   0.00050948
0.00050872  0.0005091
0.00050834  0.00050872
0.00050796  0.00050834
0.00050758  0.00050796
0.0005072   0.00050758
0.00050682  0.0005072
0.00050644  0.00050682
0.00050606  0.00050644
0.00050568  0.00050606
0.0005053   0.00050568
0.00050493  0.0005053
0.00050455  0.00050493
0.00050417  0.00050455
0.00050379  0.00050417
0.00050341  0.00050379
0.00050303  0.00050341
0.00050265  0.00050303
0.00050227  0.00050265
0.00050189  0.00050227
0.00050151  0.00050189
0.00050114  0.00050151
0.00050076  0.00050114
0.00050038  0.00050076
0.0005  0.00050038
0.00049962  0.0005
0.0004947   0.00049508
0.00049129  0.00049167
0.0004894   0.00048978
0.00048751  0.00048789
0.00048638  0.00048675
0.0004826   0.00048297
0.00048071  0.00048108
0.0004792   0.00047957
0.00047882  0.0004792
0.00047844  0.00047882
0.00047806  0.00047844
0.00047769  0.00047806
0.00047693  0.00047731
0.00047655  0.00047693
0.00047391  0.00047429
0.00047316  0.00047353
0.00047278  0.00047316
0.00047014  0.00047051
0.00046976  0.00047014
0.0004626   0.00046297
0.00046222  0.0004626
0.00046109  0.00046147
0.00045921  0.00045958
0.00045808  0.00045845
0.00045657  0.00045695
0.00045619  0.00045657
0.00045582  0.00045619
0.00045544  0.00045582
0.00045205  0.00045243
0.0004513   0.00045168
0.00045093  0.0004513
0.00045055  0.00045093
0.00045017  0.00045055
0.0004498   0.00045017
0.00044942  0.0004498
0.00044905  0.00044942
0.00044754  0.00044792
0.00044566  0.00044604
0.00043852  0.0004389
0.00043552  0.0004359
0.00042952  0.00042989
0.00042802  0.00042839
0.00042614  0.00042652
0.00042127  0.00042165
0.0004209   0.00042127
0.0004164   0.00041678
0.00041491  0.00041528
0.00041378  0.00041416
0.00041304  0.00041341
0.00041004  0.00041042
0.00040481  0.00040518
0.00040406  0.00040443
0.00040033  0.0004007
0.00039883  0.00039921
0.00039398  0.00039435
0.00039323  0.00039361
0.00039099  0.00039137
0.00039025  0.00039062
0.00038838  0.00038876
0.00038428  0.00038466
0.00038056  0.00038093
0.00037907  0.00037944
0.00037832  0.0003787
0.00037386  0.00037423
0.00037349  0.00037386
0.00037125  0.00037163
0.00036977  0.00037014
0.0003694   0.00036977
0.00036568  0.00036605
0.00036456  0.00036493
0.00036085  0.00036122
0.00036011  0.00036048
0.00035936  0.00035973
0.00035899  0.00035936
0.00035751  0.00035788
0.00035417  0.00035454
0.00035342  0.0003538
0.00035305  0.00035342
0.00035046  0.00035083
0.00034972  0.00035009
0.00034934  0.00034972
0.0003486   0.00034897
0.00034823  0.0003486
0.00034712  0.00034749
0.00034675  0.00034712
0.00034638  0.00034675
0.00033601  0.00033638
0.00033453  0.0003349
0.00033379  0.00033416
0.00032972  0.00033009
0.00032344  0.00032381
0.0003227   0.00032307
0.00031864  0.00031901
0.00031642  0.00031679
0.00031568  0.00031605
0.00031458  0.00031495
0.00031273  0.0003131
0.00030978  0.00031015
0.00029836  0.00029873
0.00029799  0.00029836
0.00029763  0.00029799
0.00029615  0.00029652
0.00028917  0.00028953
0.00028806  0.00028843
0.00028696  0.00028733
0.00028586  0.00028623
0.00028549  0.00028586
0.00028512  0.00028549
0.00028072  0.00028108
0.00027778  0.00027815
0.00027705  0.00027741
0.00027668  0.00027705
0.00027594  0.00027631
0.00027411  0.00027448
0.00027191  0.00027228
0.00027118  0.00027154
0.00026714  0.00026751
0.00026458  0.00026495
0.00026348  0.00026385
0.00026201  0.00026238
0.00025726  0.00025762
0.00025543  0.00025579
0.00024957  0.00024994
0.00024848  0.00024884
0.00024811  0.00024848
0.00024775  0.00024811
0.00024629  0.00024665
0.00024592  0.00024629
0.00023971  0.00024008
0.00023862  0.00023898
0.00022986  0.00023023
0.0002295   0.00022986
0.00022877  0.00022913
0.00021748  0.00021784
0.00021239  0.00021275
0.00021021  0.00021057
0.00020948  0.00020984
0.00020911  0.00020948
0.00020657  0.00020693
0.00020584  0.00020621
0.00020439  0.00020475
0.00020367  0.00020403
0.00020185  0.00020221
0.00020003  0.0002004
0.00019786  0.00019822
0.00019641  0.00019677
0.00019459  0.00019495
0.0001906   0.00019097
0.0001877   0.00018807
0.00018698  0.00018734
0.00018662  0.00018698
0.00018372  0.00018408
0.00018336  0.00018372
0.00018263  0.00018299
0.00018227  0.00018263
0.00018046  0.00018082
0.00017503  0.00017539
0.00017431  0.00017467
0.00016672  0.00016708
0.00016419  0.00016455
0.00016311  0.00016347
0.00015986  0.00016022
0.0001595   0.00015986
0.00015877  0.00015914
0.00015517  0.00015553
0.00015228  0.00015264
0.0001512   0.00015156
0.00014868  0.00014904
0.0001476   0.00014796
0.00014508  0.00014544
0.00014327  0.00014363
0.00014111  0.00014147
0.00013967  0.00014003
0.00013788  0.00013824
0.00013428  0.00013464
0.00013284  0.0001332
0.00013104  0.0001314
0.00013068  0.00013104
0.00013032  0.00013068
0.00012996  0.00013032
0.00012961  0.00012996
0.00012709  0.00012745
0.00012529  0.00012565
0.00012493  0.00012529
0.00012278  0.00012314
0.00012242  0.00012278
0.00012206  0.00012242
0.0001217   0.00012206
0.00012063  0.00012099
0.0001174   0.00011776
0.0001156   0.00011596
0.00011489  0.00011524
0.00011453  0.00011489
0.00011381  0.00011417
0.00011273  0.00011309
0.00011058  0.00011094
0.00010879  0.00010915
0.00010843  0.00010879
0.00010664  0.000107
0.00010593  0.00010628
0.00010414  0.00010449
0.00010306  0.00010342
0.000099487 0.000099845
0.000098056 0.000098413
0.000097698 0.000098056
0.000096982 0.00009734
0.000095194 0.000095552
0.000094837 0.000095194
0.000094121 0.000094479
0.000093764 0.000094121
0.000093406 0.000093764
0.000093049 0.000093406
0.000092691 0.000093049
0.000092334 0.000092691
0.000091976 0.000092334
0.000091619 0.000091976
0.000090547 0.000090904
0.000090189 0.000090547
0.000089832 0.000090189
0.000089475 0.000089832
0.000089117 0.000089475
0.000081975 0.000082332
0.000081618 0.000081975

Since these are quite a lot, I am submitting the first 700 of them again and multiple clients will be replicating them. Of the remaining 248, we can tackle the top half and bottom half (I will start with the bottom half). On the other hand, it's also possible the existing tloop process on your machine may finish sooner. How far has that reached?

rudolph-git-acc commented 5 years ago

@km-git-acc

Great, that looks like very good progress. I am still in the US and flying back tomorrow. I do know that the Tloop is still running in my garage (I failed to set up remote access over the web due to router issues, so unfortunately can't see the exact progress from here). I will share the status as soon as I'm home.

mariotrevi commented 5 years ago

@KM @rudolph ,


Recently I've been working on duplicating the results of Richard Brent on verifying RH to some height T; in 1979, Brent had a paper (the same as the JSTOR one) verifying RH for the first 75,000,000 non-trivial zeros.

Link to the paper at the AMS: "On the zeros of the Riemann zeta function in the critical strip", https://www.ams.org/journals/mcom/1979-33-148/S0025-5718-1979-0537983-2/

I posted about RH verification at my blog today with some references: https://meditationatae.wordpress.com/2018/10/21/checking-the-riemann-hypothesis/

For counting zeros (whether on Re(s) = 1/2 or off it, but excepting the trivial ones), there's Turing's method. Brent has a theorem on consecutive Gram blocks satisfying Rosser's rule (enough of them guarantee that there are no zeros violating RH in the strip up to height T); it's Theorem 3.2 of his 1979 paper: see screenshot...

The Gram blocks method & Rosser's rule constitute a cheap, efficient way of locating almost all sign changes in the Z(t) function.

I'm attaching the C source code file to try to reproduce Brent's counting of zeta zeros on the critical line. The program makes no attempt at counting both on- and off-critical-line zeros, say by numerical contour integration or Turing's method.


km-git-acc commented 5 years ago

@mariotrevi @rudolph-git-acc

Great. The way I understand it, Brent's (and Lehman's) papers use Turing's work with some additional/corrected theorems, and then RH verification boils down to, as you mentioned, finding enough consecutive Gram blocks satisfying Rosser's rule; a separate evaluation using contour integrals is not necessary in this method.

Will try out the code posted on your blog. Do you think sign-change counting can be parallelized in some way (e.g. different machines working on different T ranges, or something more clever), so that we can take advantage of a grid computing setup where a lot of processing power is available? This should help make some of the conditional dbn bounds unconditional. We could start beyond the T=3*10^10 region, reverifying the range up to 10^13, and proceeding further from there, although as you mentioned that may require implementation of O.S. or related methods.

mariotrevi commented 5 years ago

@KM , @rudolph


Yes, I also believe that counting all zeros in the strip with 0 < Im(rho) < T, sometimes denoted N(T), can be done without contour integrals, after Turing's work.

The handling of exceptions to Rosser's rule takes some care. Typically, in an exception to Rosser's rule, a Gram block of length 2 has no zeros, and the 2 missing zeros can be accounted for either in the Gram block that follows immediately, or in the Gram block that comes just before the one with no zeros. Presently, I have three arrays a[] , b[] and c[] where array a has the index of the Gram point at the beginning of the block, b has the index of the Gram point at the end of the Gram block, and c has the count of zeros (sign changes) that have been detected in the Gram block.

In my run from g_{80e6} to g_{200e6} I found 79 violations of Rosser's rule, and I had found 15 violations in the range g_0 to g_{80e6}. The total of 104 agrees with the paper by Brent, L, R and W. Unfortunately, I had given a dimension of only 20,000,000 to two of these arrays, while about 125,000,000 was needed. So I lost the data on Gram points and zero counts that the program uses to do the zero book-keeping in the presence of violations of Rosser's rule.


Yes, I think dividing up the range shouldn't be too difficult, and it can be parallelized. The ranges could probably be set up as millions of consecutive Gram blocks, so one range could go from g_{0} to g_{75e6} (good Gram points), the next one beginning at g_{75e6}, and so on. That way the whole is divided into contiguous non-overlapping intervals, and we can add up the counts of zeros on the critical line for all the intervals. I believe Turing's method can be used after this, and it involves S(t) near the upper limit T.
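
A crude sketch of that splitting, reusing tht() and Z() from the gp snippet earlier in the thread (gram(n) is the n-th Gram point, the root of tht(t) = n*Pi, and the bracket again limits this to small heights). Each machine would get a disjoint index range [n1, n2) and the per-range counts are summed; note this only looks at Gram points themselves, whereas the real method also hunts inside Gram intervals/blocks for the missing sign changes:

gram(n) = solve(t = 17, 5000, tht(t) - n*Pi);
signchanges(n1, n2) = sum(n = n1, n2 - 1, sign(real(Z(gram(n)))) != sign(real(Z(gram(n + 1)))));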

It took about 35 hours to separate almost 125,000,000 zeros or 3.5 million zeros an hour. T=3*10^10 corresponds to about 100 billion zeros. Using only the Riemann-Siegel formula, the time to height T grows like T^(3/2) .

I read some on deriving safe and workable formulas for the 3-5 terms in the R.S. remainder. Pugh, who did an M.Sc. at UBC, used Maple and Taylor series to turn the terms, some of which involve high-order derivatives, into Taylor series or polynomials with ~20 digits of precision. Someone else used orthogonal polynomials, the Chebyshev polynomials. I use numerical differentiation (approximation) using finite differences: https://en.wikipedia.org/wiki/Finite_difference_coefficient. However, it is only a standby solution, as a rigorous error analysis would be difficult.


mariotrevi commented 5 years ago

There was a presentation on ZetaGrid ca. 2003 that is of some interest; e.g., the Fortran code came from van de Lune et al. in the Netherlands: http://think-automobility.org/documents/ZetaGrid-Conference_in_honour_of_Hugh_Williams_2003.pdf

km-git-acc commented 5 years ago

Thanks. Will read up more on this and start with small experiments. Is there a link to the complete code you are using? (I think the file riemannsiegel3.gp which you referred to in the blog, but the link is missing.)

Regarding the tloop, there are now only 222 t ranges left.

0.0000098856  0.000010238
0.00001448  0.000014833
0.000033242  0.000033597
0.000034306  0.00003466
0.00003466  0.000035015
0.00012099  0.00012134
0.00023971  0.00024008
0.00035788  0.00035825
0.00035825  0.00035862
0.00036679  0.00036716
0.00038093  0.0003813
0.00038764  0.00038801
0.00040518  0.00040556
0.00040817  0.00040855
0.00041865  0.00041902
0.00041902  0.0004194
0.00044341  0.00044378
0.00049281  0.00049318
0.00050796  0.00050834
0.00052847  0.00052885
0.00062227  0.00062266
0.00069005  0.00069044
0.00069122  0.00069161
0.00072243  0.00072282
0.00075221  0.0007526
0.00075731  0.0007577
0.00079553  0.00079592
0.00080106  0.00080146
0.00080779  0.00080818
0.00080897  0.00080937
0.00081016  0.00081056
0.00081214  0.00081254
0.00082522  0.00082562
0.00096385  0.00096425
0.00098857  0.00098898
0.0010154  0.0010158
0.0010223  0.0010227
0.0010337  0.0010342
0.0012354  0.0012359
0.0012477  0.0012483
0.0012544  0.001255
0.0012584  0.0012589
0.0012651  0.0012657
0.0012663  0.0012668
0.0012668  0.0012674
0.0012725  0.001273
0.0012798  0.0012804
0.0012928  0.0012934
0.0012945  0.001295
0.0012967  0.0012973
0.001316  0.0013166
0.0013336  0.0013342
0.0013439  0.0013444
0.0014096  0.0014102
0.0014137  0.0014142
0.0015619  0.0015625
0.0016037  0.0016043
0.0016061  0.0016067
0.0016215  0.001622
0.0016648  0.0016654
0.0018124  0.001813
0.0018289  0.0018295
0.0019693  0.00197
0.0019706  0.0019712
0.0020187  0.0020193
0.0021656  0.0021663
0.0023321  0.0023327
0.0023682  0.0023689
0.0023794  0.0023801
0.0023801  0.0023807
0.0023807  0.0023814
0.0024344  0.002435
0.0025978  0.0025985
0.0026081  0.0026087
0.0026142  0.0026149
0.0026149  0.0026156
0.002791  0.0027917
0.0060378  0.0060397
0.0075107  0.0075134
0.0075189  0.0075216
0.0075244  0.0075271
0.0075326  0.0075354
0.0075354  0.0075381
0.0075408  0.0075436
0.0075491  0.0075518
0.0075684  0.0075711
0.0075739  0.0075766
0.0075822  0.0075849
0.0075988  0.0076015
0.0076015  0.0076043
0.0076043  0.0076071
0.0076071  0.0076099
0.0077585  0.0077613
0.0077641  0.007767
0.0077783  0.0077812
0.0078411  0.007844
0.0078469  0.0078497
0.0078583  0.0078612
0.0078612  0.0078641
0.0078814  0.0078843
0.0078987  0.0079016
0.0079045  0.0079074
0.0081447  0.008148
0.0081683  0.0081717
0.0081784  0.0081818
0.0082022  0.0082055
0.0082704  0.0082738
0.0082841  0.0082875
0.0083013  0.0083048
0.0083844  0.0083879
0.0083949  0.0083984
0.0083984  0.0084019
0.0084019  0.0084054
0.0084369  0.0084404
0.0086797  0.0086834
0.0087271  0.0087308
0.0087528  0.0087564
0.0087916  0.0087957
0.0090252  0.0090294
0.0090464  0.0090506
0.0090933  0.0090976
0.0091622  0.0091665
0.0092186  0.009223
0.0092667  0.0092711
0.0093862  0.0093906
0.0093906  0.0093951
0.0094085  0.009413
0.0095173  0.0095223
0.0095373  0.0095423
0.0097771  0.0097823
0.0098764  0.0098816
0.0099345  0.0099398
0.0099771  0.0099824
0.010063  0.010068
0.010237  0.010243
0.010249  0.010255
0.010414  0.01042
0.010464  0.01047
0.010539  0.010546
0.010609  0.010615
0.010622  0.010628
0.010654  0.01066
0.010692  0.010699
0.010705  0.010712
0.010825  0.010832
0.010968  0.010976
0.011005  0.011012
0.011123  0.01113
0.011152  0.01116
0.01119  0.011197
0.011549  0.011558
0.011575  0.011583
0.011844  0.011853
0.011976  0.011985
0.012034  0.012044
0.012063  0.012073
0.012152  0.012162
0.012212  0.012222
0.012262  0.012272
0.012312  0.012323
0.012374  0.012385
0.012418  0.012429
0.012473  0.012484
0.012687  0.012698
0.012826  0.012838
0.012863  0.012875
0.012875  0.012888
0.012938  0.01295
0.013724  0.01374
0.013931  0.013948
0.014421  0.01444
0.014517  0.014537
0.014635  0.014655
0.014714  0.014734
0.015034  0.015056
0.015125  0.015149
0.015242  0.015266
0.015266  0.01529
0.01529  0.015313
0.015313  0.015337
0.015337  0.015361
0.015513  0.015538
0.015642  0.015668
0.015668  0.015695
0.015833  0.015861
0.015861  0.015889
0.015889  0.015917
0.015917  0.015945
0.016004  0.016034
0.016278  0.01631
0.01644  0.016473
0.016506  0.01654
0.01654  0.016575
0.01661  0.016644
0.016715  0.01675
0.017052  0.017092
0.017254  0.017296
0.017296  0.017339
0.017513  0.017559
0.017604  0.01765
0.017842  0.017891
0.017993  0.018045
0.018545  0.018606
0.018794  0.018858
0.018858  0.018924
0.019349  0.019425
0.019425  0.019503
0.019582  0.019664
0.019664  0.019747
0.019747  0.019833
0.019833  0.01992
0.02001  0.020101
0.020101  0.020196
0.020196  0.020291
0.020291  0.020391
0.020598  0.020706
0.020706  0.020818
0.020818  0.020934
0.020934  0.021054
0.021054  0.021177
0.025558  0.026019
0.026019  0.026553
mariotrevi commented 5 years ago

@km-git-acc
? epsilon
%1008 = 1.00000000000000000000000

The user-defined PARI/gp functions below can be copied and pasted into a gp session at the "?" prompt:

C4 = (X)->d12psi(p(X))/(2038431744*(Pi^8))+11*d8psi(p(X))/(5898240*(Pi^6))+19*d4psi(p(X))/(24576*(Pi^4))+Psi(p(X))/(128*(Pi^2))

N = (X)->floor(u(X)^2)

Psi = (Z)->cos(2*Pi*(Z^2-Z-1.0/16))/cos(2*Pi*Z)

R1 = (X)->(-1)^(N(X)-1)*Psi(p(X))/u(X)

R2 = (X)->(-1)^(N(X)-1)*(Psi(p(X))/u(X)-(1/(96*Pi^2))*d3psi(p(X))/(u(X)^3))

R3 = (X)->(-1)^(N(X)-1)*(Psi(p(X))/u(X)-(1/(96*Pi^2))*d3psi(p(X))/(u(X)^3)+(d2psi(p(X))/(64*(Pi^2))+d6psi(p(X))/(18432*(Pi^4)))/(u(X)^5))

R4 = (X)->(-1)^(N(X)-1)*(Psi(p(X))/u(X)-(1/(96*Pi^2))*d3psi(p(X))/(u(X)^3)+(d2psi(p(X))/(64*(Pi^2))+d6psi(p(X))/(18432*(Pi^4)))/(u(X)^5)-(d1psi(p(X))/(64*(Pi^2))+d5psi(p(X))/(3840*(Pi^4))+d9psi(p(X))/(5308416*(Pi^6)))/(u(X)^7))

R5 = (X)->(-1)^(N(X)-1)*(Psi(p(X))/u(X)-(1/(96*Pi^2))*d3psi(p(X))/(u(X)^3)+(d2psi(p(X))/(64*(Pi^2))+d6psi(p(X))/(18432*(Pi^4)))/(u(X)^5)-(d1psi(p(X))/(64*(Pi^2))+d5psi(p(X))/(3840*(Pi^4))+d9psi(p(X))/(5308416*(Pi^6)))/(u(X)^7)+(d12psi(p(X))/(2038431744*(Pi^8))+11*d8psi(p(X))/(5898240*(Pi^6))+19*d4psi(p(X))/(24576*(Pi^4))+Psi(p(X))/(128*(Pi^2)))/(u(X)^9))

Z = (X)->2*sum(Y=1,floor(sqrt(X/(2*Pi))),cos(rstheta(X)-X*log(Y))/sqrt(Y))

d12psi = (Z)->(Psi(Z-6*epsilon)-12*Psi(Z-5*epsilon)+66*Psi(Z-4*epsilon)-220*Psi(Z-3*epsilon)+495*Psi(Z-2*epsilon)-792*Psi(Z-epsilon)+924*Psi(Z)-792*Psi(Z+epsilon)+495*Psi(Z+2*epsilon)-220*Psi(Z+3*epsilon)+66*Psi(Z+4*epsilon)-12*Psi(Z+5*epsilon)+Psi(Z+6*epsilon))/(epsilon^12)

d1psi = (Z)->(-0.5*Psi(Z-epsilon)+0.5*Psi(Z+epsilon))/epsilon

d2psi = (Z)->(Psi(Z-epsilon)-2*Psi(Z)+Psi(Z+epsilon))/(epsilon^2)

d3psi = (Z)->(-0.5*Psi(Z-epsilon*2)+Psi(Z-epsilon)-Psi(Z+epsilon)+0.5*Psi(Z+epsilon*2))/(epsilon^3)

d4psi = (Z)->(Psi(Z-2*epsilon)-4*Psi(Z-epsilon)+6*Psi(Z)-4*Psi(Z+epsilon)+Psi(Z+2*epsilon))/(epsilon^4)

d5psi = (Z)->(-0.5*Psi(Z-3*epsilon)+2*Psi(Z-2*epsilon)-2.5*Psi(Z-epsilon)+2.5*Psi(Z+epsilon)-2*Psi(Z+2*epsilon)+0.5*Psi(Z+3*epsilon))/(epsilon^5)

d6psi = (Z)->(Psi(Z-3*epsilon)-6*Psi(Z-2*epsilon)+15*Psi(Z-epsilon)-20*Psi(Z)+15*Psi(Z+epsilon)-6*Psi(Z+2*epsilon)+Psi(Z+3*epsilon))/(epsilon^6)

d8psi = (Z)->(Psi(Z-4*epsilon)-8*Psi(Z-3*epsilon)+28*Psi(Z-2*epsilon)-56*Psi(Z-epsilon)+70*Psi(Z)-56*Psi(Z+epsilon)+28*Psi(Z+2*epsilon)-8*Psi(Z+3*epsilon)+Psi(Z+4*epsilon))/(epsilon^8)

d9psi = (Z)->(-Psi(Z-5*epsilon)+8*Psi(Z-4*epsilon)-27*Psi(Z-3*epsilon)+48*Psi(Z-2*epsilon)-42*Psi(Z-epsilon)+42*Psi(Z+epsilon)-48*Psi(Z+2*epsilon)+27*Psi(Z+3*epsilon)-8*Psi(Z+4*epsilon)+Psi(Z+5*epsilon))/(2*(epsilon^9))

p = (X)->u(X)^2-N(X)

rs1 = (X)->Z(X)+R1(X)

rs2 = (X)->Z(X)+R2(X)

rs3 = (X)->Z(X)+R3(X)

rs4 = (X)->Z(X)+R4(X)

rs5 = (X)->Z(X)+R5(X)

rstheta = (X)->(X/2.0)*log(X/(2.0*Pi))-X/2.0-Pi/8.0+1/(48.0*X)+7.0/(5760.0*X^3)+31.0/(80640*X^5)+381.0/(1290240*X^7)

u = (X)->(X/(2*Pi))^(0.25)

mariotrevi commented 5 years ago

@km-git-acc I use \p 97 and epsilon = 0.00001 or less.

? \p
realprecision = 115 significant digits (97 digits displayed)
? epsilon
%1064 = 9.9999999999999999999999999999999999999999999999999999999999999999999999999999 E-6

mariotrevi commented 5 years ago

I'll try attaching the *.gp file: riemannsiegel6.gp.txt

rudolph-git-acc commented 5 years ago

@km-git-acc

Just arrived home. The 9x10e21 run has reached t=0.0028 at ~16000 rectangles. I expect it will require a few more days to complete ('num' per rectangle started at 11 mln and is currently around 7 mln; a 'num' of 4 mln required ~2 days to complete in the previous run). When t > 0.01 it speeds up considerably.

km-git-acc commented 5 years ago

@rudolph-git-acc 51 tasks left now

0.016824  0.016862
0.016787  0.016824
0.0088866  0.0088908
0.0087998  0.0088039
0.0086688  0.0086725
0.0086074  0.008611
0.0084474  0.0084509
0.0084194  0.0084229
0.0083324  0.0083358
0.0083289  0.0083324
0.0063739  0.0063759
0.0062207  0.0062227
0.0059356  0.0059375
0.0051417  0.0051432
0.0045663  0.0045674
0.0045416  0.0045427
0.0045281  0.0045293
0.0045203  0.0045214
0.0044924  0.0044935
0.0044491  0.0044502
0.004229  0.0042301
0.0037762  0.0037772
0.003534  0.003535
0.0032886  0.0032896
0.0027847  0.0027854
0.0027833  0.002784
0.0025463  0.002547
0.0023788  0.0023794
0.00055629  0.00055667
0.00054789  0.00054827
0.00054675  0.00054713
0.00054408  0.00054446
0.0005437  0.00054408
0.00021239  0.00021275
0.00021021  0.00021057
0.00015517  0.00015553
0.00014508  0.00014544
0.00014111  0.00014147
0.00013428  0.00013464
0.00013284  0.0001332
0.00013104  0.0001314
0.00013068  0.00013104
0.00013032  0.00013068
0.00012996  0.00013032
0.00012961  0.00012996
0.00012709  0.00012745
0.00012529  0.00012565
0.00012278  0.00012314
0.00012242  0.00012278
0.00012206  0.00012242
0.0001217  0.00012206

@mariotrevi Thanks, checking out the gp file. It seems rs5 and rstheta are the main functions. Are the Gram points g_n found by applying a root solver to rstheta? Also, is there a companion file with the entire loop procedure that traverses the Gram points?

mariotrevi commented 5 years ago

@km-git-acc The gp file implements the Riemann-Siegel formula with 1, 2, 3, 4, and 5 correction terms, beyond the main term. These are the functions rs1 through rs5. rstheta is the Riemann-Siegel theta function and is used to find Gram points, which are solutions of theta(t) = n*pi, n = 0,1,2, ... . The root solving for the Gram points and the counting of sign changes are implemented in a C language program gramriemannsiegel3gp800a.c which duplicates the zeta root-isolation range of Brent 1979 reported on in: On the zeros of the Riemann zeta function in the critical strip at https://www.ams.org/journals/mcom/1979-33-148/S0025-5718-1979-0537983-2/ . gramriemannsiegel3gp800a.c.txt
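For what it's worth, here is a minimal PARI/gp sketch of that root-solving step (not the actual C implementation), reusing the rstheta function defined earlier in this thread; the helper name grampt and the crude bracketing interval are just illustrative:

\\ hypothetical helper: the n-th Gram point g_n solves rstheta(t) = n*Pi
\\ rstheta is monotonically increasing for t >= 10, and 10*(n+2) safely exceeds g_n, so a crude bracket suffices for solve()
grampt(n) = solve(t = 10, 10*(n + 2), rstheta(t) - n*Pi);

As a quick sanity check, grampt(0) should come out close to 17.8456 and grampt(1) close to 23.1703.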

rudolph-git-acc commented 5 years ago

@km-git-acc @mariotrevi

After playing with the various zero counting formulae, I found a peculiar closed form for a contour integral. Not sure if it is correct, however the results are (almost) accurate.

It is well known that:

image

Both sides have their only zero at 1/2 +/- 6.289...i. Interestingly, when both sides are integrated and subtracted as follows:

image

we get the number of non-trivial zeros < T. This integral has the following closed form:

image

that evaluates extremely fast in ARB also at very high T. The beauty is that only the vertical sides of the contour need to be evaluated since the imaginary parts of the integral across the horizontal lines always end up zero. There definitely is a contour at work here, since when I widen the contour (currently -1...2), I get more zeros (i.e. the trivial ones start to get counted as well). I believe that when the contour width is reduced to 0 (at Re(s)=1/2) we exactly get the closed form for counting zeros on the line only.

There is a snag though: at T=10^9 I found the zero count to be 1 off, although when you move T slightly up or down it synchronises again. I haven't been able to figure out why, but maybe by selecting the right spot (with thin zero density), this fine-tuning could be controlled. I am keen to better understand this integral. Do you guys see why this actually works?

EDIT: here is a table comparing the root count using the contour and only the critical line:

image

For the ones that differ, I could check the lower ones, and these indeed all have nearby zeros (distance < 0.11).

mariotrevi commented 5 years ago

@km-git-acc @rudolph-git-acc The zeros computed by David Platt can be browsed here: http://www.lmfdb.org/zeros/zeta/ The version of his paper just prior to publication is available here: https://research-information.bristol.ac.uk/files/78836669/platt_zeta_submitted.pdf After Theorem 5.1 he writes: "The imaginary parts of these zeros (13 bytes each so occupying a total of 1.3 Tbytes) have been stored and made available by Bober via the LMFDB project [5]. Researchers are invited to contact the author if they require copies of some or all of this data for their own purposes."

km-git-acc commented 5 years ago

@rudolph-git-acc @mariotrevi Well, the results using the new formula do look amazing. Also, can you share a link to the paper with the psi(s) formula for further reading, and the gp code?

I have downloaded the gramriemannsiegel3gp800a.c.txt file and will try compiling it. Platt had isolated all the zeros in his work to high precision up to a certain T (around 3*10^10), so we should definitely begin on the region beyond that.

rudolph-git-acc commented 5 years ago

@km-git-acc @mariotrevi

On the new formula, here are a few links on the psi(s) formula:

https://mathoverflow.net/questions/134361/complex-zeros-of-zetas-zetas-zeta1-s-zeta1-s-simpler-expres

https://mathoverflow.net/questions/206198/are-all-complex-zeros-of-dfrac-zeta-zetas-pm-dfrac-zeta-zeta1-s

Simplifying this equation:

image

from the latter link yields:

I also asked about it on MSE today, and it appears branch cuts of logs play a big role in explaining the difference between the two integrals (Maple handles this differently from Mathematica):

https://math.stackexchange.com/questions/2969120/does-this-contour-integral-actually-count-the-roots-of-zetas-with-imaginary

I just checked the intnum in pari/gp and there both integrals are equal. So, this seems to be a phenomenon of (analytical) integral evaluation in Maple. Here is the code for the count in pari/gp:

N(t)=1/(2*Pi)*(imag(lngamma(2+t*I))+Pi-t*log(2*Pi)+arg(zeta(2+I*t))-arg(zeta(-1-I*t)))

The question however remains on what the closed form N(T) is actually counting that brings it so close to the non-trivial zero count...

P.S. The 9x10e21 Tloop has reached 21,000 rectangles today and has now covered all t = 0...0.0041.

km-git-acc commented 5 years ago

@rudolph-git-acc @mariotrevi Getting the correct answers quickly at such large heights should mean something is indeed right. Also checked the MSE conversation.

The tloop output for 9x10e21 is now complete, and is uploaded here https://github.com/km-git-acc/dbn_upper_bound/blob/master/output/windingnumbers/windnum_nolemma_x9000000000000000070686_y_0.11832_1_t_0_0.093.txt so conditionally we now have dbn <= 0.1!!

rudolph-git-acc commented 5 years ago

@km-git-acc

Woooooohoooooo !!! A conditional DBN <= 0.1 :-)

I'll let my run finish as well so we can compare output (it is accelerating, so estimate < 3 days).

On counting the zeros and trying to 'nibble on the conditional DBN domain', I believe it all boils down to:

  1. Counting zeros on the critical line. For this a closed form exists (Z(t)) that evaluates fast at high T.

  2. Counting zeros in a rectangle that encompasses the critical strip. For this no closed form exists (to my knowledge) and there is always an integral involved (so it gets increasingly slower). I got confused for a moment by the formulae in this link: https://fr.wikipedia.org/wiki/Fonction_zêta_de_Riemann#La_fonction_S(T) that appear to suggest a closed form. After some closer reading I found that the arg-function is actually defined as the half contour again, hence the two integrals from Fredrik's formula. That still leaves the question open what it is that the new formula is counting and why the results are so close to the critical line count.

EDIT: worked the formula further in Maple and managed to get rid of all the zeta's. This function computes all N(T) in a fraction of a second even at very high T. What is it that is being counted?

image

Further simplifications appear possible; however, I found some counts then become one too low, so I was cautious about going further.

Pari/gp code:

N(T)=1/(2*Pi)*(arg(cosh(T*Pi/2))+Pi-arg(-2^(-1-I*T)*Pi^(-2-I*T)*sin((1+I*T)*Pi/2)*gamma(2+I*T))+imag(lngamma(2+I*T)-I*T*log(2*Pi)))
print(N(1000000000000000));
N(T)=1/(2*Pi)*imag((-2*log(Pi)-log(2)+I*Pi-log(2^(-1-I*T)*Pi^(-2-I*T)*sin(-(1+I*T)*Pi/2)*gamma(2+I*T))+lngamma(2+I*T)-I*T*log(2*Pi)+log(cosh(T*Pi/2))))
print(N(100000));

Used this code to compare counting the zeros on the line f(t) with counting within the contour N(T):

default(realprecision, 10)
f(t)=imag(lngamma(1/4+I*t/2))/Pi-t/(2*Pi)*log(Pi)+imag(log(zeta(1/2+I*t)))/Pi+1;
N(T)=1/(2*Pi)*imag((-2*log(Pi)-log(2)+I*Pi-log(2^(-1-I*T)*Pi^(-2-I*T)*sin(-(1+I*T)*Pi/2)*gamma(2+I*T))+lngamma(2+I*T)-I*T*log(2*Pi)+log(cosh(T*Pi/2))))
forstep(i=14,15,0.01, print(i, ", ",round(f(i)),", ",round(N(i)), ", ", round(f(i)-N(i))));

and it is pretty clear that I am not counting non-trivial zeros with the contour but something 'close' to it (I guess some zeros related to a difference between (log) gamma-functions). This route is therefore not going to help in obtaining a fast closed form for the contour integration required to find the non-trivial zeros.

mariotrevi commented 5 years ago

@rudolph-git-acc The determination of N(T) is possible using Turing's method, and a count of zeta zeros on the critical line. Turing's method gives an upper bound for N(g_m), where g_m is a Gram point. For example, one may obtain N(g_m) <= m+1 using Turing's method, and have found m+1 zeta zeros on the critical line up to height g_m. Then N(g_m) = m+1. This is covered in Glen Pugh's thesis in Section 4.3 pages 40 to 44. Here's a link to Pugh's thesis: https://web.viu.ca/pughg/thesis.d/masters.thesis.pdf

rudolph-git-acc commented 5 years ago

@km-git-acc @mariotrevi

Asked Fredrik about the slowness of the contour integral for T > 10^9 and although he has a few ideas on how to further optimise the evaluation of zeta at higher T, his steer is to opt for Turing's method and remove the need for a contour entirely.

Thanks for the link, Mario. Will study this further. Turing's method is also explained in Edwards book on the zeta-function:

image

km-git-acc commented 5 years ago

@rudolph-git-acc @mariotrevi It's possible that S(T), which is essentially the change in the argument of zeta around the contour, is being calculated approximately by Maple by subtracting the two arg values, and the approximation is good enough that the results almost never differ much from the actual count, which is why we are seeing such close answers. S(T) also changes sign infinitely often, so for most T the delta-arg value would be small. It would be interesting to check at a large T height (say close to T=3*10^10, where we know the actual zeros from the lmfdb link): if we choose a T range of 100k or 1 million and evaluate the Maple function at, say, all integer T in that range, for what proportion of such T does the N(T) answer match or stay close to the actual N(T)? (EDIT: although going by this paper https://arxiv.org/abs/1309.1526 and assuming o(1) ~ 0, the S(T) bound comes out to be 1.89 at 3*10^10, and if the bound is not tight enough, the answers may always differ by at most 1. We would have to test the same at a much larger height, but there we don't know the actual zeros.)
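For what it's worth, here is a rough pari/gp sketch of that experiment at a (much) lower height, simply reusing the f(t) and N(T) definitions posted above; the height range and integer step are purely illustrative and not the T ~ 3*10^10 region discussed:

default(realprecision, 10);
f(t)=imag(lngamma(1/4+I*t/2))/Pi-t/(2*Pi)*log(Pi)+imag(log(zeta(1/2+I*t)))/Pi+1;
N(T)=1/(2*Pi)*imag((-2*log(Pi)-log(2)+I*Pi-log(2^(-1-I*T)*Pi^(-2-I*T)*sin(-(1+I*T)*Pi/2)*gamma(2+I*T))+lngamma(2+I*T)-I*T*log(2*Pi)+log(cosh(T*Pi/2))));
\\ tally how often the fast closed form agrees with the critical-line count on a small sample
\\ (this takes a little while, since pari evaluates zeta directly at each height)
agree=0; total=0;
forstep(T=10000, 10100, 1, total=total+1; if(round(f(T))==round(N(T)), agree=agree+1));
print(agree, " matches out of ", total);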

I didn't get enough time to compile and test the C file that uses Turing's method, but I hope to start on it next week. Alternatively, given that you have been running it on your local machine, if you can share the executable (which takes only two inputs, T1 and T2) and all its library dependencies, I can turn it into a self-contained Docker package which can then be run from anywhere and easily processed on the Boinc grid.

mariotrevi commented 5 years ago

@km-git-acc @rudolph-git-acc I think one has to integrate zeta'/zeta from 3/2 to 3/2+iT and then from 3/2+iT to 1/2+iT to find N(T), as in Section 6.6 of Edwards' book or Section 4.3 of Glen Pugh's thesis. In other words, one can't simply take log(zeta(1/2+I*T)) - log(zeta(3/2)) as the value of the integral. According to an article by Goldston, S(t) is unbounded as t -> oo; this follows from an Omega result of Selberg, |S(t)| = Omega( log(t)^(1/3)/log(log(t))^(7/3) ), formula 1.3 in http://www.math.sjsu.edu/~goldston/MyS(t).pdf Because S(t) is unbounded, N(T) can be arbitrarily far from the average theta(T)/pi + 1, and I'm not convinced exact formulas for N(T) exist without zeta or xi in them...

The latest version of my C code has two lines where one defines the initial Gram point and the final Gram point. The program counts the sign changes of Z(t) between those two Gram points. As it is now, the program requires that the initial Gram point g_m and the last Gram point g_n both be good, i.e. (-1)^m * Z(g_m) > 0 and (-1)^n * Z(g_n) > 0. This is so that the initial Gram point begins a Gram block, and the last Gram point finishes a Gram block. The root counting is done over Gram blocks: the program counts zeros of Z(t) on an interval of consecutive, contiguous Gram blocks and reports the number. It doesn't verify RH at present. I'm attaching gramriemannsiegel3gp1400a.c.txt with the C source code. gramriemannsiegel3gp1400a.c.txt

mariotrevi commented 5 years ago

@km-git-acc @rudolph-git-acc I find that S(282.46) = N(282.46) - theta(282.46)/Pi - 1 = -1.0031975, and N(282.46) = 126. (The 127th zero is 282.465114765 and the 126th zero is 279.2292509.) On the other hand, NN(282.46) = 127.00000000000000000000000000000000000, where my NN is Rudolph's N, i.e.:

NN = (T)->1/(2*Pi)*(arg(cosh(T*Pi/2))+Pi-arg(-2^(-1-I*T)*Pi^(-2-I*T)*sin((1+I*T)*Pi/2)*gamma(2+I*T))+imag(lngamma(2+I*T)-I*T*log(2*Pi)))

Similarly, NN(281.63) = 127.0.

I'm attaching gramriemannsiegel3gp1600a.c.txt, which can count sign changes of Z(t) at heights 10^10 and more. But Riemann-Siegel is quite slow at t = 10^10. gramriemannsiegel3gp1600a.c.txt

rudolph-git-acc commented 5 years ago

@km-git-acc @mariotrevi

The zeta-function on ARB evaluates pretty fast even at T > 10^10 (using RS), so once Turing's method is fully working we could try to replicate it in ARB.

I have read Glen Pugh's thesis once again and Turing's method is becoming clearer each time I go through it. The one thing I am confused about is the results section 5.2. What I read is:

  1. count all the Gram points up to g_12193873 = 6000000.485999
  2. perform Turing's correction method from g_12193873...g_12193873+19 (both end points must be good Gram points) and check whether the outcome of the calculations is < 2. If so, add 1 to 12193873.

Do I understand it correctly that all the Gram blocks (i.e. bad Gram points bracketed by good Gram points) < g_12193873 are basically ignored in this method and that the only evaluation of Z(t) happens in the additional Gram block (as under 2.)? Or are all these Gram blocks actually counted as 1 good Gram point and are the Z(t) zeros in between the block 'brackets' counted as good Gram-points instead of the bad Gram points? I guess the latter, but still confused.

mariotrevi commented 5 years ago

@rudolph-git-acc @km-git-acc The explanation of root counting isn't all that detailed. The point of identifying Gram blocks is that they usually satisfy Rosser's rule, which says that a Gram block [g_m, g_{m+k}] has at least k roots of Z(t). My understanding is that Pugh's root counting functions show that N(g_12193873) >= 12193874, the figure in point 2 of Section 5.2. N(T) = S(T) + theta(T)/pi + 1, so if S(T) <= 0, then N(T) <= theta(T)/pi + 1. Pugh shows that S(g_12193873) <= 0 using Turing's method [page 48], and it follows that N(g_12193873) <= theta(g_12193873)/pi + 1 = 12193873 + 1 = 12193874. Turing's method only involves a few evaluations of Z(t) starting at g_12193873, and it shows N(g_12193873) <= 12193874. However, Turing's method doesn't locate the <= 12193874 roots on the critical line (some could be off it). So root counting of Z(t) is also needed to locate 12193874 roots as being on the critical line. Combining the root finding with this last inequality gives N(g_12193873) = 12193874, with all these roots lying on the critical line.

Point 3. of Section 5.2 had me confused, as it reports 3.3 million roots in 1.5 million Gram blocks. I believe Pugh counts only Gram blocks of length >=2, although Brent [1979] counts Gram blocks of lengths >=1. Since Rosser's rule is mentioned in Pugh's source code [page 66] , I'll give a link to Brent's paper: http://www.ams.org/journals/mcom/1979-33-148/S0025-5718-1979-0537983-2/S0025-5718-1979-0537983-2.pdf Sections 3. and 4. mention Gram blocks, Rosser's rule and Turing's method.

rudolph-git-acc commented 5 years ago

@km-git-acc @mariotrevi

Just for the purpose of better understanding Turing's method, I have now fully replicated the code from Pugh's thesis in pari/gp. Note that I have currently simply used the exact formula for Z(t) (i.e. evaluating zeta(1/2+I*t) instead of the RS-approximation), so it will quickly get slower at higher t. Nice to see how Gram blocks are being 'repaired' by counting zeros of Z(t) instead of the bad Gram points, and also how the calculation of the S(t)-bound at the end provides the additional evidence that the RH is true below the chosen Gram point. As an extra check, I have also added the N0(t) counting function for the non-trivial zeros on the critical line. Will now try to rewrite it in ARB, which will allow using the fast RS proxy for Z(t) and should bring the required speed for scaling.

default(realprecision, 10);

\\Test counting function on the critical line to verify Turing's method works correctly
N0(t)=return(imag(lngamma(1/4+I*t/2))/Pi-t/(2*Pi)*log(Pi)+imag(log(zeta(1/2+I*t)))/Pi+1);

\\Riemann-Siegel Z-function
Z(t) = return(exp(I*(imag(lngamma(I*t/2+1/4))-t/2*log(Pi)))*zeta(1/2+I*t));

\\Returns 1 if n=even and -1 if n=odd
even(n)=return(if(n%2==0,1,-1))

\\Function to establish a good approximation for the n-th Gram-point
gram(n) = {
    tn=0; tn1=0.5*n+20;
    while(abs(tn-tn1) > 0.00000001, tn=tn1; tn1 = tn-(tn*log(tn/Pi/2)/2-tn/2-Pi/8+1/tn/48+7/5760/(tn^3)- n*Pi)/(log(tn/Pi/2)/2-1/(tn^2)/48-7/1920/(tn^4)));
return(tn1);
}

\\Function to check that the number of zeros of Z(t) within a block of Gram-points gn to gm meets the expected number
gramblock(gn, gm) = {
    p=1; num=0;
    while(num < (gm-gn) && p < 1024,
        num=0; p=2*p; j=gn; gp2=gram(j);
        while(j <= (gm-1),
            k=0;
            while(k <= (p-1),
                gp1=gp2; gp2=gram(j)*(1-(k+1)/p)+gram(j+1)*(k+1)/p;
                if(real(Z(gp1))<0 && real(Z(gp2))>0, num=num+1);
                if(real(Z(gp1))>0 && real(Z(gp2))<0, num=num+1);
                k=k+1;
            );
        j=j+1;
        );
    );
    if (num < (gm - gn), print("Partition depth reached in Gram block: (",gn, ", ", gm,")"));
return(num);
}

\\find the adjustments h_m for bounding S(gn)=N(gn)-theta(gn)/pi-1
adjustments(gm, samp) = {
    k=1; ginit = gm - samp; step = 0.1; htot=0;
    g = vector(samp, j, gram(ginit+j));h = vector(samp, j, 0);
    while (k <= samp,
        n=0; 
        while((even(ginit+k)*real(Z(g[k]-n*step))) < 0 && (even(ginit+k)*real(Z(g[k]+n*step)) < 0), n=n+1);
        if(even(ginit+k)*real(Z(g[k]-n*step)) > 0, h[k] = -n*step, h[k]=n*step);
        htot = htot+h[k];
        if((k>1) && (g[k]+h[k] <= g[k-1]+h[k-1]), htot=0; step=step/2; k=1, k=k+1);
    );
    m=1;while(m <= samp, printf("%9.0f, %8.3f, %4.3f, %8.3f, %5.3f \n",ginit+m, g[m], h[m], g[m]+h[m], real(Z(g[m]+ h[m]))); m=m+1);
return(htot);
}

\\main program to verify the RH up till end-value
main(end) = {
    gi=-1; gf=end; tot_roots = 0; gb_roots=0;
    while(gi < gf,
        gi=gi+1;
        if (even(gi)*real(Z(gram(gi))) > 0, tot_roots=tot_roots+1,
            gbs = gi-1;
            while(even(gi)*real(Z(gram(gi))) < 0, gi=gi+1);
            gbe=gi; gb_roots = gramblock(gbs, gbe); tot_roots = tot_roots + gb_roots;
        );
    );
    if(gi > end, printf("\n WARNING: %3.0f is a bad Gram point and it has been augmented to the first good Gram-point %3.0f \n", end, gi));
    printf("\n");
    printf("The total number of non-trivial zeros <= the %3.0f-th Gram point (T=%3.3f) = %3.0f. \n", gi, gram(gi), tot_roots);
    printf("\n");
    printf("Please inspect the table below to ensure values in the 4th column are monotonically increasing: \n \n");
    htot=adjustments(gi+10, 10);    
    printf("\n");  
    sbound = 1+1/(gram(gi+10)-gram(gi))*(2.30+0.128*log(gram((gi+10))/(2*Pi))+htot);
    printf("The value of S(%3.3f) = %3.3f must be < 2 and if so, the RH is true <= %3.0f-th Gram point. \n", gram(gi), sbound, gi);
    printf("\n");
    printf("Please compare it to the number of roots counted on the critical line N0(%3.3f) = %3.0f. \n", gram(gi), N0(gram(gi)));    

return(tot_roots);
}

\\enter a gram point below which non-trivial zeros should be counted
main(1000);
mariotrevi commented 5 years ago

@rudolph-git-acc @km-git-acc That's great. I tested main and if I enter a bad Gram point, such as 126, it gives: main(126)

The total number of non-trivial zeros <= the 126-th Gram point (T=282.455) = 128 (it should be 126). I think main(126) counts the roots in the Gram block [g_125, g_127], so it agrees with main(127) = 128.

? N0(gram(126))
%23 = 128.0000000

One could numerically integrate zeta'/zeta as on pages 39 and 40 of Pugh's thesis. There's a sequence of bad Gram points at OEIS: https://oeis.org/A114856

I have a Pari/gp function n1 which counts the zeros below some height:

? n1(282.4547208, 4)
%33 = 126.00000805927078481655223441099176444

with:

n1 = (T,K)->imag(xf(T,K))/Pi+real(ntheta(T))/Pi+1

ntheta = (t)->(lngamma((2*I*t+1)/4)-lngamma((-2*I*t+1)/4))/(2*I)-t*log(Pi)/2

xf = (T,K)->intnum(t=3/2,3/2+T*I,zetap(t)/zeta(t),K)+intnum(t=3/2+T*I,1/2+T*I,zetap(t)/zeta(t),K)

zetap = (X)->(zeta(X+epsilon)-zeta(X-epsilon))/(2*epsilon)

? epsilon
%34 = 1.0000000000000000000 E-10

rudolph-git-acc commented 5 years ago

@km-git-acc @mariotrevi

Great; however, there is still something not right in the code. When I run main(100000), I get a difference of 1 between N0 and the Turing method. Not sure what's happening here. Could it be that it fails when you pick a 'bad' Gram point as the highest T? If that is the case, we only need to add the final +1 correction step then (as long as the S(T) bound is < 2).

P.S. We have access to an incredibly fast ARB routine to perform the 'classical' contour integral of zeta'/zeta. It verifies the entire strip in < 30 minutes up to T=10^9 (!). Unfortunately, above that it 'stalls' and gets extremely slow, so only the Turing method will probably be feasible.

mariotrevi commented 5 years ago

@rudolph-git-acc @km-git-acc I edited your "main", replacing:

while(even(gi)*real(Z(gram(gi))) < 0, gi=gi+1);

by:

while(even(gi)*real(Z(gram(gi))) < 0, gi=gi+1; print(gi," "));

When gi=126, gi is a bad Gram point, so the program increments gi to 127:

? main(126)
127  // from the print(gi," ")

The total number of non-trivial zeros <= the 126-th Gram point (T=282.455) = 128

You could require end to be a good Gram point, print a message saying end is a bad Gram point, and stop. When end is a good Gram point, the last Gram block will finish at "end". If end is a good Gram point, the zero count found should be <= the actual zero count. For example, a Gram block of length 2 can contain no zeros, and this is a violation of Rosser's rule, that a Gram block of length L contains at least L zeros. Brent's 1979 paper discusses Rosser's rule and some violations of it: https://www.ams.org/journals/mcom/1979-33-148/S0025-5718-1979-0537983-2/ A minimal sketch of such a guard follows below.
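A minimal pari/gp sketch of that guard, reusing the gram(), Z() and even() helpers from the script above (the function name isgood is just illustrative):

\\ hypothetical check: a Gram point g_n is 'good' when (-1)^n * Z(g_n) > 0
isgood(n) = even(n)*real(Z(gram(n))) > 0;

\\ possible use at the top of main(): insist on a good end point and stop otherwise
\\ if(!isgood(end), print("Please pick a good Gram point as the end value"); return(0));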

rudolph-git-acc commented 5 years ago

@mariotrevi

Bit confused now. I believe the correct number of non-trivial zeros < the 126-th Gram point (i.e. T=282.455) is 128 according to my N0-counting formula. This 128 is also correctly calculated by main(126), so I don't see where the problem is. In a Gram block of bad Gram points we should count the non-trivial zeros of Z(t) instead of the Gram points.

The main(100000) run does produce a difference of 1. Will need to take a deep dive on why that happens.

The total number of non-trivial zeros <= the 100000-th Gram point (T=74921.895) = 100003.

Please inspect the table below to ensure values in the 4th column are monotonically increasing:

100001, 74922.565, 0.100, 74922.665, -0.255
100002, 74923.234, 0.000, 74923.234, 0.861
100003, 74923.903, 0.000, 74923.903, -0.335
100004, 74924.573, 0.000, 74924.573, 2.465
100005, 74925.242, 0.000, 74925.242, -0.743
100006, 74925.912, -0.100, 74925.812, 0.111
100007, 74926.581, -0.100, 74926.481, -0.549
100008, 74927.250, 0.000, 74927.250, 5.637
100009, 74927.920, 0.000, 74927.920, -2.541
100010, 74928.589, 0.000, 74928.589, 5.158

The value of S(74921.895) = 1.508 must be < 2 and if so, the RH is true <= 100000-th Gram point.

Please compare it to the number of roots counted on the critical line N0(74921.895) = 100002.

rudolph-git-acc commented 5 years ago

@km-git-acc

The 9x10e21 run completed on my Threadripper machine today. Have uploaded the winding number output (added 'one go' to its name). It required continuous computations for ~14 days, but hey, this extreme run near 10^22 did complete successfully on a home computer :-)

Note that it needed to process ~7000 fewer rectangles than the Boinc version (where I guess the splits induce some redundancy), and the winding number also ended up at 0.

Here is the updated table that can now be incorporated in the write-up (I guess with some additional lyrics).

image

km-git-acc commented 5 years ago

Great. By the way, since the three of us are assigned to the thread, we may not need to use the @usernames within this thread and should still receive email notifications. Let me know if this feature works as intended.

I checked the pari/gp code at my end and it's working. Will try to alter it so that main can accept both 'start' and 'end' inputs, to move towards parallelization. Also, the print(gi) modification in the while loop seems to print the bad Gram points (or the ones just above them), which is useful information and could be written into a file instead of only printed on the screen (a small sketch of that follows below).
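A one-line sketch of that tweak inside main(), assuming we simply log to a file whose name is illustrative; write() appends one line per call:

\\ hypothetical change: record each bad Gram point index while skipping over it
while(even(gi)*real(Z(gram(gi))) < 0, write("bad_gram_points.txt", gi); gi=gi+1);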

Interesting to see that a single machine can tackle a huge task like the 9x10e21 tloop. The number of rectangles with a non-parallel process is indeed smaller since the t-steps are optimal. Will modify the write-up to include the above table.

rudolph-git-acc commented 5 years ago

Have now replicated the code in ARB, but I am not yet happy with the speed. Starting to wonder whether a two-staged approach (like we did with the Barrier) could work here as well. The first run could then loop over the Gram points and label them as 'good' or 'bad'. Such a process would be very easy to cut into smaller chunks without having to worry about the cut-off points. If we would try to fix the Gram blocks in the same run as well, the cut-off points would become much more difficult to reconcile (e.g. the cut could happen in the middle of a Gram block). Such a first stage would also build up a permanent "audit trail" for others to replicate the results, and it could produce a lot of good statistics (number of Gram blocks, size of Gram blocks, etc.). A rough sketch of such a stage-1 labelling loop is below.
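A rough pari/gp sketch of that stage-1 labelling, reusing the gram(), Z() and even() helpers from the script above; the function name labelgram and the output filename are illustrative only (the real version would of course be the ARB/Boinc implementation):

\\ hypothetical stage 1: label Gram points n1..n2 as good (1) or bad (0) and write them out
labelgram(n1, n2) = {
    my(flag);
    for(n = n1, n2,
        flag = if(even(n)*real(Z(gram(n))) > 0, 1, 0);
        write("gram_labels.txt", Str(n, " ", gram(n), " ", flag));   \\ one record per Gram point
    );
}

A second stage could then read only the records flagged 0, reconstruct the Gram blocks around them, and repair/count those blocks the way main() does now.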

I did some timing checks on the current ARB code. Just establishing the Gram points can be done at a rate of 1mln/10sec. However also checking whether it is a good or a bad gram point currently adds quite a bit of time: 1mln/210sec (this could be caused by the fact that I am using the out-of-the-ARB-box Riemann Siegel Z-function and not yet the maybe faster 1..4-term approximation from Pugh).

Let's assume we could generate 5 mln (likely more) 'labelled' Gram points per hour and split this into, say, 10000 Boinc tasks. With the Gram blocks comprising ~30% of the total Gram points and a record length of 15 bytes, this would produce a file of 0.3x10^10 = ~3 Terabyte that should then be fed into the second stage that 'repairs' the Gram blocks and completes the final step. This would imply that 10^11 would already require 30 Terabyte, etc. I now start to appreciate why people stopped at ~10^13...

What still really amazes me, is that Fredrik's contour integral verifies the RH using the 'classical' method in only 30 minutes at T=10^9 (!). I currently can't see how Turing's method (that requires a run through all gram points in a huge loop) could beat that result. So, maybe there is new life in this old method and new opportunities could be explored to stretch it by a few orders of magnitude.

mariotrevi commented 5 years ago

According to http://www.lmfdb.org/zeros/zeta/ , the 127th non-trivial zero is at 282.46511 and the 128th is at 283.211185, which would put them both above gram(126) = 282.4547208.

rudolph-git-acc commented 5 years ago

Yes, your numbers are correct; however, this is not how the script works. My understanding is as follows:

So, if you enter a bad gram point as your target value, the script will always extend this to the point where it finds the first good gram point (variable gi in the code; gi > end). This explains why the script finds 128 zeros. Or in a picture:

image

The interesting question is why the N0-counting function gives 128 at Gram point 126. I did read a bit more about this function and it actually suffers from the same bad Gram points. When you plug a bad Gram point into N0 it incorrectly jumps by +/- 1 or +/- 2.

The script's method could therefore be a blessing in disguise. Counting until the end of a bad Gram block prevents you from ever ending in the middle of a Gram block. The only change to make to the script is therefore to give a warning that the user entered a bad Gram point and that this choice has been overruled by calculating the first good Gram point. Then let N0(t) also check against this augmented value.

I changed the code accordingly and the nice thing is that it also fixed the 100000 anomaly (which turned out to be a bad Gram point)!

mariotrevi commented 5 years ago

I think your expression for N0(t) isn't quite correct. With respect to your function N0, you have the term imag(log(zeta(1/2+I*t)))/Pi. In Pugh's thesis, page 39, formula 4.1 for N(T) has the corresponding term (1/Pi)*Im( int_{C'} zeta'(s)/zeta(s) ds ), where C' is the contour on page 40. For the Gram point gram(195) I numerically get about -1 for (1/Pi)*Im( int_{C'} zeta'(s)/zeta(s) ds ). On the other hand, imag(log(zeta(1/2+I*t)))/Pi for t=gram(195) gives +1. N0(gram(195)) gives 197, but according to N(T), N(gram(195)) = 195.