km-git-acc / dbn_upper_bound

Computational effort to upper bound the de Bruijn-Newman constant as part of a Polymath project

Adjusted A'+B' estimates #44

Closed km-git-acc closed 6 years ago

km-git-acc commented 6 years ago

Some additions to mputility in the form of Ht_AFE_ADJ_AB, which calculates roots using A'+B' for large T (C is small and ignored). It's also about 2x faster than the previous functions due to some changes in how A' and B' are calculated.

fast_zero_calc has been updated to use Ht_AFE_ADJ_AB

There are also some other functions added to mputility such as AB_analysis and adjusted_AB_analysis which allow for comparison of A and B with B0 (and similarly for their adjusted counterparts).

km-git-acc commented 6 years ago

@sujitnair Agreed on the AB_analysis and the related functions. They were being used for comparison against B0 and some derivatives. I will put them in a separate file. Also, just added a function called Ht_Effective which estimates H_t with an error bound (https://terrytao.wordpress.com/2018/02/12/polymath15-third-thread-computing-and-approximating-h_t/#comment-492691) but the error estimate part isn't complete yet (especially the vwf integral). So we can hold off on merging this one right now. Will complete and organize it in the next 1 or 2 days.

dhjpolymath commented 6 years ago

Ok. It may be valuable to test the code with a few runs and compare accuracy.

sujitnair commented 6 years ago

@km-git-acc @dhjpolymath

The existing unittest is running successfully. The command is "python -m unittest" from the root folder.

But the existing test is only for Ht_AFE_ABC

I agree. We should add more.
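A comparison test could look something like this. This is a hedged sketch: the mantissas below are placeholder values standing in for Ht_AFE_ABC / Ht_AFE_ADJ_AB output, and `rel_diff` is a helper introduced here, not a repo function; a real test would call the actual functions in mputility.

```python
import unittest
from decimal import Decimal

def rel_diff(a, b):
    """Relative difference |a - b| / |a|."""
    return abs((a - b) / a)

class TestEstimateAgreement(unittest.TestCase):
    def test_estimates_agree_to_several_digits(self):
        # Placeholder mantissas standing in for two H_t estimates at the
        # same (t, z); the huge common power-of-ten exponent cancels in
        # rel_diff, so comparing mantissas is enough.
        abc = Decimal('3.83032620444064659232369201123')
        adj_ab = Decimal('3.83033116911543990573939264194')
        self.assertLess(rel_diff(abc, adj_ab), Decimal('1e-5'))
```

Run with "python -m unittest" from the root folder, as above.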

sujitnair commented 6 years ago

pylint score has gone down though :(

pylint --rcfile=pylintrc dbn_upper_bound/python/mputility.py

sujitnair commented 6 years ago

@dhjpolymath @km-git-acc

I moved the ab_analysis and adjusted_AB_analysis to a separate research folder

km-git-acc commented 6 years ago

I have made some changes to vwf_err. It's giving values between 1 and 2, and closer to 1 as the T height is increased, as expected. But the v function is not yet implemented for sigma < 0. That's sigma as the integration variable, not sigma0, which is just s.real.

Also, I checked the ADJ_AB function against both Ht_Effective and the ABC estimate. For large values (T=1 million or more) I see at least the first 2 to 3 significant digits matching, which indicates things are on the right track. Will post results for some sample values later. But some testing is required for smaller values of T before finalizing it.

Also, just wanted to check: how should we interpret the 1_{sigma<0} factor in eqn 4.3 (for the w function) at http://michaelnielsen.org/polymath1/index.php?title=Effective_bounds_on_H_t_-_second_approach Do reply on that.

km-git-acc commented 6 years ago

Also, having some issues with my machine that are getting fixed, so I'm laying low on the computations for a couple of days.

km-git-acc commented 6 years ago

@dhjpolymath

For t=0.4 and z=10 mil:
Ht_AFE_ABC : 3.83032620444064659232369201123e-1705458
Ht_AFE_ADJ_AB : 3.83033116911543990573939264194e-1705458
Ht_Effective : 3.83032479436943449570731271765e-1705458

All three match for the first few digits, so we know the formulas in the script are correct. But we should check how they diverge at smaller values.

km-git-acc commented 6 years ago

@sujitnair I will add more descriptions about the functions and input values. I wanted to do it at the top of the script once, so the functions look more elegant.

I have often been running the scripts through pylint, although it seems to give low scores primarily for two reasons. It says a lot of variable names are invalid, which confused me. It also wants spaces even when passing parameters to a function: instead of exp(a,b), which looks elegant, it wants exp(a, b). The space thing looks good when we are separating two functions, or between operators. Is it possible to tell pylint to treat math formulas differently?

sujitnair commented 6 years ago

@km-git-acc The exp(a, b) vs exp(a,b) is a Python thingy. I think for a lot of people the former looks readable especially when the codebase gets large. See for example the numpy codebase (https://github.com/numpy/numpy/blob/master/numpy/core/fromnumeric.py)

Let's ignore the "variable names are invalid" for now. I think it will help us if we keep the functions etc in sync with Terry's blog.
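If the team later wants to relax these checks via the rcfile instead, a fragment along these lines could work. This is illustrative only; the exact message names depend on the pylint version in use, and this is not the project's actual pylintrc.

```ini
[MESSAGES CONTROL]
# Silence the two complaints discussed above: short math-style variable
# names (invalid-name) and missing spaces after commas in calls like
# exp(a,b) (bad-whitespace).
disable=invalid-name,bad-whitespace
```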

km-git-acc commented 6 years ago

@sujitnair Anyway, the space thing is simple regex stuff, so if it's going to help people, I may as well do it. Also, did you get some of the files I had previously shared through Google Drive? I have just shared another one: roots for t=0.4 between T=600k and 800k.

sujitnair commented 6 years ago

@km-git-acc Yes, I got your file. I won't be able to get to it until tomorrow. I will add all of them to AWS and send out the link.

rudolph-git-acc commented 6 years ago

@km-git-acc

First of all, impressive job!

I am trying to mirror your code in Pari/GP, so we could maybe compare output later. For the Ht_Effective A+B solution, I can already confirm that I also exactly get:

For t=0.4 and z=10 mil Ht_Effective : 3.83032479436943449570731271765e-1705458

For the epsilon and C-errors, I still struggle however. With your Python-code, do you get a strong match with the Ht_AFE_ABC results? Would you mind posting for t=0.4 and z=10 mil the Ht_Effective, epsilon and C-error that you get?

Many thanks!

mariotrevi commented 6 years ago


I'm migrating to a new quad-core computer: the old one has an unreliable hard drive. It will take me a little while getting set up with Python, git, and PARI/gp  ...

David (mariotrevi)


km-git-acc commented 6 years ago

@rudolph-git-acc Thanks. This is the detailed output for z=10 mil and t=0.4

Ht_Effective 3.83032479436943449570731271764e-1705458
eps_err 8.14818570785627419909480112894e-1705464
C 8.43123750102841233858802560668e-1705462
|eps_err/H| 0.00000212728323191654185112039120828
|C/H| 0.000220118082764738519705948169394
(|eps_err|+|C|)/|H| 0.000222245365996655061557068560602528

Ht_AFE_ABC 3.83098636939069809505839426053124e-1705458
|C| from A+B-C 6.60164954687343598277159633996e-1705462

In the Ht_effective estimates, C dominates eps_err

The match between Ht_Effective and the ABC estimate is good. The 'C' terms in the two seem to be close but different.

km-git-acc commented 6 years ago

@mariotrevi

I too had an issue the last few days..but with my monitor screen which I just got replaced :)

mariotrevi commented 6 years ago

@KM,


I see.

The hard drive of the old computer wouldn't boot CentOS 6, so I did an install on a USB key. After 14 days, the hard drive is bootable again. But I figure the HD problem could happen again; so I'm getting set up on a new computer and gradually installing software...


km-git-acc commented 6 years ago

@sujitnair I added a lot of spaces, and also input output descriptions to some of the newer functions. I think we are now ready for merging. Please check.

rudolph-git-acc commented 6 years ago

@KM,

Thanks for your response. Much appreciated. Good luck with both your hardware issues. These things can be really cumbersome!

I can reproduce Ht_AFE_ABC exactly and am getting very close on the eps_err and C-(vwf) error of the Ht_Effective one. I am using the standard intnum function of pari/gp to evaluate the vwf-integral (ranging from -6 to 6) and I suspect that the (small) difference is somewhere in there. Could you therefore maybe also provide me with the v, w and f-values for the same s, t and for a sigma of, say, 2 and -2?

Many thanks!

km-git-acc commented 6 years ago

@rudolph-git-acc

t=0.4, z=10**7, s = 0.5 + i*(5 mil)

With sigma = -2:
v 1.00177157663460288183309574497609
w 1.00000001666686176250322018247189
f 0.000000146064201296204703410811083149096

With sigma = 2:
v 1.03632382202427253932784839833271
w 1.00000001666680842916564694320901
f 0.00321727813369661576542205738781236

Also, I have used an EM summation to approximate the integral, with h=0.01 for now. Also, there is a division by 8 at the end to get the main estimate and errors for H, if not already accounted for.
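The step-sum approximation described above can be sketched as follows. This is a generic midpoint sum with a Gaussian integrand chosen purely for illustration; the repo's actual vwf_err quadrature, limits, and integrand differ.

```python
import math

def integral_sum(f, lim, h):
    """Approximate the integral of f over [-lim, lim] by a simple
    midpoint step sum with step size h (sketch of the h-step
    summation discussed above)."""
    n = int(round(2 * lim / h))
    return sum(f(-lim + (k + 0.5) * h) for k in range(n)) * h

# Illustration: the Gaussian integral over [-6, 6] is very close to sqrt(pi)
approx = integral_sum(lambda x: math.exp(-x * x), 6, 0.01)
```

Shrinking h or widening lim trades speed for accuracy, which is exactly the (lim, h) balance discussed later in this thread.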

rudolph-git-acc commented 6 years ago

@km-git-acc

Many thanks for your help. I have got the code working now and get (to the 6th digit) the same results as you provided!

A few observations:

1) I went back and forth a lot between Terry's Wiki-formulae, your code and my pari/gp code. I can reassure you that your code is very robust. Each time I euphorically thought I had spotted an error, it literally always sat in my own code in the end... :-)
2) For the eps_err you used the zeta-based bound (3.8). I did the same, but also tried equation (3.6) to check whether it would maybe improve the error estimate at lower x. However, I did not get any better outcomes (need more testing).
3) The vwf-integral works fine and indeed converges quite rapidly to 1 for increasing x. I did however note that when I increase the limits of the integral (I have now used -6 to 6), there is an upwards drift in the 4th digit of the overall outcome.
4) Since I didn't get a perfect match on all the digits of the numbers you provided, I ran a comparison of some numbers between pari/gp, Mathematica and mpmath (see below for the f-value). Pari/gp and Mathematica always seem to agree, so could there maybe be a precision setting in mpmath to improve on?

With sigma = -2, f-value:
0.000000146064201296204703410811083149096 (mpmath)
0.000000146064201296204707464907066962337 (pari/gp)
0.000000146064201296204707464907066962337 (Mathematica)

With sigma = 2, f-value:
0.00321727813369661576542205738781236 (mpmath)
0.00321727813369661585471946390413825 (pari/gp)
0.00321727813369661585471946390413825 (Mathematica)

I do get similar errors on the other numbers, however the diverging digit seems to differ, e.g. for Ht_AFE_ABC:
3.83098636939069809505839426053124e-1705458 (mpmath)
3.830986369390698640627135966374674e-1705458 (pari/gp)

km-git-acc commented 6 years ago

@rudolph-git-acc

Thanks. Getting a match with Mathematica is definitely a plus, so quite likely your numbers are more accurate.

I increased the precision in mpmath to 40 from 30 but did not notice a difference.

sigma = 2
0.00321727813369661576542205738781236 (dps=30)
0.00321727813369661576542205738781236955673406 (dps=40)

sigma = -2
0.000000146064201296204703410811083149096 (dps=30)
0.0000001460642012962047034108110831490856319810831 (dps=40)

It's difficult to understand where the difference might be arising from. Possibly internal to the mpmath library. Interestingly, I noticed when I change the s and t values to mp complex (which is the type they exist in while computing H_t), the f value changes slightly.

0.003217278133696616770017880696478710997410034

Part of the reason could be what happens to the inputs after getting changed to mp complex:

t = 0.4
t = mp.mpc(t)
t
mpc(real='0.4000000000000000222044604925031308084726334', imag='0.0')

After several decimal digits, some phantom digits get added, which may influence the less significant digits in computations. That said, mpmath has worked out well so far, since the first 12-15 significant digits are generally correct.

rudolph-git-acc commented 6 years ago

P.S. Wondered about this piece in your definition of wterm3:

(...)nonnegative((T0dash/2.0) * mp.atan(sigma/T0dash) - sigma/2.0)(...)

With x = (T0dash/2.0) * mp.atan(sigma/T0dash) - sigma/2.0, it currently adds the term when x >= 0 and adds 0 when x < 0. However, in Terry's formula for w it seems that the term should only be counted when sigma < 0 (not whenever x >= 0). Or do I misread the formula?

rudolph-git-acc commented 6 years ago

@km-git-acc

It could indeed be the change to mp complex. I have also tried the f(s) formula in Maple today and do get exactly the same results as for pari/gp and Mathematica.

Have you tried increasing the mp.prec parameter?

mp.prec = 53 [default: 53] mp.dps = 15 [default: 15]

"The term prec denotes the binary precision (measured in bits) while dps (short for decimal places) is the decimal precision. Binary and decimal precision are related roughly according to the formula prec = 3.33*dps. For example, it takes a precision of roughly 333 bits to hold an approximation of pi that is accurate to 100 decimal places (actually slightly more than 333 bits is used)."

km-git-acc commented 6 years ago

@rudolph-git-acc

The non-negative thing is the same thing in practice. When sigma < 0, (T0dash/2.0) * mp.atan(sigma/T0dash) - sigma/2.0 goes positive and should be counted, while for sigma > 0 it is negative and shouldn't be counted, since we want a conservative error estimate.

The x in non-negative is a dummy function variable and not related to the x in z=x+iy. I thought it's better to create a generic non-negative function, rather than one specific to sigma and T0dash.
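A generic clamp of that kind can be as simple as the following (a sketch; the actual helper in mputility may be written differently, e.g. returning an mpmath type):

```python
def nonnegative(x):
    """Keep x when x >= 0, otherwise return 0.

    Generic clamp used so that only sign-appropriate terms contribute
    to a conservative error bound; x is a dummy variable, unrelated to
    the x in z = x + iy.
    """
    return max(x, 0)
```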

mp.prec and mp.dps are tied at the waist; changing one changes the other:

mp.dps = 40 -> mp.prec = 136
mp.prec = 100 -> mp.dps = 29

rudolph-git-acc commented 6 years ago

@km-git-acc

Got it. I had already understood the x was just a dummy variable.

You are also right that the prec and dps are directly connected. I searched the web a bit about what could cause these strange phantom bits (I get them as well in mpmath in Sagemath), however have not yet found anything useful. Will keep digging.

rudolph-git-acc commented 6 years ago

@km-git-acc

Could the following maybe be the issue? Whilst mpmath has excellent floating point arithmetic, Python only stores numbers correctly to about 16 or 17 digits. In the link below, under "1.14.3. String Formats for Float Precision", this is explained with some warnings.

https://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/float.html

Since you use Python variables to store output from mpmath arithmetic functions, could that maybe explain why numbers get chopped off and the phantom digits emerge exactly after 16 digits?
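The 16-17 digit cutoff is easy to see with plain floats, no mpmath needed (a stdlib sketch):

```python
# An IEEE-754 double carries only ~15-17 significant decimal digits;
# printing 0.4 past that point exposes the binary representation error,
# i.e. the "phantom digits" discussed in this thread.
x = 0.4
print(f"{x:.20f}")  # digits beyond the first ~17 are representation noise
```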

km-git-acc commented 6 years ago

@rudolph-git-acc This could possibly be the cause.

t = mp.mpc('0.400000000000000000000000000000000000000000')
t
(0.4 + 0.0j)

t = 0.4
t = mp.mpc(t)
t
(0.4000000000000000222044604925 + 0.0j)

On declaring t the proper way, the phantom digits go away. I will look into this further. It may not be too difficult to use mpf and mpc numbers from the very beginning of any exercise. Also, just added you as a collaborator. Also @mariotrevi: since both you and Rudolph use Pari/GP and it handles precision better than Python, it may be a good idea at some point to share your code in a repo or a folder. Will also start reading up on it.
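The fix above can be boiled down to constructing mpmath numbers from strings rather than from Python floats; a minimal sketch:

```python
from mpmath import mp

mp.dps = 30  # work at 30 significant digits

t_float = mp.mpf(0.4)     # via a Python float: inherits the 53-bit double's error
t_string = mp.mpf('0.4')  # via a string: correctly rounded at the working precision

# t_float carries the phantom digits 0.4000000000000000222...,
# t_string does not, so the two values differ at dps=30.
assert t_float != t_string
```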

km-git-acc commented 6 years ago

@rudolph-git-acc

Indeed. On calculating f using mpmath numbers, the match is much better but still not exact.

z = mp.power(10, 7)
s = 0.5 * (1 + 1j * z)
t = mp.mpf('0.4')
s, t = mp.mpc(s), mp.mpc(t)
f(2, s, t).real

With sigma = 2, f-value:
0.00321727813369661576542205738781236 (mpmath)
0.00321727813369661585471946390414151 (mpmath with less phantom)
0.00321727813369661585471946390413825 (pari/gp)
0.00321727813369661585471946390413825 (Mathematica)

It may be much more difficult to eliminate the rest of the difference, since all the formulas and sub formulas in Ht_Effective will have to be changed accordingly. I will try that out later.

rudolph-git-acc commented 6 years ago

@km-git-acc

I actually believe your (mpmath with less phantom) number is correct, since you used mp.dps=30 digits. I used 60 digits and then truncated it (sorry for not making this clear)!

Here are my numbers at 60 digits, so you can compare:

z=10000000, t=0.4

Ht AB: 3.83032620444064633903072346612967544654930565264813659295612 E-1705458

|Ht ABC|: 3.83098636939069864062713596637467419401602127758348955614121 E-1705458

|Ht C|: 6.60164954687343600537701592038229482032014739419468769987764 E-1705462

Ht effective including all errors: 3.83117599997699425576516008990441339921369771588905458931238 E-1705458

Ht effective AB: 3.83032479436943424241473809816502740392587397148294232939304 E-1705458

|Ht effective eps_err|: 8.14818570785627302371245049010916138459840742142898385080134 E-1705464

Ht effective C(vwf)_err: 8.43057421852157077398279288895886126439145998690830935482596 E-1705462

v -2,2 1.00177157663460288160253745357974340501302024752386448243361 1.03632382202427253731192321674121218443061119603352329239458

w -2,2 1.00000001666686176250322018247156393091777811440933648671753 1.00000001666680842916564694320886232967274725807315921022116

f -2,2 1.46064201296204707464907066962337983365453449720650304728128 E-7 0.00321727813369661585471946390413825634986677983059173317822488

vwf-integral 1.00232598488152295472939062853471650815436374127261522759829

km-git-acc commented 6 years ago

@rudolph-git-acc

I got mixed results. Some of the estimates like Ht_ABC, Ht_AB, Ht_C, Ht_effective match to more than 50-55 digits, f matches exactly, while others diverge to different degrees. Also, the vwf error and C(vwf) estimates depend on a lot of other things like limits used, method of integration used, etc.

v -2,2
1.00177157663460288160253745357974340501302024752386448243361 (Pari/GP)
1.00177157663460288183309574343021598849281686423579964688398 (mpmath)
1.03632382202427253731192321674121218443061119603352329239458 (Pari/GP)
1.03632382202427253932784836665635602878617282837259489458642 (mpmath)

w -2,2
1.00000001666686176250322018247156393091777811440933648671753 (Pari/GP)
1.0000000166668617625032201824716157413365517913592426007911 (mpmath)
1.00000001666680842916564694320886232967274725807315921022116 (Pari/GP)
1.00000001666680842916564694320891414009152093225984281608157 (mpmath)

f -2,2
1.46064201296204707464907066962337983365453449720650304728128 E-7 (Pari/GP)
1.46064201296204707464907066962337983365453449720650304728128 E-7 (mpmath)
0.00321727813369661585471946390413825634986677983059173317822488 (Pari/GP)
0.00321727813369661585471946390413825634986677983059173317822488 (mpmath)

vwf integral
1.00232598488152295472939062853471650815436374127261522759829 (Pari/GP)
1.0024048437201610270348438404185064546556155487052131884193 (mpmath, lim=-10 to 10, h=0.01)

Ht_Effective AB
3.83032479436943424241473809816502740392587397148294232939304 E-1705458 (Pari/GP)
3.83032479436943424241473809816502740392587397148294231078561e-1705458 (mpmath)

|Ht effective eps_err|
8.14818570785627302371245049010916138459840742142898385080134 E-1705464 (Pari/GP)
8.14818570785627296581455875360230093066019944819328004987532e-1705464 (mpmath)

Ht effective C(vwf)_err
8.43057421852157077398279288895886126439145998690830935482596 E-1705462 (Pari/GP)
8.43123750102841236686903117911826061940723625822972468658956e-1705462 (mpmath)

Ht AB
3.83032620444064633903072346612967544654930565264813659295612 E-1705458 (Pari/GP)
3.83032620444064633903072346612967544654930565264813659215282e-1705458 (mpmath)

|Ht ABC|
3.83098636939069864062713596637467419401602127758348955614121 E-1705458 (Pari/GP)
3.83098636939069864062713596637467419401602127758348955533735e-1705458 (mpmath)

|Ht C|
6.60164954687343600537701592038229482032014739419468769987764 E-1705462 (Pari/GP)
6.60164954687343600537701592038229482032014739419468769440588e-1705462 (mpmath)

As you mentioned earlier, even a slight interaction between Python number types and mp number types can start causing a divergence, so one has to be extremely careful at all steps. I will have to check what I did with f to get exact results.

rudolph-git-acc commented 6 years ago

@km-git-acc

I believe a divergence at about 16/17 digits is an indication that a Python variable is probably causing the issue. This seems to be the case for v and the |Ht effective eps_err|.

You are right about differences in the vwf-integral since we do use different limits and methods of integration.

For a slight divergence at the tail end like in the Ht AB, ABC and C formulae, I believe these will probably disappear (or shift further to the right) when we increase both our dps settings to e.g. 90. Pari/Gp results below:

Ht AB: 3.83032620444064633903072346612967544654930565264813659295612223839875237406805301937485795 E-1705458 Pari/GP

|Ht ABC|: 3.83098636939069864062713596637467419401602127758348955614120524104033465715296493348091050 E-1705458 Pari/Gp

|Ht C|: 6.60164954687343600537701592038229482032014739419468769987763652363599618363748659099702415 E-1705462 Pari/Gp

rudolph-git-acc commented 6 years ago

The difference in w looks strange and occurs around the 30th digit. Just to be sure about the Pari/Gp outcome, I reprogrammed w in Maple and the results correspond with Pari/Gp:

sigma = -2:
1.0000000166668617625032201824716157413365517913592426007911 (mpmath)
1.00000001666686176250322018247156393091777811440933648671753 (Maple)
1.00000001666686176250322018247156393091777811440933648671753 (Pari/Gp)

sigma = 2:
1.00000001666680842916564694320891414009152093225984281608157 (mpmath)
1.00000001666680842916564694320886232967274725807315921022116 (Pari/GP)
1.00000001666680842916564694320886232967274725807315921022116 (Maple)

Here is also Ht effective AB at 90 digits for comparison: 3.83032479436943424241473809816502740392587397148294232939304170442267113296119588671994114 E-1705458

P.S. I have also tried to program the triangle inequality (the one for which you got 0.976 at x=pi*10^6), but struggle with that absolute multiplication factor for each sum. Is it indeed the case that 1_{n <= N} means it should be 1 when n <= N and 0 otherwise? Similarly for 1_{2|n}, does this mean that it is only 1 when 2 divides n? Grateful for your steer on this piece.

km-git-acc commented 6 years ago

@rudolph-git-acc Yeah, that's the correct interpretation. You can also check the deltaN and divdelta functions in the adjusted_AB_estimates branch (at python/research/ab_analysis.py)

There is also a function there called xmultibound (for lack of a better name) which evaluates the inequality with various prime like factors. Right now it divides the factors at the end. If you remove that division operation, you should get finalsum2 as 0.976 at pi*10^6.
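For reference, the two indicators can be sketched like this. The names follow the thread's deltaN and divdelta, but the exact signatures in python/research/ab_analysis.py are assumed, not copied:

```python
def deltaN(n, N):
    """Indicator 1_{n <= N}: 1 when n <= N, else 0."""
    return 1 if n <= N else 0

def divdelta(n, d):
    """Indicator 1_{d | n}: 1 when d divides n, else 0
    (so divdelta(n, 2) is 1 exactly for even n)."""
    return 1 if n % d == 0 else 0
```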

km-git-acc commented 6 years ago

@rudolph-git-acc

I just put in a new vwf function (as seen in the latest commit) and have checked its values against the older one (although the behavior at the less significant digits may change). This is primarily for speed: for t=0.4 and z=10 mil, Ht_Effective finishes computing in approx 1 sec, compared to 1.5 sec earlier.

I had noticed that the error computation takes much longer than the main estimate. The main bottleneck is the sigma < 0 part in the v function, which computes a long power*gamma sum, and this happens for every step in the integral. But there is a finite number of possible sums that can come out, and those can be precalculated. The older vwf function is still kept if needed for a deeper analysis.
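The precalculation idea can be sketched with a memoized helper. The function below is a hypothetical stand-in (a simple power/gamma partial sum, which happens to approximate e at x=1), not the actual sum inside v; the point is only that a cache makes repeated evaluations across integration steps nearly free:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def power_gamma_sum(n_terms, x):
    # Hypothetical stand-in for the long power*gamma sum used by v for
    # sigma < 0. With the cache, each distinct argument pair is computed
    # once and then reused at every step of the integral.
    return sum(x ** k / math.gamma(k + 1) for k in range(n_terms))
```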

From the perspective of large-scale calculations of H_t, if we skip the error calculation in Ht_eff and just calculate the main estimate, it computes in just 0.3 seconds (so 3 times faster, and 5 times faster than earlier) for z=10 mil and t=0.4. But skipping the error part while trying to establish bounds, e.g. for |Ht-A-B|, may not be a good idea. If we have a rigorous bound on the error part, then maybe we can skip it for larger values of z, where the error is anyway small. If we check Terry's comment here https://terrytao.wordpress.com/2018/02/12/polymath15-third-thread-computing-and-approximating-h_t/#comment-492932, he mentions that err3 (the C term) is 1/x^((1+y)/4) compared to B0. Is this a usable bound?

Update: On the other hand, if we do calculate the C error but set the limits and h in vwf_err lower and coarser, we can still get a good speed boost. For example, changing (lim,h) in vwf_err from (10,0.01) to (5,0.05) results in a 0.35 second evaluation time, which is almost there compared to the 0.3 second estimate. The key question then is what value of (lim,h) achieves a good balance of accuracy and speed. Even with this coarser evaluation of vwf_err, the % error is unchanged in its first 3-4 significant digits, and thus the actual error in its first 5-6 significant digits.

rudolph-git-acc commented 6 years ago

@km-git-acc

Thanks for sharing the deltaN code. I finally found the issue and it was the missing 1/N^(0.4) factor that sits before the second sum (it is actually not included in Terry's latex-display).

I also did some timing on the various functions in Pari/Gp and got the following results for t=0.4, z=10 mil and the Ht_Effective function including all error terms at 60 digits accuracy. I used the Sagemath "timeit" function:

25 loops, best of 3: 31.9 ms per loop - A(s)
625 loops, best of 3: 566 µs per loop - eps_err(s)
625 loops, best of 3: 8.11 µs per loop - v(2,s)
625 loops, best of 3: 15.7 µs per loop - v(-2,s)
625 loops, best of 3: 24.1 µs per loop - w(2,s)
625 loops, best of 3: 24.6 µs per loop - w(-2,s)
625 loops, best of 3: 15.3 µs per loop - f(2,s)
625 loops, best of 3: 15.8 µs per loop - f(-2,s)
25 loops, best of 3: 23.4 ms per loop - vwf(s) (integral with limits -6 to 6)
25 loops, best of 3: 23.5 ms per loop - C(s)
5 loops, best of 3: 87.7 ms per loop - Total run

As you already noted, the v-function takes twice as long for negative sigma because of the long power*gamma sum. I like your idea of using a pre-calculated cache to mitigate this.

Pari/Gp already performs pretty fast (also its "intnum" numeric integral function). However, the vwf-integral does feel quite cumbersome from a processing perspective, and my first thought was to just set it to 1 (its value at infinity). But then I realised that we have to do the massive computations below, say, x=10^7 or x=10^6 (depending on where the bound lands) to ensure a large area is zero-free for all x (and y) below that. Since the error terms become more influential for lower x, they are important, and every microsecond gained on their calculation will help!
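Comparable per-function timings can be taken on the Python side with the stdlib timeit module (a generic sketch, using a stand-in expression rather than the repo's functions, and not the Sagemath "timeit" command used above):

```python
import timeit

# Time a cheap stand-in expression the same way the per-function loops
# above were measured; in practice the statement would call e.g. the
# repo's v, w, f or vwf functions.
n = 1000
total = timeit.timeit("math.gamma(10.5)", setup="import math", number=n)
per_call_us = total / n * 1e6  # average microseconds per call
print(f"{per_call_us:.2f} µs per loop")
```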

mariotrevi commented 6 years ago

@KM, @rudolph-git-acc


Yes. I've used Pari/gp quite a lot interactively. Typically, I save my code or constant and function definitions in a text file for later use.

I could run Rudolph's code and vice versa. Pari/gp also allows writing output or computations to a file.

There's a quick-reference to Pari/gp functions:

https://pari.math.u-bordeaux.fr/pub/pari/manuals/2.9.0/refcard.pdf

and a manual:

https://pari.math.u-bordeaux.fr/pub/pari/manuals/2.9.0/users.pdf e.g. Chapter 2: The gp calculator.

When running the gp calculator, ? [ENTER]

gives a list of topics for help with.

===

We might have a folder on the git repo with files of code, code documentation, sample output, etc.

I can do "git clone", but that's about it.

I've found PARI/gp to be very powerful, with a C-like syntax and keywords.

I saw Rudolph's code in comments on Terry's blog, although I haven't experimented with it.

km-git-acc commented 6 years ago

@everyone.. I am merging this branch into the main branch, since there is a lot of new stuff in it and new participants don't find it easy to locate non-master branches. Any bugs we discover will have to be fixed later.

sujitnair commented 6 years ago

👍

km-git-acc commented 6 years ago

@mariotrevi To be fair, even I don't use the git command line consistently. Sometimes I just copy paste working code into the browser editor.

rudolph-git-acc commented 6 years ago

@km-git-acc

It took me quite a while to find this thread again after the merge ;-)

You probably have spotted this yourself already, but just to be sure that you also caught Terry's comment about a small sign error that he fixed:

    T0dash = T0 - mp.pi() * t / 8.0  should be   **T0dash = T0 + mp.pi() * t / 8.0**
    Tdash = T - mp.pi() * t / 8.0 should be **Tdash = T + mp.pi() * t / 8.0**

I believe it is in lines 316, 328, 329, 371 and 396 of mputility.py.

km-git-acc commented 6 years ago

Had missed this. Updated mputility.py