km-git-acc / dbn_upper_bound

Computational effort to upper bound the de Bruijn-Newman constant as part of a Polymath project

Potential divide and conquer strategy? #50

Open km-git-acc opened 6 years ago

km-git-acc commented 6 years ago

@rudolph-git-acc and @mariotrevi

A few days back, I noticed in the fourth thread at Prof. Tao's blog that you were focused on the Ceff term and on improving Ht precision at smaller values of x, while I have been working more on the medium values of x.

This could be a good divide and conquer strategy in light of the fifth thread. Do let me know if there are any thoughts on this, and if we can help each other more.

rudolph-git-acc commented 6 years ago

@km-git-acc

Just saw it. It will cover the x>20 domain! Will try the F-part out as well.

rudolph-git-acc commented 6 years ago

@km-git-acc

Preliminary results with t=y=0.4 and integral limit 6, for 1/8*sum(|F'(s,n)|) and its bound respectively:

x=20 0.00193393956090491570301473275433 0.0124093570563450402996799361614

x=200 8.36417131631616309205902239861 E-32 3.02262609905847635876162457407 E-31

x=2000 1.58772961653514206399706398636 E-336 9.91713765273269691211128328421 E-336

x=20000 3.47008792201059759088864046895 E-3404 6.21034183832384995158736944793 E-3403

km-git-acc commented 6 years ago

@rudolph-git-acc Our numbers aren't matching yet. Attaching my parigp formulas for comparison. H_t_derivative bound.txt

x, 1/8*sum(|F'(s,n)|), 1/8*sum(|F'(1-s,n)|), col1+col2
20, 0.0617935217127134467917619842663, 0.0674971398183243884034531632167, 0.129290661531037835195215147483
200, 3.30413977387079295900771672478 E-30, 7.67957965105973780026451901666 E-30, 1.09837194249305307592722357414 E-29
2000, 2.70804042738216785634062762825 E-334, 1.62701216731543437800246920195 E-333, 1.89781621005365116363653196478 E-333

rudolph-git-acc commented 6 years ago

@km-git-acc

Have tried to reconcile the numbers and although I am getting pretty close, it isn't exact yet. Given the latest approach to bound Ht and its derivative that was just posted by Prof. Tao, is the current approach we were working on now obsolete?

km-git-acc commented 6 years ago

@rudolph-git-acc Possibly the new approach works even for very small x and hence should be used. The advantage of the I_theta function has always been that it is less oscillatory and hence easier to integrate, with a suitable choice of theta. But earlier we didn't have its derivative or derivative bound, which we have now.

km-git-acc commented 6 years ago

@rudolph-git-acc

I went through the wiki page. It seems very reasonable values of n0 and X will meet the conditions (for eg. n0 -> 1, X=2), although maybe we should check with higher values as well to see where the tail estimates become negligible.

Is my interpretation correct that for H_t there will be 4*n0 integral estimates required, and 4 + 4*n0 tail estimates (similarly for H'_t), with the first 4 tail estimates coming from the tail of the sum and the remaining 4*n0 corresponding to the individual integrals?

EDIT: I am always a bit nervous while using oscillatory integrands. Maybe as a sanity check we should compare our H_t estimates with the effective A+B-C, and then the H'_t estimates with the Newton quotient of H_t or A+B-C.
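A minimal, self-contained gp sketch of that Newton-quotient sanity check; the toy function g below is purely illustrative (any of our H_t evaluators would be plugged in instead):

\\ central-difference Newton quotient; error is O(eps^2) for smooth functions
default(realprecision, 38)
eps = 1e-8;
nquot(f, x) = (f(x + eps) - f(x - eps))/(2*eps);
g(x) = exp(-x^2/4)*cos(3*x);                     \\ toy stand-in with a known derivative
gd(x) = -x/2*exp(-x^2/4)*cos(3*x) - 3*exp(-x^2/4)*sin(3*x);
print(nquot(g, 1.3), "   vs   ", gd(1.3))        \\ the two should agree to many digits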

rudolph-git-acc commented 6 years ago

@km-git-acc

Also started to code the new approach, however so far with mixed results. It works pretty well for low x; however, I still need pretty high dps settings to cover x > 200, and the tail estimates quickly become very small. Need to explore and test it a bit further before I commence working on the derivative version. If you have some test values to share, that would be great!

The way I understand the tail estimates is that there is a tail bound for the integral (3) covering (X-cutoff -> infinity, n) and one for the series 2x(4) covering (n0-cutoff -> infinity, X-cutoff=0). Each of these bounds has four different values for alpha, i.e. 9-y, 9+y, 5+y, 5-y.

I then just added the integral bound to the definition of I(b, beta) (which runs from I*theta to I*theta + X-cutoff) and re-derived the required 'a' as i*(b-x). For the series bound, we also need a separate bound for each alpha, each of which should be added as an extra term to the series (which runs from n=1 to n0).

EDIT: I only just now spotted the subtle difference between an 'a' and an 'alpha' in the equations...:-) here is my current code: Code for Ht estimates.txt The integral bound is very small and probably not right yet.

Fully agree on running the proposed sanity checks once it all works.

P.S. I have now also produced the N=151 to 300 mesh in two batches (without the ceil and floor). These could maybe be used later for a final cross check with your data.

N=151-250: https://drive.google.com/open?id=1UFDBGsyybCUKD3owHBmTOoEl8m7jYDrD N=251-300: https://drive.google.com/open?id=1ks3OoDSVTVDzKI6dD94Yuvp1v-Fka0Wx

km-git-acc commented 6 years ago

@rudolph-git-acc Attaching my parigp script so far. Numbers not close to matching yet.

H_t_derivative bound.txt

I will process the mesh data you have generated, but the mesh points are all different, since I used a different starting point (integer xN) for all N. These could possibly be posted as additional links.

rudolph-git-acc commented 6 years ago

@km-git-acc

Corrected a few errors in my code and the sum tails fully match now. Integral tail next.

km-git-acc commented 6 years ago

@rudolph-git-acc

I was able to match my main estimate with both the earlier integration based on the phi function and, sufficiently well, with A+B-C. But it only works for x<=49.95. As soon as I take it slightly above that, both integrals start diverging from A+B-C (though they remain equal to each other)!

rudolph-git-acc commented 6 years ago

@km-git-acc

What happens when you increase the dps (\p) to a much higher precision? Does it then still match for higher values?

I noted that your Integral bound is also extremely small and hardly does anything for the result. Is that correct?

km-git-acc commented 6 years ago

@rudolph-git-acc Increasing the dps doesn't change the divergence. The significant digits remain the same. There is a parameter fl within the integration procedure; any idea what that does?

Also, there is a one-line change in the file I attached earlier:

main_est2 = sum(n=1, n0-1, 3*Pi*(n^2)*(I_t_th(t,th,X,x-(5-y)*I,Pi*n^2) + conj(I_t_th(t,th,X,x-(5+y)*I,Pi*n^2))));

instead of

main_est2 = sum(n=1, n0-1, 3*Pi*(n^2)*(I_t_th(t,th,X,x-(5-y)*I,Pi*n^2) - conj(I_t_th(t,th,X,x-(5+y)*I,Pi*n^2))));

(essentially, the sums had to be added but I subtracted them accidentally), after which the estimate started matching the earlier integration and A+B-C, at least for x<=49.95.

The integral bound is small because I chose n0=4 when even n0=2 would have been OK. It allows us to focus just on the main term at the cost of some extra computation.

km-git-acc commented 6 years ago

@rudolph-git-acc I think this page will be helpful. https://pari.math.u-bordeaux.fr/dochtml/html/Sums__products__integrals_and_similar_functions.html Also will check whether the same behavior is exhibited by mpmath

rudolph-git-acc commented 6 years ago

@km-git-acc

I do get a reasonable match for x>49.95 and the integral isn't diverging in this domain. The results improve when I increase the dps. However, I am actually now more concerned about the tail bounds (both for the series and the integral) being so small compared to the exact Ht value that we could just as well ignore them completely (especially the integral tail). I also noticed that when I increase X > 12 (n0 can be increased without a problem and improves the result as well), I end up in an exp-overflow and a warning to increase the stack size of pari.

km-git-acc commented 6 years ago

@rudolph-git-acc

There is an exp(4*X) factor which gets very large, so X has to be kept in single digits, preferably on the lower side. I have been able to get the correct value of the Ht derivative (since it matches the Newton quotient). But there are still problems at larger values of x.
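For reference, a quick gp check of how fast that factor grows; the second column shows why the integrand's exp(-Pi*exp(4*u)) term, and hence the working precision, gets out of hand once X leaves single digits:

default(realprecision, 19)
for(X = 5, 12, print(X, "   exp(4*X) = ", exp(4*X), "   log10(exp(-Pi*exp(4*X))) = ", -Pi*exp(4*X)/log(10)))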

Attaching my updated parigp script (the tail bounds of the derivative are not correct) H_t_derivative bound.txt

These are some sample values at y=t=0.4 (all normalized by B0_effective)

x=25:
Ht     = 0.687520266523989949821594534327 + 0.650505854534226840653931616453*I
A+B-C  = 0.676247027447357880552684258870 + 0.629997592365837099509958528471*I
ddx_Ht = -0.357033773640538093924285657523 - 0.400827877698506125034250863669*I
nquot  = -0.357027781561008089543217111893 - 0.400820460730628493847149789085*I

x=47:
Ht     = 1.17323863487024910463497231943 + 0.300011603296324719365254084656*I
A+B-C  = 1.16592918834450849473982297002 + 0.306086323449911189553904914259*I
ddx_Ht = -0.583420861890336402670422683562 - 0.240920194017293972542830084177*I
nquot  = -0.583416671748458690808520245827 - 0.240916293043528705283267945654*I

now diverging, x=51:
Ht     = 0.339195983967800515387991633235 - 0.727326822862516078273306833300*I
A+B-C  = 0.443149647303681401640744759852 - 1.75280992654907498558425330729*I
ddx_Ht = -0.0544928824235872501091741078525 - 0.379482857523786127066892801851*I
nquot  = -0.0544947263671841196784266841462 - 0.379464172858153988457833111120*I

now again converging, x=123:
Ht     = 0.287191994398419095630615049713 - 1.32023629030506653523486361641*I
A+B-C  = 0.283119398987509824094798418048 - 1.32704526966061161132857450775*I
ddx_Ht = -0.200163062207940406294984362682 - 0.529709317537846765032058589306*I
nquot  = -0.200193677067724534068572272475 - 0.529772817501551094222386794222*I

now seriously diverging (and appearance of large values), x=235:
Ht     = -48.1104419787169086639634127566 + 9.28226778042892755922701621436*I
A+B-C  = 2.51150331948280796772452271349 + 0.0356656583982343479014232224636*I
ddx_Ht = 46.5186425890894576560866441298 + 3.13153474486257566045046260704*I
nquot  = 47.6517998395821375665044490518 + 2.87125577122055472704954388874*I

km-git-acc commented 6 years ago

@rudolph-git-acc Increasing the dps precision starts changing the Ht values but they do not match. E.g. changing the dps to 250 (showing only a few digits), x=235: Ht = 1.98553 + 0.01992*I, A+B-C = 2.51150 + 0.03566*I.

rudolph-git-acc commented 6 years ago

@km-git-acc

I will check as well. One thing I noted in prof. Tao's post is that he suggested the n0 and X cutoffs to be chosen as multiples of a certain factor depending on x. Maybe there is something in there that causes the divergence?

EDIT: try a dps of 800 in your example for x=235. It really needs to be much higher than the x-value. Does this make a difference?

km-git-acc commented 6 years ago

@rudolph-git-acc Well, I still got the same values. I tried using n0 = max(4, 3*floor(sqrt(x/(4*Pi)))) and X = log(x), but haven't seen much improvement. Also simplified and eliminated some redundant parts in my script. H_t_derivative bound.txt

Maybe things are working better on your side, but if we both face problems, I am wondering whether it is better to use A+B-C, its derivative, and the corresponding error estimate [H-(A+B-C)]/B0 for x=100 to 1500, leaving the integral approach only for the ultra-small values?

EDIT: Suppose as an experiment you evaluate (Ht using the integral)/B0 and (A+B-C)/B0 at the 1500 integer points 1 to 1500: in how many cases, and up to what height, would you say they are reasonably matching? I will set my expectations accordingly.
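A hedged gp sketch of that experiment, assuming the routines from our attached scripts (Ht_integral, abceff_x, B0_eff, as named later in this thread) are loaded:

countmatches(lo, hi, reldig) = {
  my(m = 0, a, b);
  for(x = lo, hi,
    a = Ht_integral(x)/B0_eff(x);
    b = abceff_x(x)/B0_eff(x);
    \\ count the points where the two normalized values agree to ~reldig digits
    if(abs(a - b) <= 10^(-reldig)*abs(b), m++));
  print(m, " of ", hi - lo + 1, " points match to ~", reldig, " digits");
}
\\ e.g. countmatches(1, 1500, 2)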

rudolph-git-acc commented 6 years ago

@km-git-acc

Below is the simple piece of code I used to compute the exact Ht(z) value in steps of 0.1 up to x=300. I have now expanded it to calculate the exact values of Ht'(z) and therefore could easily produce similar files for the derivative. Above x=300, I need to set the dps to very high numbers and the computation gets very slow (I could maybe stretch it to 500). However, user Anonymous has apparently already found a way to control the integral (other than Prof. Tao's method) and has calculated Ht(z) exactly up to x=1300 (so we are nearly there). He or she hasn't revealed, though, how it was done and which software was used. Therefore another way to cover x=0..1500 would be to just start computing H't(z) exactly in steps of 0.1 up to 300, publish the results and then ask Anonymous to complete the remainder? Then we can run the mesh and close it off. What do you think?

\\ ensure precision is set 3 to 4 times higher than the x-value
default(realprecision, 300)
intlim = 6
t = 0.4

\\ calculate the B0 normaliser
T(s) = imag(s)
ALF(s) = 1/(2*s) + 1/(s-1) + 1/2*log(s/(2*Pi))
H01(s) = (s/2)*(s-1)*Pi^(-s/2)*sqrt(2*Pi)*exp((s-1)/2*log(s/2) - s/2)
B0(s) = 1/8*exp(t/4*ALF(1-s)^2)*H01(1-s)

\\ evaluate the integral
OO(u) = 4*sum(n=1, intlim, (2*n^4*Pi^2*exp(9*u) - 3*n^2*Pi*exp(5*u))*exp(-n^2*Pi*exp(4*u)))
t1(z) = intnum(u = 0, intlim, exp(t*u^2)*OO(u)*exp(-2*I*z*u))
t2(z) = intnum(u = 0, intlim, exp(t*u^2)*OO(u)*exp(2*I*z*u))
ttt(z) = 1/8*(t1(z/2) + t2(z/2))
normint(z) = ttt(z)/B0(1/2 + z/2*I)

\\ take the derivative
ttd(z) = ttt'(z)
normind(z) = ttd(z)/B0(1/2 + z/2*I)

\\ generate output and round off at ~20 digits
print(normint(51 + 0.4*I)*precision(1., 10))
print(normind(51 + 0.4*I)*precision(1., 10))

0.3391959785062920948 - 0.7273267980079029375*I
-0.05449287721514581094 - 0.3794828592582988596*I

EDIT: P.S. Are you sure the nquot for x=235 is correct? I do get a very different value for the derivative at that point: Ht(235+0.4i) = 2.259885303055990897 + 0.027434060705810383774*I, Ht'(235+0.4i) = -1.0137557154299118760 - 0.3865983075381848008*I. EDIT: ignore this comment; I just saw from your code that 'nquot' is the Newton quotient based on the integral and not on the A+B-C_eff formula.

rudolph-git-acc commented 6 years ago

@km-git-acc

Got my code for the Ht and Ht' estimates working pretty well, although I still wonder what the contribution of the integral and the series tails is for the estimates.

Here it is (EDIT: made a small amendment to make inputting x easier):

Code.for.Ht.and.Ht-derivatiive.estimates.txt

I went through your code in detail, however have not yet been able to identify why it sometimes does converge and sometimes doesn't. There clearly is something very subtle going on. Will take another deep dive.

rudolph-git-acc commented 6 years ago

@km-git-acc

I believe I have found two errors, and the results of your code now look very good (note that you do also have to increase the dps for higher x-values). One reason for the observed divergence was a very subtle typo in the definition of the C-term in the ABC formula; the other was a double definition of the integral itself (I have commented out both lines, so you can see where the issue was).

KM code with errors fixed.txt

km-git-acc commented 6 years ago

@rudolph-git-acc Amazing. I was literally going mad figuring it out. The Ht and ABC estimates now match and indeed as x goes higher, higher precision allows the matching to continue.

I was able to match the values by keeping precision=x, so we may have some room there (although maybe we should keep it somewhat higher than x to be sure). But couldn't figure out a way to set the precision dynamically as the first line in a function. I tried the localprec procedure, but that didn't seem to work.

This ran in 13 minutes. x=1500:
Ht  = 0.7392035097990081081 + 0.6052210347079221874*I
ABC = 0.7390201991188104438 + 0.6003548563936863411*I

10-13 minutes per point with a mesh size of 0.1 (for, let's say, x between 1000 and 1500) is a scary prospect. I agree Anonymous either has serious computing power at his/her disposal or has figured out a good optimized tool/technique. Has he/she shared it yet?

I agree we can start with the mesh at least for smaller values of x. How do you want to divide it? Although I think we have the derivative but not the derivative upper bound for H't yet, so we will have to wait for that to make our mesh exercise rigorous. Maybe we should post on the blog that we can now compute Ht and the Ht derivative correctly, and then wait for Prof. Tao's reply.

km-git-acc commented 6 years ago

@rudolph-git-acc Another thing I noticed is that keeping n0 fixed (or maybe even X fixed, but we can't have X larger than a certain value), as we increase x the error increases exponentially faster than the main estimate. So what starts out as a 10^-40 type error at small x soon becomes 10^-5 and then starts competing with the main estimate. We may have to keep n0 dynamic, as k*log(x) or k*sqrt(x/(4*Pi)), as you mentioned earlier.

km-git-acc commented 6 years ago

@rudolph-git-acc I was going through the wiki again, and in the last section there is a discussion that if we fix theta (and not just n0 and X), we can get uniform estimates for the derivative over the entire interval, which would act as an upper bound. With the current definition of theta, as x increases, theta increases gradually and approaches Pi/8. Should we take theta as large as possible (but still smaller than Pi/8), and n0 and X as large as feasible, and then calculate the derivative bound? (Or the values of theta, n0, X which would lead to the most conservative bound.) Maybe a way to check would be whether ddx_bound/exp(-Pi*x/8) or ddx_bound/|B0eff| is almost constant or declining as x is varied.

rudolph-git-acc commented 6 years ago

@km-git-acc

Yes, good catch, that might indeed be a way to establish the derivative bound for the interval.

Found another, slightly concerning, issue for very small x. It seems that when x < 13 the integral rapidly diverges and I can't control it by changing the parameters n0 or X. It also doesn't seem to be caused by the tail terms in that range; it just seems to be the integral itself that 'explodes' in that domain (try f.i. x=1). Do you see the same behaviour?

Using the original integral for H_t, I have started to compute H'_t for x=0 to 300 in steps of 0.1 at a high accuracy of 40 digits (that I certainly have to reduce for higher x). The first batch for 0 to 100 is done (see attached). We could also try to establish a derivative bound from it and combined with the actual values of Ht then run a (non-adaptive) mesh. Based on this first mesh exercise we could then divide up the remaining calculations of H'_t and/or try to get help from user Anonymous for the higher range.

Ht-der values y=t=0.4 for x= 0 to 100.txt

km-git-acc commented 6 years ago

@rudolph-git-acc Great. I get the exploding behavior near x=10, although the integral (based on the I_t_th function, not the original integral) and A+B-C start diverging around x=13.

I seem to have found an unchanging bound using a formula on the wiki page I had overlooked, immediately after the line 'From the triangle inequality as before we have'. It gives a J bound for the integral between limits 0 and X, and hence for the main estimate of J. (Till now I had been using the J bound only for the integral with limits X to inf, while computing the error in the exact H't.)

Attaching the updated code, with a function ddx_Ht_bound and a helper function Jbound added at the top: code_with_ddx_bound.txt

The Jbound function calculates |J| without the exp(-theta*x) factor. Near x=20, I get the main est as 0.008371742821792787398709813917896330759048982733410446192511909215475718306135946363212467907261470100. It is very insensitive to n0 and X. As x increases, this quantity decreases rapidly. Since we are finally interested in |H't|/|B0|, I have calculated main_est*exp(-th*x)/abs(B0_eff). As x is varied, this quantity again decreases rapidly. At x=20 its value is around 6.85, which seems similar to the bounds obtained for much larger x during the AB exercise. At x=17 the value is around 33.

How would you interpret all this? Should we be using this derivative bound for x>=20 while calculating x_next in the adaptive mesh? With a fixed mesh size of 0.1, and assuming we want to keep c at least 0.05, |Ht|/|B0| would have to stay above ~0.74 at the mesh points.
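As a one-line gp sketch of the step being proposed (absf stands for |Ht|/|B0| at the current mesh point, db for the derivative bound, c for the clearance; all names here are illustrative, not from the attached scripts):

xnext(x, absf, db, c) = x + (absf - c)/db;
print(xnext(20, 0.9, 10, 0.05))   \\ 20.085; the gap shrinks as |f| approaches c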

rudolph-git-acc commented 6 years ago

@km-git-acc

Nice job on the unchanging bound! Such a bound could indeed work for our purpose; however, I am slightly nervous about the 'flaky' behaviour of this sum/integral. Not so much in the domain 20 to 500, but very much so for 0 < x < 20 and 500 < x < 1500. I therefore believe we need to first play back our current findings on the blog and get Prof. Tao's steer. I had actually just composed a post and was about to press 'comment' when I saw your note arrive. I'd basically like to share that we have both got a good fit for Ht and H't, but also a few initial observations on the data (precision needs to be high, tail estimates are low, X>9 induces an overflow, and x < 12 makes the integral diverge). I also plan to ask about the 0.1-step approach and the option to calculate H't for x=0..1600 as well.

Shall I just proceed and you can then add your most recent findings on the bound? I will hold off posting for the next 30 minutes (EU dinner time here :-) ) and await your response.

km-git-acc commented 6 years ago

@rudolph-git-acc Oh, sorry, should have seen this (I just commented on the blog). I agree with the observations you mentioned and they should be asked about on the blog.

On the 0<x<20 behavior... I am actually coming around to the view that all the exact integrals and approximations derived so far have their utility in some x range:

0 to 20 -> original integral
15 to 1000 (maybe 2000) -> new integral
1000 to 40k-50k -> A+B-C
40k-50k to large x -> A+B (although with some luck it did work for as low as x ~ 1500)

What do you think? I somehow think that for the x<20 range the original integral and its derivative are ideal, and if we had the error and derivative analysis for A+B-C, that would be awesome in the 1000 to 1500 range.

rudolph-git-acc commented 6 years ago

@km-git-acc

No worries. Will first read your post and build on it.

And yes, the approach of 'different formulae for different domains' makes a lot of sense to me. We could even create some deliberate overlaps across the cut-off points and then cross check the outcomes.

EDIT: have tried to post some additional comments, however seem to be blocked on the blog again. After I press 'comment' the post just disappears and nothing happens. Have tried it from different hardware and OS's but nothing helps. Will try again tomorrow...

km-git-acc commented 6 years ago

@rudolph-git-acc Maybe the problem is with your ISP. In any case maybe you should save your comments to avoid retyping.

I have corrected the ddxbound calculation. Attaching the updated code: code_with_corrected_ddx_bound.txt. It seems I was using an additional factor of u, which is now removed. Also, the cos(th) in the bound integral should ideally be cos(4*th).

This brings down the bound for |H't|/|B0| at x=20 to 3.57, but now it seems the bound increases as we let theta vary with x. If we set theta to the highest value by using arctan(9.4/1500), then the bound decreases with increasing x, but it starts at a high value of 684 for x=20 and even at x=400 stays around 22. From practical calculation at mesh points, it didn't seem the exact |H't|/|B0| is that high. I probably saw most values less than 1, and a few values close to 2. Please recheck things on your side.

EDIT: Also, should our main sums use n=1 to n0-1 or n=1 to n0? I think it's the former, as the tail formula shows n>=n0. Please confirm.

EDIT2: Well, n0 is an arbitrary number meeting certain conditions. So maybe it would be easier to have the main sums run from 1 to n0 (which also allows passing n0=1 to the function, something not feasible right now), and then pass n0+1 to the tail factor.

km-git-acc commented 6 years ago

@rudolph-git-acc I can cover the range from 150-300 for a start. I was thinking, can we do the small-x exercise with a progressive adaptive mesh in mind? I will assume c=0.05 and a derivative bound of 10 (which seems conservative enough), and start the exercise. If tomorrow it turns out the bound is lower, then no harm done, since we used a finer mesh. If the bound turns out to be larger, let's say by a factor of 2, we can also evaluate on a new mesh whose points are the midpoints of the original consecutive mesh points, and then entangle both meshes together (by sorting them on the x values). Then as long as in that combined entangled mesh x_next - x is always smaller than or equal to (|f|-c)/(|f'| bound) (very likely, and since everything is already computed the entire table can be verified in seconds), we are good.

The progressive approach will let us start the exercise formally without waiting for the exact derivative bound. Also, I think in order to maintain similarity with the previous exercises, we should compute f as Ht/B0. So the columns of the table can be x, c, ddx bound, precision, Ht, H't, ABC, Newton quotient, |Ht/B0|, |H't/B0|, with the ABC and Newton quotient acting as sanity checks since we are computing integrals. Please confirm.
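A hedged gp sketch of that verification pass, assuming each row of the (already computed) table is [x, |f|, derivative bound, c] and rows are sorted by x; the column layout is illustrative:

checkmesh(tab) = {
  my(gap);
  for(i = 1, #tab - 1,
    \\ every consecutive gap must be <= (|f| - c)/(derivative bound)
    gap = tab[i+1][1] - tab[i][1];
    if(gap > (tab[i][2] - tab[i][4])/tab[i][3],
      print("gap check fails at x = ", tab[i][1]); return(0)));
  return(1);
}
\\ toy usage (passes, so this little range would be cleared):
print(checkmesh([[20, 0.92, 10, 0.05], [20.08, 0.81, 10, 0.05], [20.15, 0.85, 10, 0.05]]))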

rudolph-git-acc commented 6 years ago

@km-git-acc

I have now also computed the exact (40 digit) values for Ht' in steps of 0.1 for the range x=100.1...200 (attached). The data for range x=200.1 to 300 are underway. Will run some checks later to establish |H't|/|B0| at each step and calculate the max- and min-values for this entire range.

Really like the idea of running an adaptive 'test' mesh, but would use a smaller range of say x=150..170 first and inspect the outcomes. Computation time is definitely going to be a factor. Agree with the proposed fields for the progressive mesh to maintain similarity with the previous exercises. Excellent idea to also include 'precision', since this will be very relevant for the accuracy! Also good to include the sanity-check values.

To cover off the domain 0 < x < 30, I will start computing the following fields in steps of 0.01: step, Ht, H't, |Ht|, |H't|, |Ht|/|B0|, |H't|/|B0|.
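A hedged gp sketch of that table generation, reusing the ttt, ttd and B0 routines from the code block earlier in this thread (precision 100 as in the eventual file; step widened to 1 here just so the sketch runs quickly, the real run would use 0.01):

default(realprecision, 100)  \\ 100 digits suffices for this small-x range
forstep(x = 0, 30, 1,
  my(h = ttt(x), hd = ttd(x), b = abs(B0(1/2 + x/2*I)));
  \\ fields: x, Ht, H't, |Ht|, |H't|, |Ht|/|B0|, |H't|/|B0|
  print(x, ", ", h, ", ", hd, ", ", abs(h), ", ", abs(hd), ", ", abs(h)/b, ", ", abs(hd)/b))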

I would prefer to use n0 instead of n0-1, especially to allow using n0=1. In the end it doesn't really matter, although it could be confusing for someone who later tries to reproduce the results with say n0=6 where we'd actually be using n0=5.

And I still can't post anything on the blog, very frustrating...

Ht-der values y=t=0.4 for x= 100.1 to 200.txt

mariotrevi commented 6 years ago

@KM,

I'm interested in participating mildly energetically...

Below, I offer a few comments

On 03/14/2018 10:26 PM, KM wrote:

@rudolph-git-acc https://github.com/rudolph-git-acc Amazing. I was literally going mad figuring it out. The Ht and ABC estimates now match and indeed as x goes higher, higher precision allows the matching to continue.

I was able to match the values by keeping precision=x, so we may have some room there (although maybe we should keep it somewhat higher than x to be sure). But couldn't figure out a way to set the precision dynamically as the first line in a function. I tried the localprec procedure, but that didn't seem to work.

Neither do I know how to adjust the precision dynamically. It surely would be very useful.

There's a pari-gp users mailing list for questions, and I think it's pretty good:

pari-users archives at:

https://pari.math.u-bordeaux.fr/archives/

For intnum, the online help gives this:

? ?intnum
intnum(X=a,b,expr,{tab}): numerical integration of expr from a to b with respect to X. Plus/minus infinity is coded as +oo/-oo. Finally tab is either omitted (let the program choose the integration step), a non-negative integer m (divide integration step by 2^m), or data precomputed with intnuminit.

So, intnum(X=a,b,expr, 2 ) , intnum(X=a,b,expr, 3 ), intnum(X=a,b,expr, 4 ), should give better and better approximations.
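A toy gp check of that claim against a closed form (int_0^1 cos(100*u) du = sin(100)/100), printing the error as m grows:

default(realprecision, 38)
exact = sin(100)/100;
\\ the error should shrink (or stay at noise level) as the step is halved
for(m = 0, 3, print(m, "   ", intnum(u = 0, 1, cos(100*u), m) - exact))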

==

There's also intnumgauss:

? ?intnumgauss
intnumgauss(X=a,b,expr,{tab}): numerical integration of expr from a to b, a compact interval, with respect to X using Gauss-Legendre quadrature. tab is either omitted (and will be recomputed) or precomputed with intnumgaussinit.

===

This ran in 13 minutes. x=1500 Ht = 0.7392035097990081081 + 0.6052210347079221874*I ABC = 0.7390201991188104438 + 0.6003548563936863411*I

10-13 minutes with the mesh of size 0.1 (for lets say x between 1000 and 1500) is a scary prospect. I agree Anonymous either has serious computing power at his/her disposal or has figured out a good optimized tool/technique. Has he/she shared it yet?

I haven't seen anything new by Anonymous.

To print to say 19 digits precision:

? ?precision
precision(x,{n}): if n is present, return x at precision n. If n is omitted, return real precision of object x.

\p 200

? print(precision(Pi, 19))
3.141592653589793239    \\ 19 digits instead of 200 are printed

I agree we can start with the mesh atleast for smaller values of x. How do you want to divide it? Although I think we have the derivative but not the derivative upper bound for H't yet, so we will have to wait for that to make our mesh exercise rigorous. Maybe we should post on the blog that we can compute Ht and Ht derivative correctly now, and then wait for Prof. Tao's reply.

I'm willing to do part of these computations.


mariotrevi commented 6 years ago

@KM,

On 03/14/2018 10:31 PM, KM wrote:

@rudolph-git-acc https://github.com/rudolph-git-acc Another thing I noticed is that keeping n0 fixed (or maybe even X fixed, but we cant have X larger than a certain value), as we increase x the error increases exponentially faster than the main estimate. So what starts out as a 10^-40 type error at small x soon becomes 10^-5 and then starts competing with the main estimate. We may have to keep n0 dynamic as k log(x) or k sqrt(x/4pi) as you mentioned earlier.

I'm wondering if letting the program choose the integration step is good enough.

One simple substitute would be to run:

intnum(X=a,b,expr,m)  for m = 2, 4, 6, 8,  10 and a fixed 'x': as m increases, it should approach the true integral.

Or have m as a parameter, for example:

MSTEP = 4; intnum(X=a,b,expr,MSTEP)


mariotrevi commented 6 years ago

@rudolph,

On 03/15/2018 08:23 AM, rudolph-git-acc wrote:

@km-git-acc https://github.com/km-git-acc

Yes, good catch, that might indeed be a way to establish the derivative bound for the interval.

Found another, slightly concerning, issue for very small x. It seems that when x < 13 the integral rapidly diverges and I can't control it by changing the parameters n0 or X. It also doesn't seem to be caused by the tail terms in that range; it just seems to be the integral itself that 'explodes' in that domain (try f.i. x=1). Do you see the same behaviour?

I used KM's code, as corrected by you, and I changed how the intnum() function is used in the I_t_th function to calculate with an integration step parameterized by MSTEP, and this shows no blow-up for x=10 (if MSTEP >= 4).

The larger MSTEP is, the smaller the integration step is. These "second method" values for x = 10, y=0.4 should match the "first method" with the  Phi integral, although I haven't checked.

I_t_th = (t,th,X,b,beta) -> return(intnum(u = I*th, I*th + X, exp(t*u^2 - beta*exp(4*u) + I*b*u), MSTEP))  \\ re-definition

commands and output

? MSTEP
%38 = 4
[...]

Ht_integral(10)/B0_eff(10) is converging on about 0.7461310904162258301 - 0.19600392695476838422*I:

? x = 10
%44 = 10
? print(Ht_integral(x)/B0_eff(x), "\n", abceff_x(x)/B0_eff(x));
0.746131083580238873137466721044787182332 - 0.196003880436943671820976101948588577562*I
-0.752224216422166684004324680957991417637 + 0.231304290796091660017102193516138858556*I

? MSTEP = MSTEP+1
%46 = 5
? print(Ht_integral(x)/B0_eff(x), "\n", abceff_x(x)/B0_eff(x));
0.746131090416225829938048589225025589227 - 0.196003926954768384169634466211881787281*I
-0.752224216422166684004324680957991417637 + 0.231304290796091660017102193516138858556*I

? MSTEP = MSTEP+1
%48 = 6
? print(Ht_integral(x)/B0_eff(x), "\n", abceff_x(x)/B0_eff(x));
0.746131090416225830147981960474547020225 - 0.196003926954768384220324672393774511974*I
-0.752224216422166684004324680957991417637 + 0.231304290796091660017102193516138858556*I

? MSTEP = MSTEP+1
%50 = 7
? print(Ht_integral(x)/B0_eff(x), "\n", abceff_x(x)/B0_eff(x));
0.746131090416225830147981960474547020225 - 0.196003926954768384220324672393774511974*I
-0.752224216422166684004324680957991417637 + 0.231304290796091660017102193516138858556*I


mariotrevi commented 6 years ago

@rudolph,

On 03/15/2018 08:23 AM, rudolph-git-acc wrote:

@km-git-acc https://github.com/km-git-acc

Yes, good catch, that might indeed be a way to establish the derivative bound for the interval.

Found another, slightly concerning, issue for very small x. It seems that when x < 13 the integral rapidly diverges and I can't control it by changing the parameters n0 or X. It also doesn't seem to be caused by the tail terms in that range; it just seems to be the integral itself that 'explodes' in that domain (try f.i. x=1). Do you see the same behaviour?

Following up on my previous message, I get blow-up with MSTEP=2:

? MSTEP = 3
%56 = 3
? print(Ht_integral(x)/B0_eff(x));
0.737893431884323774635387442312403209761 - 0.233787376557601501204675066945966377289*I  \\ ok

? MSTEP = 2
%58 = 2
? print(Ht_integral(x)/B0_eff(x));
96.9450048548932784691608589558729738855 - 4.23881881171803129143343737274422479570*I  \\ blow-up

using

I_t_th = (t,th,X,b,beta) -> return(intnum(u = I*th, I*th + X, exp(t*u^2 - beta*exp(4*u) + I*b*u), MSTEP))

? ?intnum
intnum(X=a,b,expr,{tab}): numerical integration of expr from a to b with respect to X. Plus/minus infinity is coded as +oo/-oo. Finally tab is either omitted (let the program choose the integration step), a non-negative integer m (divide integration step by 2^m), or data precomputed with intnuminit.


rudolph-git-acc commented 6 years ago

@km-git-acc

Have calculated the min and max values for Ht'(x+0.4*i) from the 'step 0.1' files that I produced:

Range x=0..100:     minimum |H't/B0| = 0.004024627219059094,  maximum |H't/B0| = 1.526857445431632

Range x=100.1..200: minimum |H't/B0| = 0.18589968937898158,   maximum |H't/B0| = 1.9764358645481794

It seems to steadily increase a bit for higher x-ranges; however, it is far below a derivative bound of 10 (especially for x < 300).

rudolph-git-acc commented 6 years ago

@mariotrevi

Using the MSTEP could indeed help to bring us into the domain below x = 12. For x=10, I do get the following numbers from the original Phi-integral (normalised by B0_eff) and the match seems pretty good.

Ht(10+0.4i)  = 0.7461310904162258301 - 0.19600392695476838422*I
Ht'(10+0.4i) = -0.09190857751059191652 + 0.019879877777093414363*I

mariotrevi commented 6 years ago

@rudolph

On 03/16/2018 11:26 AM, rudolph-git-acc wrote:

@mariotrevi https://github.com/mariotrevi

Using the MSTEP could indeed help to bring us in the domain below x < 12. For x=10, I do get the following numbers from the original Phi-integral (normalised by B0_eff) and the match seems pretty good.

Ht(10+0.4i)  = 0.7461310904162258301 - 0.19600392695476838422*I
Ht'(10+0.4i) = -0.09190857751059191652 + 0.019879877777093414363*I

Thanks.  I also got a good match at 10+0.4*i for Ht using the Phi-integral.

Below, I did a Newton quotient using the second method, and I can confirm the Ht' value you found at 10+0.4*i:

? ((Ht_integral(x+eps) - Ht_integral(x-eps))/(2*eps))/B0_eff(x)
%74 = -0.091908577510591563138593 + 0.019879877777093322952546*I  \\ Ht'

? eps
%75 = 1.00000000000000000000000000000000000000 E-6

? MSTEP
%76 = 5


mariotrevi commented 6 years ago

@KM,

On 03/14/2018 10:26 PM, KM wrote:

@rudolph-git-acc https://github.com/rudolph-git-acc Amazing. I was literally going mad figuring it out. The Ht and ABC estimates now match and indeed as x goes higher, higher precision allows the matching to continue.

I was able to match the values by keeping precision=x, so we may have some room there (although maybe we should keep it somewhat higher than x to be sure). But couldn't figure out a way to set the precision dynamically as the first line in a function. I tried the localprec procedure, but that didn't seem to work.

I was going to suggest the localprec function.

Anyway, what I did within a loop is to set the local precision to "x" digits for evaluating H_t at x + 0.4*i using either integral method.

I believe a precision which allows an accurate H_t(z) is about right, so if |H_t(z)| ~= 1e-150, then real precision 170 should do.

First I set the local precision, then I redefined t = 0.4 and y = 0.4 just after, to get the right precision.

This was the command:

for(K=5,30, localprec(10*K); t=0.4; y=0.4; x=10*K; print(precision(x,19), " ", precision(Ht_integral(x)/B0_eff(x),19)))

This prints 19 digits:

50 0.21014894549438770750 - 0.17879863466267566077*I   \\ z = 50 + 0.4*i
60 0.14751406924215521680 + 0.3398377030715557399*I
70 2.337738353422018984 - 0.5852869688391501389*I
80 0.6510035138620221653 + 0.7152803447758252575*I
90 2.539307497141443560 - 1.0047105807014429204*I
100 0.23759173431407299100 - 0.4311516096271101686*I
110 1.9256688934848197090 + 0.8937806906258980880*I
120 0.6224461637789521688 - 0.14320584054933855441*I
130 0.4010827396200248069 + 0.09376711655632614843*I
140 0.7192801412430639475 - 0.7643375693750366716*I
150 0.14230078432849312749 + 0.8669885871093284534*I
160 0.9376107281280964011 - 1.5017869213172237742*I
170 0.6006094290710028237 - 0.4651050001250624312*I
180 2.130705173962304427 - 1.7138254605435377815*I
190 0.5499955618208545708 - 0.12730700416779207226*I
200 2.001281076274248192 + 0.3766782053888498946*I
210 0.4870136647454158202 + 0.21680359421082294615*I
220 0.2602877166102021907 + 1.6020845767623526784*I
230 1.1832373886668575977 - 0.4717935649106269161*I
240 1.7409938779532791573 + 0.9573326853853864040*I
250 0.7700776491587720301 - 1.7667678805904265232*I
260 0.8740743714523785784 - 0.04915042858699264014*I
270 0.24895295449566337746 - 0.8437715543594463923*I
280 0.5960757688619319587 - 0.16982783329221786885*I
290 1.5266595744467224363 + 1.4539870042278861062*I
300 0.4137741510271172849 - 0.025630817935178837108*I

I have yet to check this with other methods ...

Also, I set MSTEP = 2 (this was mentioned in an earlier message and is useful for very small "x" values).


km-git-acc commented 6 years ago

@rudolph-git-acc @mariotrevi Great. I am happy to choose any x range. Should I work on the range 500 to 600 for a start? The Mstep parameter would be really cool. Will try it out as well.

I have been thinking a lot about the derivative upper bound and here is my refined analysis. Do let me know your thoughts on this. Going to ask about it on the blog as well. Firstly, attaching my updated script, where I have turned the long alpha factors into functions, and am now using n0 and n0+1 within the functions to allow passing n0=1 (please ignore the somewhat chaotic ad hoc analysis below the functions, which I will explain separately). code_with_corrected_ddx_bound_v2.txt

Firstly, while we should normalize the H_t and H'_t estimates by B0 for comparison and record purposes, it seems that for the mesh calculation f(x) should be simply H_t and not the normalized H_t. This is because the derivative bound max|f'(x)| as derived here varies as (some constant)*exp(-theta_c*x) and not as |B0(x)|, and exp(-theta_c*x) decays less rapidly than |B0(x)| (where theta_c is a fixed theta used to derive the bound); hence the earlier approach of using |d/dx (A+B)/B0| cannot be adapted here.

What is needed is a bound which is still more or less equal to |Ht|, so that the mesh gaps stay reasonable. If we use a small theta_c (let's say the theta corresponding to x=20), then |f(x)|/bound(|f'(x)|) decreases rapidly, making the mesh approach unusable.

If we keep theta_c close to Pi/8, let's say by using the theta corresponding to x=1600, then bound(|f'(x)|) = 5969.08045*exp(-0.386824*x). Now running |f(x)|/bound(|f'(x)|) at integer x points from 20 to 1600, I noticed the values lie between 0.16 and 0.0002. The minimum here does result in the adaptive mesh becoming a bit fine and increasing computation, but at least it's workable.

To establish a non-zero bound on H't, we also need a positive c term like earlier. Here the c term cannot be kept constant. Using c(x) = 0.02*exp(-theta_c*x) for example, we then get x_next = x + (|f(x)| - c(x))/bound(|f'(x)|), which results in x_next = x + (|f(x)|*exp(theta_c*x) - 0.02)/5969.08045, and then if x_next always turns out to be greater than x as we run through the entire mesh, we would have cleared this x range!
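In executable gp form, with theta_c and the constant (called Dc below) as quoted above for x_c = 1600; absHt stands for |H_t(x)| from whichever exact evaluator we settle on, so this is a sketch rather than a finished routine:

th_c = 0.386824; Dc = 5969.08045;
xnext_c(x, absHt) = x + (absHt*exp(th_c*x) - 0.02)/Dc;
\\ the range is cleared if xnext_c(x, |H_t(x)|) > x at every mesh point visited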

rudolph-git-acc commented 6 years ago

@km-git-acc @mariotrevi

For x <= 30, please find attached in steps of 0.01 and for y=t=0.4:

x, precision (=100), H_t, H'_t, |H_t|, |H'_t|, |H_t|/|B0|, |H'_t|/|B0|

https://drive.google.com/open?id=1iTeNgOfhgZT3G8K92xYUg39uiDu4UFT3

Could these help running a (non adaptive) mesh to clear this range?

mariotrevi commented 6 years ago

@rudolph,

On 03/16/2018 05:14 PM, rudolph-git-acc wrote:

@mariotrevi https://github.com/mariotrevi @mariotrevi https://github.com/mariotrevi

For x <= 30, please find attached in steps of 0.01 and for y=t=0.4:

x, precision (=100), H_t, H'_t, |H_t|, |H'_t|, |H_t|/|B0|, |H'_t|/|B0|

https://drive.google.com/open?id=1iTeNgOfhgZT3G8K92xYUg39uiDu4UFT3

Could these help running a (non adaptive) mesh to clear this range?

x = 30.00 data (copied from your link):

30.000000000000000000, 100,
-0.00010001026469315639176 - 7.135701992146987373 E-6*I,   \\ H_t
-1.7241021240865117807 E-5 + 9.588637008486272443 E-6*I,   \\ H'_t
0.00010026450661583058142, 1.9728019999697758278 E-5,      \\ |H_t|, |H'_t|
0.7692766880411027136, 0.15136269452883307945              \\ |H_t|/|B0|, |H'_t|/|B0|

Ok, I understand what each number stands for.

I haven't followed closely the discussion about the mesh regarding | H_t(x + 0.4*i) | > 0 over some interval of x, such as x in [0, 30].

KM posted a question on Prof. Tao's blog.


mariotrevi commented 6 years ago

@KM,

On 03/16/2018 02:25 PM, KM wrote:

@Rudolph https://github.com/rudolph @mariotrevi https://github.com/mariotrevi Great. I am happy to choose any x range. Should i work on the range 500 to 600 for a start? The Mstep parameter would be really cool. Will try it out as well.

I did some computations of H_t for a few x up to x = 1000, and compared that with Anonymous' data on values of H_t.

With MSTEP = 2, as I recall, the integration step is still too big if x=1000... However, MSTEP = 4 is enough or more than enough even for x=1000.

This is a sample from the 500-600 range, normalized H_t only:

? for(K=50,60, localprec(2*K); t=0.4; y=0.4; x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9)))

500 0.7613966014033060406 - 0.5202695620559752686*I
510 0.4203474158208535092 + 0.3305235115505475947*I
520 0.4087578588878893012 - 0.3330320417356983150*I
530 0.2615619971004342557 + 0.7235301248294833303*I
540 0.6720731298129382929 - 0.4135001044413055805*I
550 0.2818593290663517349 + 0.6917623554137242819*I
560 1.5716716091890476317 - 2.316578454409734091*I
570 0.6599424684845458579 - 0.4336275175377356356*I
580 1.5021964933877543254 - 1.2285826477208225130*I
590 0.3765978597330235199 - 0.18158481449565019712*I
600 1.1726223999138327666 - 0.22259808778270563911*I

? MSTEP
%105 = 4

So for x = 500, I set the precision to 100 digits, and for x=600, it's set to 120 digits.

For x = 500, do you get something similar?


I would need to read up on the adaptive mesh method...


km-git-acc commented 6 years ago

@mariotrevi I had to use localprec as 8*K instead of 2*K to get the results. Also, Mstep=4 turned out to be good for x=1500.

At the higher x values, the Mstep approach at, let's say, a lower precision of 100 is much faster than the localprec approach.

Also attaching my updated script, where alpha is now updated using cos(4*theta), mstep (with default value of 4) is now used for the main I and J integrals, and some additions regarding calculating ratios of derivative bounds to actual derivatives. code_with_corrected_ddx_bound_v3.txt

Currently the Ht_integral and ddx_Ht_integral procedures don't pass the mstep parameter to I and J, which makes it slower for the lower x values than earlier. Maybe we should determine the x ranges where each mstep would work and then those conditions can be included in these procedures.
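A hedged gp sketch of such a dispatch; the cut-offs below are loose empirical guesses from the observations in this thread (MSTEP=3 sufficed near x=10, MSTEP=4 up to x ~ 1500), not derived values:

mstep_for(x) = if(x < 1000, 3, 4);   \\ guessed cut-off, to be tuned by testing
\\ e.g. set MSTEP = mstep_for(x) before each Ht_integral(x) call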

These are some of the results I got

\p 400
for(K=50,60, t=0.4; y=0.4; x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9)))

500 0.7545230751682028132 - 0.5146441227951715097*I
510 0.4246675197430268777 + 0.3242890359823127016*I
520 0.4037749135008596526 - 0.3193726982481057131*I
530 0.2692326546171177206 + 0.7041709316597306822*I
540 0.6651177777481547751 - 0.3957008091605991755*I
550 0.2880438485887015646 + 0.6673787762636115700*I

\p 100
for(K=50,60, localprec(8*K); t=0.4; y=0.4; x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9)))

500 0.7545230751682028132 - 0.5146441227951715097*I
510 0.4249817193371555119 + 0.3219469696770325600*I
520 0.4018663989464643979 - 0.3185796053502957409*I
530 0.2667257279701035377 + 0.7094070397324174309*I
540 0.6661206846529105853 - 0.3913001247788160298*I
550 0.2915933021554179825 + 0.6619166434619913016*I

\\ with mstep 4
for(K=50,60, t=0.4; y=0.4; x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9)))

500 0.7557716830583778739 - 0.5144948694609814970*I
510 0.4260937961868180163 + 0.3218075755621578502*I
520 0.4029876198506685625 - 0.3208541926325566163*I
530 0.2675112722243099785 + 0.7072394511863026260*I
540 0.6653914210583830134 - 0.3922710806729424378*I
550 0.2904355290808715272 + 0.6646115343022537067*I

\\ with mstep 2
for(K=50,60, t=0.4; y=0.4; x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9)))

500 0.7555182057549715865 - 0.5145737939716168822*I
510 0.4257454922161280761 + 0.3224617914274735817*I
520 0.4033120429266282786 - 0.3207569180199322049*I
530 0.2678162986979001444 + 0.7062456389605822766*I
540 0.6649508787683588772 - 0.3923977599563268536*I
550 0.2900811311587968533 + 0.6659863638912332033*I

\\ with mstep 1
for(K=50,60, t=0.4; y=0.4; x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9)))

500 141.09368112265563007 - 127.94665889361561129*I
510 -134.15409773641402205 + 130.20863953724203051*I
520 53.70279757029015949 - 21.224697631722950792*I
530 -6.001615481798619075 + 25.091946656714213812*I
540 21.227828955752123444 - 132.18903967920004570*I
550 10.191389987004640341 + 7.643716682177367416*I

\\ with mstep 2
for(K=100,120, t=0.4; y=0.4; x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9)))

1000 33.77236151244766241 - 7.164215294024936367*I
1010 2.879368347394518678 + 24.989483013569977776*I
1020 -4.910171559725293325 - 16.887216442511574832*I
1030 42.29002387521502087 + 15.043086611138036537*I
1040 -25.973990043578560214 + 52.27834339614610769*I
1050 -48.47630136006279973 - 29.905910208019028636*I

\\ with mstep 4
for(K=100,110, t=0.4; y=0.4; x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9)))

1000 0.3263888214436654353 + 0.5973843741963277348*I
1010 0.6609567421729533830 + 0.09512584965359692745*I
1020 0.22170675274942623948 + 0.6576320609817269573*I
1030 1.8014274843264624562 + 0.4029856392768286459*I
1040 0.5980716304262927347 + 0.012441818990670232904*I
1050 1.1075710841523780638 - 0.06962595829527953873*I

\\ with mstep 4
for(K=150,160, t=0.4; y=0.4; x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9)))

1500 0.6870567593077897409 + 0.030746273309480338989*I
1510 1.0380723741728830842 + 0.8165869481104744639*I
1520 1.7673221304045042836 - 0.16096199410803991619*I
1530 1.5301101301845840179 - 0.3508725288354311749*I
1540 1.5426782809869228666 - 1.0237910239588860407*I
1550 0.3634675952407079535 - 0.23368972627429303216*I

\\ using localprec (mstep 1)
for(K=150,160, t=0.4; y=0.4; localprec(8*K); x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9)))
\\ exited this one after a few minutes

km-git-acc commented 6 years ago

@rudolph-git-acc @mariotrevi

I think a step size of 0.01 would certainly tackle a large part of the x range, but rigorously clearing an x range would require checking whether this mesh gap is smaller than what an adaptive mesh using the derivative bound would have mandated.

Prof. Tao had replied on the tweaked mesh approach I had asked about yesterday. It seems we can use that approach. Maybe multiple theta_c and D would work for the mesh, but running some exercises I noticed that too low or too high a theta_c causes problems. A low theta_c results in |Ht| being much smaller than D*exp(-theta_c*x), resulting in rapidly reducing mesh gaps as x is increased, making the approach unusable. If we use theta_c very close to Pi/8, by using a large x_c, then D increases rapidly. In this case the mesh gaps stay stable, but the large D causes the gap to be small, resulting in a need for a massive computation.

Somewhere between x_c = 1000 to 2000 the mesh gap size seems reasonable if sometimes somewhat small.

x_c  -> theta_c  -> D
50   -> 0.206868 -> 0.188354
500  -> 0.373901 -> 222.8227585
1000 -> 0.383299 -> 1592.93663
1600 -> 0.386824 -> 5969.08045
2000 -> 0.387999 -> 11146.693733
5000 -> 0.390819 -> 143061.4963934
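The theta_c column can be reproduced directly in gp from theta_c = Pi/8 - atan(9.4/x_c), as in the table below this one (the D column comes from the derivative-bound computation in the attached script and is only quoted here, not recomputed):

for(i = 1, 6,
  my(xc = [50, 500, 1000, 1600, 2000, 5000][i]);
  print(xc, "   ", Pi/8 - atan(9.4/xc)))   \\ matches 0.206868, 0.373901, ...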

Similarly, I checked the ratio of derivative bound to exact derivative of Ht and found that it fluctuates a lot, and is dependent on the theta_c we use.

theta_c D min ratio max ratio stdev ratio
Pi/8 - atan(9.4/50) 0.188354 2.1 2.24E+18 2.32E+17
Pi/8 - atan(9.4/500) 222.8228 5.0 7.49E+10 6.41E+09
Pi/8 - atan(9.4/1000) 1592.937 7.2 181906 19500.1
Pi/8 - atan(9.4/1600) 5969.08 9.0 6377 371.4
Pi/8 - atan(9.4/2000) 11146.69 10.3 11484 385.6
Pi/8 - atan(9.4/5000) 143061.5 15.7 135087 4505.5

which, as he mentioned, we can then use as x_(i+1) = x_i + |H_t(x_i)|/(D*exp(-theta_c*x_i)) (while still recording |H't|, |Ht/B0| and |H't/B0| for any future comparisons).

I had yesterday used x_c = 1600, resulting in theta_c = 0.386824 and D = 5969.08045. Should we divide the x range among ourselves in approximate batches of 500? I guess Rudolph has already been running the lower x range. I am planning to now start with the range 1000 to 1600.

EDIT: Also, at very low x values there is the exploding behavior of the new integral. I think that since both the old and new integrals are meant to give the exact value of H_t, the derivative bound used can be the same (even though it has been derived using the new integral), while the numerators will come from the old integral, which will allow us to handle the very small x values too with the adaptive mesh. But it would be better to ask about this on the blog.

mariotrevi commented 6 years ago

@KM,

On 03/17/2018 04:56 AM, KM wrote:

@mariotrevi https://github.com/mariotrevi I had to use localprec as 8K instead of 2K to get the results. Also, Mstep=4 turned out to be good for x=1500

At the higher x values, the Mstep approach at, let's say, a lower precision of 100 is much faster than the localprec approach.

We don't have to use the localprec procedure; we can set, say, \p 100 for a whole interval of x values.

However, we might have to choose the integral step for the program via "MSTEP" to get accurate results.

Also attaching my updated script, where alpha is now updated using cos(4*theta), mstep (with default value 4) is now used for the main I and J integrals, and there are some additions for calculating ratios of derivative bounds to actual derivatives. code_with_corrected_ddx_bound_v3.txt https://github.com/km-git-acc/dbn_upper_bound/files/1821440/code_with_corrected_ddx_bound_v3.txt

Ok. These changes don't affect the normalized H_t values, right?

Currently the Ht_integral and ddx_Ht_integral procedures don't pass the mstep parameter to I and J, which makes it slower for the lower x values than earlier. Maybe we should determine the x ranges where each mstep would work and then those conditions can be included in these procedures.

I think we'll have to tell intnum(.) the right integral step to use, for each kind of integral, perhaps depending on the value of x ...
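If MSTEP maps onto intnum's optional last argument (an integer m that divides the integration step by 2^m, i.e. denser sampling), which is my assumption here, a self-contained toy shows its effect on an oscillatory integrand; the Fresnel integral of cos(u^2) over [0, oo) is sqrt(Pi/8) = 0.626657... for reference:

```
\p 38
print(intnum(u = 0, 20, cos(u^2)));     \\ default node count
print(intnum(u = 0, 20, cos(u^2), 4));  \\ m = 4: denser nodes, better for oscillation
print(sqrt(Pi/8));                      \\ limit value on [0, oo), for comparison
```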

These are some of the results I got

\p 400
for(K=50,60, t=0.4; y=0.4; x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9)))
500 0.7545230751682028132 - 0.5146441227951715097*I
510 0.4246675197430268777 + 0.3242890359823127016*I
520 0.4037749135008596526 - 0.3193726982481057131*I
530 0.2692326546171177206 + 0.7041709316597306822*I
540 0.6651177777481547751 - 0.3957008091605991755*I
550 0.2880438485887015646 + 0.6673787762636115700*I

\p 100

for(K=50,60, localprec(8*K); t=0.4; y=0.4; x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9)))
500 0.7545230751682028132 - 0.5146441227951715097*I
510 0.4249817193371555119 + 0.3219469696770325600*I
520 0.4018663989464643979 - 0.3185796053502957409*I
530 0.2667257279701035377 + 0.7094070397324174309*I
540 0.6661206846529105853 - 0.3913001247788160298*I
550 0.2915933021554179825 + 0.6619166434619913016*I

with mstep 4: for(K=50,60, t=0.4; y=0.4; x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9)))
500 0.7557716830583778739 - 0.5144948694609814970*I
510 0.4260937961868180163 + 0.3218075755621578502*I
520 0.4029876198506685625 - 0.3208541926325566163*I
530 0.2675112722243099785 + 0.7072394511863026260*I
540 0.6653914210583830134 - 0.3922710806729424378*I
550 0.2904355290808715272 + 0.6646115343022537067*I

with mstep 2: for(K=50,60, t=0.4; y=0.4; x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9)))
500 0.7555182057549715865 - 0.5145737939716168822*I
510 0.4257454922161280761 + 0.3224617914274735817*I
520 0.4033120429266282786 - 0.3207569180199322049*I
530 0.2678162986979001444 + 0.7062456389605822766*I
540 0.6649508787683588772 - 0.3923977599563268536*I
550 0.2900811311587968533 + 0.6659863638912332033*I

with mstep 1: for(K=50,60, t=0.4; y=0.4; x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9)))
500 141.09368112265563007 - 127.94665889361561129*I
510 -134.15409773641402205 + 130.20863953724203051*I
520 53.70279757029015949 - 21.224697631722950792*I
530 -6.001615481798619075 + 25.091946656714213812*I
540 21.227828955752123444 - 132.18903967920004570*I
550 10.191389987004640341 + 7.643716682177367416*I

with mstep 2: for(K=100,120, t=0.4; y=0.4; x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9)))
1000 33.77236151244766241 - 7.164215294024936367*I
1010 2.879368347394518678 + 24.989483013569977776*I
1020 -4.910171559725293325 - 16.887216442511574832*I
1030 42.29002387521502087 + 15.043086611138036537*I
1040 -25.973990043578560214 + 52.27834339614610769*I
1050 -48.47630136006279973 - 29.905910208019028636*I

with mstep 4: for(K=100,110, t=0.4; y=0.4; x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9)))
1000 0.3263888214436654353 + 0.5973843741963277348*I
1010 0.6609567421729533830 + 0.09512584965359692745*I
1020 0.22170675274942623948 + 0.6576320609817269573*I
1030 1.8014274843264624562 + 0.4029856392768286459*I
1040 0.5980716304262927347 + 0.012441818990670232904*I
1050 1.1075710841523780638 - 0.06962595829527953873*I

with mstep 4: for(K=150,160, t=0.4; y=0.4; x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9)))
1500 0.6870567593077897409 + 0.030746273309480338989*I
1510 1.0380723741728830842 + 0.8165869481104744639*I
1520 1.7673221304045042836 - 0.16096199410803991619*I
1530 1.5301101301845840179 - 0.3508725288354311749*I
1540 1.5426782809869228666 - 1.0237910239588860407*I
1550 0.3634675952407079535 - 0.23368972627429303216*I

using localprec (mstep 1): for(K=150,160, t=0.4; y=0.4; localprec(8*K); x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9))) (exited this one after a few minutes)

I think the order of t=0.4, y=0.4, localprec(8*K) matters; to guarantee that localprec(8*K) applies to t and x, I'd write it:

for(K=150,160, localprec(8*K); t=0.4; y=0.4; x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9)))

Changing to 2*K:

? MSTEP
%112 = 4

? for(K=150,160, localprec(2*K); t=0.4; y=0.4; x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9)))
1500 0.7388172168528984283 + 0.6009895581416429461*I
1510 1.1390752685666308537 + 0.8628455252409303529*I
1520 1.2070652057779767900 - 0.3061938435866594354*I
1530 1.4193162936744403214 - 0.3013779140302894202*I

? abceff_x(1500)/B0_eff(1500)
%113 = 0.719136473154438364941237654017745810006 + 0.440276928949856420890146072767815781410*I

??? changing MSTEP to 5:

? MSTEP=5
%114 = 5
? for(K=150,150, localprec(2*K); t=0.4; y=0.4; x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9)))
1500 0.7388172167059957973 + 0.6009895579886458133*I

Mhh... I wonder what the precise value is for x=1500.


mariotrevi commented 6 years ago

@KM,

On 03/17/2018 04:56 AM, KM wrote:


with mstep 4: for(K=150,160, t=0.4; y=0.4; x=10*K; print(precision(x,9), " ", precision(Ht_integral(x)/B0_eff(x),9)))
1500 0.6870567593077897409 + 0.030746273309480338989*I
1510 1.0380723741728830842 + 0.8165869481104744639*I
1520 1.7673221304045042836 - 0.16096199410803991619*I
1530 1.5301101301845840179 - 0.3508725288354311749*I
1540 1.5426782809869228666 - 1.0237910239588860407*I
1550 0.3634675952407079535 - 0.23368972627429303216*I

Using your code as corrected by @Rudolph, but not the cos(4*theta) corrections,

I get something different for x = 1500, based on the Phi integral:

(normalized, x = 1500, y = 0.4, t = 0.4):

1500  0.7388172167059957973 + 0.6009895579886458133*I



km-git-acc commented 6 years ago

@mariotrevi Indeed, when I last ran it at x=1500 with a precision of 1500 (and mstep=1), I got:
Ht = 0.7392035097990081081 + 0.6052210347079221874*I
ABC = 0.7390201991188104438 + 0.6003548563936863411*I

I also used Anonymous' link https://pastebin.com/k7vC6e7n, took the value for x=1500, divided it by B0_eff, and got 0.7388172167059957974 + 0.6009895579886458133*I.

Now I kept increasing the mstep parameter from 4 to 8 (precision 100), but the integral is still stuck at the 0.68 + 0.03*I value. Somehow the mstep parameter cannot replicate what \p or localprec does. Is it because the integral is adding and subtracting quantities smaller than 10^-100, so that no number of additional points will help? I increased the precision to 500 but still saw no change. It seems the precision has to be set close to x.

Is there a solution? I was really keen on mstep; without it the computations will be way slower.
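A toy illustration of that suspicion, separate from the integral itself: once the intermediate terms exceed the answer by more than the working precision, extra sample points cannot recover it; only more digits can.

```
\p 100
(1e120 + 1) - 1e120   \\ returns 0 at 100 digits: the 1 is lost to cancellation
\p 130
(1e120 + 1) - 1e120   \\ returns 1.000... once the precision covers the 120-digit spread
```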

km-git-acc commented 6 years ago

@rudolph-git-acc I had mentioned earlier that while running large-scale calculations I was going to compute the Newton quotient of Ht (as a sanity check against H't). But now I think, at least for the larger values of x, it's better to compute the NQ of A+B-C: the match with H't is good at larger x, and half of the integral calculations will be eliminated.
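For concreteness, a minimal sketch of that check, with abceff_x and ddx_Ht_integral as in the scripts above; the step h is a judgment call, to be shrunk until the quotient stabilizes:

```
\\ central-difference Newton quotient of A+B-C, to compare against H't
NQ_abc(x, h = 0.001) = (abceff_x(x + h) - abceff_x(x - h)) / (2*h);
\\ e.g. print(precision(NQ_abc(1500),9), "  ", precision(ddx_Ht_integral(1500),9))
```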