Closed by rudolph-git-acc 5 years ago
Thanks for this - it's easy for me to step through it myself now. The multiplicative factor for 3.17 should have N^(-(1+I*xt)/2) rather than N^((1+I*xt)/2)/8 (I guess the 1/8 factor was just dropped at the very start, as it wasn't important), which seems to fix that step at least.
Similarly, the multiplicative factor for 3.18 should be N^(-(1+I*xt)/2)*sqrt(Pi/(1-I*u/(8*Pi))) rather than N^((1+I*xt)/2)/(8*sqrt(Pi)).
Now checking 3.20...
OK, it looks like even after fixing the multiplicative factor, the 3.20 approximation is not very accurate unless x and t are extremely large (of size 10^5 or so). However, it does become reasonably accurate at that scale. What this could mean is that the wiki asymptotics are for a regime which is not the one seen in the numerics.
On the other hand, given that 3.18 remains reasonably accurate and is significantly faster to calculate than the preceding expressions due to the lack of integration, perhaps it could be used to explore the zeroes over a larger range than previously done, e.g. for x,t of size around 10^5 or 10^6. I don't know right now whether the patterns we are seeing are transient; it could be that a different dynamics takes over at larger scales.
Many thanks for your help! I now indeed get the correct zeros for eq. 3.18; however, the equations beyond that remain unstable.
Do you know where the imaginary parts are coming from beyond 3.07?
Great idea to use the fast 3.18 to explore very large t, x. Will try this out.
P.S.:
print("eq.328 ",Ht324(1000, -10))
should have been print("eq.328 ",Ht328(1000, -10))
of course.
I think the small imaginary parts are coming from the lower order terms in the Stirling approximation that we are omitting (they are of order about O(1/(12s)), which is consistent with the fact that the imaginary part is four orders of magnitude smaller than the real part from 3.07 onwards). It can be fixed by adding additional terms to the definition of M_0, but given that we have many more serious accuracy issues, it probably isn't worth expending the effort to fix it at this stage.
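The effect of that first Stirling correction term is easy to check in isolation. Below is a small Python sketch (not part of the thread's pari/gp code; the evaluation point z = 5 is purely illustrative) showing that multiplying the basic Stirling approximation of Gamma(z) by (1 + 1/(12z)) reduces the relative error by roughly two orders of magnitude:

```python
import math

def stirling_gamma(z, correction=False):
    """Stirling approximation Gamma(z) ~ sqrt(2*pi)*exp((z-1/2)*log(z) - z),
    optionally with the first correction factor (1 + 1/(12*z))."""
    g = math.sqrt(2 * math.pi) * math.exp((z - 0.5) * math.log(z) - z)
    if correction:
        g *= 1 + 1 / (12 * z)
    return g

z = 5.0
exact = math.gamma(z)  # Gamma(5) = 24
err_plain = abs(stirling_gamma(z) - exact) / exact             # ~ 1.7e-2
err_corr = abs(stirling_gamma(z, correction=True) - exact) / exact  # ~ 1.2e-4
```

The same correction applied to M_0 (at z = s/2) would shrink the stray imaginary parts accordingly, consistent with them being four orders of magnitude below the real part.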
The equation (3.18), by the way, is already almost entirely in (N,u) coordinates (replace xt by 4*Pi*N^2) after discarding the multiplicative factor, so we should for instance now be able to fill in the missing portions of https://drive.google.com/file/d/101wtC7VyM9aHTst2F-X1Xf62FQT0m6e9/view relatively easily.
Have played quite a bit with 3.18. It is really fast and also appears to be sufficiently accurate as long as x / |t| > 10 (-ish). For other values it quickly drifts away from the correct value of Ht.
I see what you mean with the (N,u) coordinates; however, once the multiplicative factor is removed, the zeros of 3.18 immediately become very 'noisy' again. I have seen a similar effect when I first tested the (unbounded) Ht-integral for larger negative t and noticed that a 'normaliser' (e.g. multiplying by 10^166) yielded more stable results.
The Ht-zeros used for the graph you refer to were in the range t=-200...0, x=20..600, and we have to go higher to test the new equation. Therefore I just kicked off a real root finding run for t=-100...0, x=10000...10600, and will overlay this with the Ht data (did some test runs and expect this to all match).
There is a perfect fit between the zeros of Ht-actual and eq.3.18 (red + green turns into brownish):
Will now expand the data in the t-direction to see if the pattern is re-emerging; however, I expect divergence to start when t decreases below -800.
The perfect fit is very promising! It suggests that much of the numerical deviation between H_t and 3.18 is coming from the multiplier rather than the sum.
By the way, I think I know why your root finding algorithm creates spurious zeroes when the multiplier is removed - it's because the multiplier and sum are both complex numbers, with approximately opposite phase; only the product is (approximately) real. So any attempt to find zeroes based on detecting sign changes in the sum is going to create artificial zeroes when the phase is close to \pm \pi/2. On the other hand, this means that one can remove any purely real components from the multiplier (in particular M0 can be replaced by its phase M0/abs(M0)), which will remove the exponentially small magnitude in these sums and perhaps lead to slightly more accurate numerics.
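The spurious-zero mechanism can be seen with a toy example (a Python sketch, not from the thread's code): a function of constant modulus is never zero, yet its real part changes sign whenever the phase crosses pi/2, which is exactly what a sign-change scan would mistake for a root.

```python
import cmath

def f(x):
    # Toy stand-in for multiplier*sum: constant modulus 2, rotating phase.
    # f is never zero, but Re(f) changes sign whenever the phase crosses
    # +-pi/2 -- the spurious "zeros" a sign-change scan would report.
    return 2 * cmath.exp(1j * x)

re_before, re_after = f(1.5).real, f(1.6).real   # sign change near pi/2
mod_before, mod_after = abs(f(1.5)), abs(f(1.6)) # modulus stays at 2
```

Dividing out the phase (as with M0/abs(M0)) makes the remaining quantity approximately real, so its sign changes correspond to genuine zeros.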
I'm curious to see exactly where the breakdown in accuracy from (3.18) to (3.20) is coming from. There are three substitutions being made here:
From my own primitive testing it seemed that the first change was fairly accurate but the second one was not for smallish values of t,x, but it should get more accurate for larger values of t,x. (Amusingly, replacing the first log(n/N) with the seemingly more accurate (n-N)/N - (n-N)^2/2N^2 gives far worse results due to the fact that the latter approximation has an additional zero leading to some new very large terms in the sum.)
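The "additional zero" of the quadratic approximation can be pinned down concretely (a Python check; N = 10 and t = -10 are illustrative values, not from the thread): q(n) = (n-N)/N - (n-N)^2/(2N^2) vanishes again at n = 3N, so a Gaussian damping factor built from q is 1 there, while the true log-based damping is already exponentially small.

```python
import math

N, t = 10.0, -10.0  # illustrative values

def q(n):
    # Second-order Taylor approximation of log(n/N) around n = N
    return (n - N) / N - (n - N) ** 2 / (2 * N ** 2)

# log(n/N) has its only zero at n = N, but q has a second zero at n = 3N,
# so the damping based on q fails to suppress the terms near n = 3N:
spurious_zero = q(3 * N)                                  # exactly 0
damp_true = math.exp(-abs(t) * math.log(3.0) ** 2 / 4)    # ~ 0.049
damp_approx = math.exp(-abs(t) * spurious_zero ** 2 / 4)  # = 1.0
```

The undamped terms near n = 3N carry a very large oscillating phase, which is where the "new very large terms in the sum" come from.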
The phase M0/abs(M0) works great, and there is no longer a need for any other multiplicative factor than that. Have also tried Newton-Raphson instead of sign-change based root finding; however, this only worked successfully for 3.18 with the new factor (still drift in 3.20).
The graph has been extended to t=-300 and still has a perfect fit with the Ht-actual zeros. At t=-1000 the drift becomes ~0.3 at each zero (i.e. Ht-actual is 0.3 higher than 3.18). This increases to ~0.7 at t=-1800 (here only a single 'straight line' is left to detect). Will generate the roots for x=10000...10600, t=-1800...-10 in steps of 10, so we have that ready when we break through the 3.20 Barrier :)
Rudolph, just to confirm, is this the eq. 3.18 function you are using after the changes?
Ht318_new(x,t) = {
xt = x-Pi*abs(t)/4;
N = sqrt(xt/(4*Pi));
u = 4*Pi*abs(t)/xt;
m0val = M0((1+I*xt)/2);
mulfact = (m0val/abs(m0val))*N^(-(1+I*xt)/2)*sqrt(Pi/(1-I*u/(8*Pi)));
A=mulfact*sum(n=1,1000,exp(-abs(t)*(log(n/N))^2/(4*(1-I*u/(8*Pi)))-(1+I*xt)/2*log(n/N)));
return(A);
}
Ht318_new(1000,-10) -> 1.55003 + 0.00051*I
Using the root finder at the end, the output is close to the one in the first post.
-10.00000000, 906.76541539
-10.00000000, 931.50799535
-10.00000000, 976.83695739
-10.00000000, 990.48752071
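For readers without pari/gp, here is a double-precision Python transcription of this Ht318_new (with M0 being the Stirling-based approximation used throughout the thread's code). It is an illustrative sketch, not the project's reference implementation, but it matches the 1.55003 + 0.00051*I value quoted above and shows the sign change of the real part at the first listed root near x = 906.765:

```python
import cmath, math

def M0(s):
    # Stirling-based approximation of s*(s-1)/2 * Pi^(-s/2) * Gamma(s/2),
    # transcribed from the thread's pari/gp definition
    return (s * (s - 1) / 2 * cmath.exp(-s / 2 * math.log(math.pi))
            * math.sqrt(2 * math.pi)
            * cmath.exp((s / 2 - 0.5) * cmath.log(s / 2) - s / 2))

def Ht318_new(x, t, nterms=1000):
    xt = x - math.pi * abs(t) / 4
    N = math.sqrt(xt / (4 * math.pi))
    u = 4 * math.pi * abs(t) / xt
    m0 = M0((1 + 1j * xt) / 2)
    denom = 1 - 1j * u / (8 * math.pi)
    # phase of M0 times N^(-(1+I*xt)/2) times sqrt(Pi/(1 - I*u/(8*Pi)))
    mulfact = ((m0 / abs(m0))
               * cmath.exp(-(1 + 1j * xt) / 2 * math.log(N))
               * cmath.sqrt(math.pi / denom))
    s = sum(cmath.exp(-abs(t) * math.log(n / N) ** 2 / (4 * denom)
                      - (1 + 1j * xt) / 2 * math.log(n / N))
            for n in range(1, nterms + 1))
    return mulfact * s

val = Ht318_new(1000, -10)  # thread reports 1.55003 + 0.00051*I
```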
Yes, that's the one. You could simplify the mulfact by only using M0/abs(M0). Eq. 3.18 does deliver very well as long as x / |t| > 10; it is eqs. 3.20 onwards that are still unstable when establishing the zeros.
It seems mulfact still requires the N^() term. On removing it,
Ht318_new(x,t) = {
xt = x-Pi*abs(t)/4;
N = sqrt(xt/(4*Pi));
u = 4*Pi*abs(t)/xt;
m0val = M0((1+I*xt)/2);
mulfact = (m0val/abs(m0val));
A=mulfact*sum(n=1,1000,exp(-abs(t)*(log(n/N))^2/(4*(1-I*u/(8*Pi)))-(1+I*xt)/2*log(n/N)));
return(A);
}
Ht318_new(1000,-10) -> -2.54492 + 0.56468*I
the imaginary term becomes significant, unlike in the previous version. The root finder also gives different and a lot more zeros:
-10.00000000, 902.09670707
-10.00000000, 904.48290536
-10.00000000, 909.25164595
-10.00000000, 911.63424081
-10.00000000, 914.01563999
..
..
Also, while dealing with large x, it may be useful to change the number of terms in the sum dynamically (e.g. sqrt(x) instead of 1000):
Ht318_new2(x,t) = {
xt = x-Pi*abs(t)/4;
N = sqrt(xt/(4*Pi));
u = 4*Pi*abs(t)/xt;
m0val = M0((1+I*xt)/2);
mulfact = (m0val/abs(m0val))*N^(-(1+I*xt)/2)*sqrt(Pi/(1-I*u/(8*Pi)));
A=mulfact*sum(n=1,floor(sqrt(x)),exp(-abs(t)*(log(n/N))^2/(4*(1-I*u/(8*Pi)))-(1+I*xt)/2*log(n/N)));
return(A);
}
For example, check the behavior of the new vs new2 functions at x=10^7 or higher.
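A quick sanity check on the term-count heuristic (Python; the sample x values are illustrative): the sum needs to cover the peak region around N = sqrt(xt/(4*Pi)), and taking sqrt(x) terms always reaches past N by a factor of about sqrt(4*Pi) ~ 3.5, whereas a fixed 1000-term sum falls short of N once x exceeds roughly 4*Pi*10^6.

```python
import math

def N_of(x, t=-10):
    # Location of the peak region of the sum: N = sqrt(xt / (4*Pi))
    xt = x - math.pi * abs(t) / 4
    return math.sqrt(xt / (4 * math.pi))

# Dynamic cutoff sqrt(x) vs the location N of the peak region:
cutoffs = {x: (math.isqrt(x), N_of(x)) for x in (10**6, 10**7, 10**8)}
# At x = 10^8, N ~ 2821, so a fixed 1000-term sum misses the peak entirely,
# while the dynamic cutoff of 10000 terms still covers it comfortably.
```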
That is strange. I am currently running x=10000...11000, t=-1800...-10 in steps of 10, with the code below (at realprecision=40) and don't have issues with the root finding. Note that I already use sqrt(x) and count to 100 rather than 1000 (could that explain the difference?):
EDIT 1: Oops. You are correct. I use log(n) in the last part and therefore have implicitly done the N^() multiplication!
Ht318(x,t) = {
xt = x-Pi*abs(t)/4;
N = sqrt(xt/(4*Pi));
u = 4*Pi*abs(t)/xt;
mulfact = exp(-Pi^2*abs(t)/64)*(M0((1+I*xt)/2))*N^(-0*(1+I*xt)/2)*sqrt(Pi);
A=M0((1+I*xt)/2)/abs(M0((1+I*xt)/2))*sum(n=1,100,exp(-abs(t)*(log(n/N))^2/(4*(1-I*u/(8*Pi)))-(1+I*xt)/2*log(n)));
return(A);
}
EDIT 2: If I have done the math correctly, moving from 3.16 to 3.17 just gives log(n) as the last log in 3.17. The non-zero multiplicative factor N^() is then brought in for the later steps. However, when I skip this multiplication and just continue with the last log(n) in 3.20, it does produce the correct zeros (this works all the way to 3.23):
Ht320(x,t) = {
xt = x-Pi*abs(t)/4;
N = sqrt(xt/(4*Pi));
u = 4*Pi*abs(t)/xt;
A = M0((1+I*xt)/2)/abs(M0((1+I*xt)/2))*sum(n=1,100,exp(-(abs(t)*(n-N)^2)/(4*N^2*(1-I*u/(8*Pi)))-(1+I*xt)/2*log(n)));
return(A);
}
Great. I was able to match the output with the above Ht318 function after removing the sqrt(Pi/(1-I*u/(8*Pi))) factor. Had kept it since it is complex, but the imaginary part is small.
The Ht320 function is giving the correct zeros in the test range, although the imaginary part here is not insignificant, which may require caution.
Ht320(1000,-10) -> 0.69424 - 0.20181*I
Here is a plot of both eq 3.18 and Ht-actual in the range: x=10000..15000, t=-3000...-10 (step 10). The brownish color indicates the surprisingly good fit (< 1 difference) even at t=-3000 and using n=1...100.
Normalized version:
Great. Let me know if you want me to evaluate a particular x range. Currently planning to check an x range at a much greater height, say near 10^8.
The evaluation of 3.18 is very fast, even in pari/gp (have not yet built it in ARB). To get some good visuals of the patterns at 10^8, you probably could pick a relatively wide x-range like 10^8...10^8 + 100000 since the t-value has to be substantially negative (I guess < -10000) to capture the lowest peaks and to clearly identify the straight lines.
Thanks for the plot! It does strongly look like the normalised zeroes are rapidly becoming periodic in the N variable with period 1. I'll have another crack at trying for an asymptotic expansion, maybe dropping fewer terms than what I initially tried on the wiki. I probably won't get a theta function any more, but hopefully it will be something still computable.
Added later: actually, the following numerics would be helpful:
To what extent does the above picture change if one replaces the (1+I*xt)/2 factor in 3.18 with just I*xt/2? (I feel like this should be relatively negligible.)
After doing this, to what extent does the above picture change if one replaces the second factor of log(n/N) in 3.18 with (n-N)/N - (n-N)^2 / 2N^2? (This change should be slightly less negligible, but should become more accurate as N gets large, I think.)
Finally, what happens when one replaces the first log(n/N) factor (the one being squared) with (n-N)/N? This is of course 3.20. The numerics I did suggest that it is this change which is causing most of the deviation, but I'd like to confirm this.
Here are four pictures with axes like in Rudolph's normalized plot above. These have only one evaluation per pixel so some roots might be missed, and only 1000 summation terms were taken per evaluation. I'm pretty sure the first picture is correct but the rest are relatively untested so they might have typos. The same multiplier was used for all four plots.
The first and second pictures are indeed almost identical, but both of the approximations of the log factors appear to introduce significant deviation in the third and fourth pictures.
3.18 with multiplier:
...and (1+I*xt)/2 -> I*xt/2:
...and second log(n/N) -> (n-N)/N - (n-N)^2 / 2N^2:
...and first log(n/N) -> (n-N)/N:
Wow, those last two approximations are significantly worse than I expected, with no significant sign of improvement as N goes to infinity. Will have to rethink how to make a usable approximation.
Could you try changing 1+I*xt to I*xt and then changing the second log(n/N) to (n-N)/N - (n-N)^2/2N^2 + (n-N)^3/3N^3? I guess what must be happening is that the convergence of the Taylor series for log(1+x) is a lot worse than I expected.
Using the third order expansion of the second log fixes both of the last two approximations.
3.18 with multiplier, changing (1+I*xt)/2 to I*xt/2, and changing the second log(n/N) to (n-N)/N - (n-N)^2 / 2N^2 + (n-N)^3/3N^3:
...and also changing the first log(n/N) to (n-N)/N:
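Why the third-order term rescues the expansion is visible even in a one-point check (Python; the sample value y = 0.5, i.e. n = 1.5*N, is illustrative): each extra term of the log(1+y) series cuts the error substantially at moderate y.

```python
import math

# Taylor truncations of log(1 + y) with y = (n - N)/N, at y = 0.5 (n = 1.5*N)
y = 0.5
exact = math.log(1 + y)             # 0.4055 (4 s.f.)
first = y                           # 0.5
second = y - y**2 / 2               # 0.375
third = y - y**2 / 2 + y**3 / 3     # 0.4167 (4 s.f.)
# The error shrinks from ~0.095 (first order) to ~0.030 (second order)
# to ~0.011 (third order) at this y.
```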
Ah! Now we're getting somewhere. Many thanks!
The next approximation I want to make (on top of all the previous ones) is to separate out the exp( I*xt/2 * (n-N)^3/3N^3 ) term and replace it with (1 + I*xt/2 * (n-N)^3/3N^3). That is to say, exactly the same expression as (3.20) except that the summand is multiplied by (1 + I*xt/2 * (n-N)^3/3N^3). Could you make a similar plot for this approximation also? In the meantime I will try to repeat the calculations from (3.20) onwards with this new correction term and see what comes out.
In this picture I've separated out the exp( -I*xt/2 * (n-N)^3/3N^3 ) term and replaced it with (1 - I*xt/2 * (n-N)^3/3N^3).
Here's the result of taking one more term of the exp approximation, separating out exp( -I*xt/2 * (n-N)^3/3N^3 ) and replacing it with (1 - I*xt/2 * (n-N)^3/3N^3 + (I*xt/2 * (n-N)^3/3N^3)^2/2):
Thanks for this! I guess the numerics are telling us that it's not a good idea to Taylor expand this particular exponential factor. (It's interesting that the 1-periodicity survives even in these lousy approximations, though. I'm also slightly curious to see how well each of these approximations maintains the real-valuedness of $H_t(x)$, though this is somewhat secondary.)
I'll start writing a revised version of the wiki computations in which I keep this exponential phase factor. The Theta function is now replaced by something a bit weirder - a series of Airy functions - but perhaps that's simply what the answer for the asymptotic behaviour is.
UPDATE: I've got a first draft of this at
I've now kept the multiplier rather than discarded it as in the previous version (in particular, there was a factor of 1/(8 sqrt(Pi)) that was missing in previous code). The calculation is still not at a final stage because I haven't yet figured out how best to evaluate the Airy type integral that I end up with, but perhaps first one should check that these formulae are consistent with each other and still producing essentially the same pattern of zeros that we are seeing above.
I looked up generalised Airy functions but https://dlmf.nist.gov/ is offline because the federal government of the USA is currently shut down.
Have tried the second version in pari/gp, and at 1.20 the value already deviates by a factor of 3 (for x=1000, t=-10). The roots also become incorrect at 1.20. Have not gotten any good value for 1.24.
Could an issue be that the Taylor expansion for log(n/N) (i.e. 1.19) is only valid for |-1 + n/N| < 1, whilst n becomes much larger than N?
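This convergence constraint can be checked in a few lines (a Python sketch; the sample values are illustrative). The series for log(1+y), with y = n/N - 1, converges only for |y| < 1; once n > 2N the terms grow and the partial sums blow up instead of settling:

```python
import math

def log_taylor(y, k):
    # k-term partial sum of log(1+y) = y - y^2/2 + y^3/3 - ...
    return sum((-1) ** (j + 1) * y ** j / j for j in range(1, k + 1))

inside = log_taylor(0.5, 20)   # n = 1.5*N: converges to log(1.5) ~ 0.405
outside = log_taylor(1.5, 40)  # n = 2.5*N: terms grow, partial sums diverge
```

That said, the Gaussian damping factor should be suppressing exactly those far-from-N terms, which is the point raised in the next comment.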
default(realprecision, 40);
Xi(s)=return((s/2)*(s-1)*Pi^(-s/2)*gamma(s/2)*zeta(s));
M0(s)=return(s*(s-1)/2*Pi^(-s/2)*sqrt(2*Pi)*exp((s/2-1/2)*(log(s/2))-s/2));
Ht11(x,t) = {
A=1/(8*sqrt(Pi))*intnum(v=-8,8,Xi((1+I*x)/2+I*sqrt(abs(t))*v)*exp(-v^2),1);
return(A);
}
Ht116(x,t) = {
xt = x-Pi*abs(t)/4;
N = sqrt(xt/(4*Pi));
u = 4*Pi*abs(t)/xt;
mulfact = exp(Pi^2*t/64)*(M0((1+I*xt)/2))*N^(-(1+I*xt)/2)/(8*sqrt(1-I*u/(8*Pi)));
A=mulfact*sum(n=1,1000,exp(-(abs(t)*log(n/N)^2)/(4*(1-I*u/(8*Pi)))-(1+I*xt)/2*log(n/N)));
return(A);
}
Ht118(x,t) = {
xt = x-Pi*abs(t)/4;
N = sqrt(xt/(4*Pi));
u = 4*Pi*abs(t)/xt;
mulfact = exp(Pi^2*t/64)*(M0((1+I*xt)/2))*N^(-(1+I*xt)/2)/(8*sqrt(1-I*u/(8*Pi)));
A=mulfact*sum(n=1,1000,exp(-u*(n-N)^2/(4*(1-I*u/(8*Pi)))-2*Pi*I*N^2*log(n/N)));
return(A);
}
Ht120(x,t) = {
xt = x-Pi*abs(t)/4;
N = sqrt(xt/(4*Pi));
u = 4*Pi*abs(t)/xt;
mulfact = exp(Pi^2*t/64)*(M0((1+I*xt)/2))*N^(-(1+I*xt)/2)/(8*sqrt(1-I*u/(8*Pi)));
A=mulfact*sum(n=1,1000,exp(-u*(n-N)^2/(4*(1-I*u/(8*Pi)))-2*Pi*I*N*(n-N)+Pi*I*(n-N)^2-2*Pi*I/(3*N)*(n-N)^3));
return(A);
}
Ht124(x,t) = {
xt = x-Pi*abs(t)/4;
N = sqrt(xt/(4*Pi));
u = 4*Pi*abs(t)/xt;
mulfact = exp(Pi^2*t/64)*(M0((1+I*xt)/2))*N^(-(1+I*xt)/2)/(8*sqrt(1-I*u/(8*Pi)));
A=mulfact*sum(n=1,1000,exp(-(2*Pi*I*(n-N)^2)/(1-I*u/(8*Pi))+Pi*I*n-2*Pi*I/(3*N)*(n-N)^3));
return(A);
}
print("Ht11 : ", Ht11(1000, -10));
print("Ht116: ", Ht116(1000, -10));
print("Ht118: ", Ht118(1000, -10));
print("Ht120: ", Ht120(1000, -10));
print("Ht124: ", Ht124(1000, -10));
xs = 900; xe = 1000;
ts = -10; te = -9;
f=Ht120;
forstep(t=ts, te, 2, {
xc = xs;
while(xc <= xe,
if(real(f(xc,t))*real(f(xc+1,t)) < 0, root=solve(a=xc, xc+1, real(f(a,t))); printf("%3.8f, %3.8f\n",t, root));
xc=xc+1);
})
Ht11 : 8.473323058767420988462095093867083351079 E-167 + 2.4784166626425366748 E-222*I
Ht116: 8.464553913428080983922546057446293276061 E-167 + 2.831092360284232422609199398263948176428 E-170*I
Ht118: 7.555897718962981045167509895563492320651 E-167 - 8.678666255397601416299821448091805557430 E-168*I
Ht120: 2.571676382003351323029996737756072336631 E-167 - 2.171715290532354466079839597858283138264 E-167*I
Ht124: -1.397262230331464853130873538396745824946 E13341 + 4.141701909039241843563469909225129565188 E13341*I
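The forstep/solve loop above can be mirrored in plain Python with a scan-and-bisect routine. Below is a self-contained sketch exercised on math.sin (standing in for real(f(x,t)) so the example needs no zeta machinery); it recovers the root at pi:

```python
import math

def find_roots(f, lo, hi, step=1.0, tol=1e-10):
    """Scan [lo, hi] in fixed steps for sign changes of f, then bisect each
    bracketing interval down to tol (mirrors the pari/gp forstep + solve)."""
    roots = []
    x = lo
    while x <= hi - step:
        a, b = x, x + step
        if f(a) * f(b) < 0:
            while b - a > tol:
                m = (a + b) / 2
                if f(a) * f(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append((a + b) / 2)
        x += step
    return roots

roots = find_roots(math.sin, 3.0, 4.0)  # sin changes sign once here, at pi
```

Note that this inherits the sign-change caveat discussed earlier in the thread: applied to the real part of a complex-valued sum without phase normalisation, it will report spurious roots.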
Hmm, this doesn't quite mesh with what p15 was reporting, but perhaps this has to do with a different range of x and t. In principle the regions where n is far from N should not be significant, because the first term exp( -u*(n-N)^2 / (4*(1-I*u/(8*Pi))) ) should make those terms small regardless of what is going on with the imaginary part of the exponent.
Do things get better for (1.20) if one adds in the next term in the Taylor expansion, namely -2pi i / (4*N^2) * (n-N)^4?
Something is also weird with the discrepancy between (1.20) and (1.24) because these two expressions should be identical. Could you also try evaluating (1.23) to try to pin this down? Thanks! EDIT: Ah, one problem at least is that there was a multiplier of exp( - pi i N^2 ) missing, which I have now restored, though this doesn't explain the enormous magnitude.
Yep, the fourth expansion term improves the result quite a bit and gets much closer to the 1.18 outcomes (the 4th term should be added, not subtracted, since we multiply with -2*Pi*I*N^2): 7.246619487599294335775800802739228862158 E-167 + 5.245125732427255834072661040396376699364 E-169*I
Code used:
Ht1201(x,t) = {
xt = x-Pi*abs(t)/4;
N = sqrt(xt/(4*Pi));
u = 4*Pi*abs(t)/xt;
mulfact = exp(Pi^2*t/64)*(M0((1+I*xt)/2))*N^(-(1+I*xt)/2)/(8*sqrt(1-I*u/(8*Pi)));
A=mulfact*sum(n=1,1000,exp(-u*(n-N)^2/(4*(1-I*u/(8*Pi)))-2*Pi*I*N*(n-N)+Pi*I*(n-N)^2-2*Pi*I/(3*N)*(n-N)^3+2*Pi*I/(4*N^2)*(n-N)^4));
return(A);
}
EDIT: 1.23 seems OK and 1.24 is in trouble (added the missing factor in both equations). Ht123: 3.096798052083977107893549476610419878650 E-167 - 1.318980114139239301648195923052131756021 E-167*I; Ht124: 1.997838196984421999860456640133950665489 E13341 - 3.887760149793219136215906868124042515915 E13341*I
Code used:
Ht123(x,t) = {
xt = x-Pi*abs(t)/4;
N = sqrt(xt/(4*Pi));
u = 4*Pi*abs(t)/xt;
mulfact = exp(Pi^2*t/64)*(M0((1+I*xt)/2))*N^(-(1+I*xt)/2)/(8*sqrt(1-I*u/(8*Pi)))*exp(-Pi*I*N^2);
A=mulfact*sum(n=1,1000,exp(-(u*(n-N)^2)/(4*(1-I*u/(8*Pi)))-Pi*I*n^2+2*Pi*I*(n-N)^2-2*Pi*I/(3*N)*(n-N)^3));
return(A);
}
Ht124(x,t) = {
xt = x-Pi*abs(t)/4;
N = sqrt(xt/(4*Pi));
u = 4*Pi*abs(t)/xt;
mulfact = exp(Pi^2*t/64)*(M0((1+I*xt)/2))*N^(-(1+I*xt)/2)/(8*sqrt(1-I*u/(8*Pi)))*exp(-Pi*I*N^2);
A=mulfact*sum(n=1,1000,exp(-(2*Pi*I*(n-N)^2)/(1-I*u/(8*Pi))+Pi*I*n-2*Pi*I/(3*N)*(n-N)^3));
return(A);
}
In checking my numbers against the ones posted above I noticed that (6) in the writeup has a 1/8 coefficient that is missing in Rudolph's definition of M0.
I think that the minus sign in 1.24, just before the 2pi i, should be removed. This gives an exact match with 1.23.
When I remove the 1/8 factor from M0 in my code I reproduce the x=1000, t=-10 values above for 1.16, 1.18, and 1.20. The 1.20 plot still looks OK to me, so I'm not able to see that the roots become incorrect at 1.20.
Great! I am actually not sure whether the 1/8 factor in M0 should be there or not, now that we have it explicitly in the integral (1.1). In any case, it should not matter much for finding the zeros.
The issue with the zeros could be related to the root finding method (sign changes) chosen and/or the accuracy of pari/gp compared to ARB.
Thanks for pointing out the sign errors and superfluous factors of 8, I have corrected the code accordingly.
So if I can summarise: the formulae are reasonably accurate up to 1.18. 1.20 is a little bit off but may possibly still have the right behaviour of zeroes. 1.23 and 1.24 agree more or less exactly with 1.20. Adding a fourth order term makes 1.20 (and presumably also 1.23 and 1.24) close to the 1.18 numerics. Does this sound like an accurate description?
If one works with larger values of x and t, does the accuracy of 1.20 (without the fourth order term) improve? It's bugging me a bit that these lower order terms seem so significant, since asymptotically they ought to become negligible.
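One rough way to quantify why the lower order terms refuse to become negligible (a back-of-envelope heuristic of the editor's, not from the thread): the Gaussian factor exp(-u*(n-N)^2/4) confines the sum to |n-N| of order 2/sqrt(u), and over that width the cubic phase 2*Pi*(n-N)^3/(3N) reaches roughly 16*Pi/(3*N*u^(3/2)) radians. In the sampled (x,t) this is still far from small, and it is largest precisely for the case later reported as poor (x=100,000, t=-10), where u is tiny:

```python
import math

def cubic_phase(x, t):
    # Heuristic size (in radians) of the cubic term 2*Pi*(n-N)^3/(3*N)
    # evaluated at the Gaussian width |n - N| ~ 2/sqrt(u).
    xt = x - math.pi * abs(t) / 4
    N = math.sqrt(xt / (4 * math.pi))
    u = 4 * math.pi * abs(t) / xt
    return 16 * math.pi / (3 * N * u ** 1.5)

phase_fine = cubic_phase(100000, -1000)  # case reported as fine: a few rad
phase_poor = cubic_phase(100000, -10)    # case reported as poor: thousands
```

By this estimate the cubic phase only becomes negligible when N*u^(3/2) is large, i.e. for much larger |t| at fixed u, which is consistent with the observation below that |t| should not be chosen too small.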
That summary is consistent with what I've seen, at least up to 1.20 including both the third and fourth order terms. I haven't looked beyond that.
Here's what I get for larger values of x and t, where 1.20.1 uses the fourth order approximation:
x: 10000
t: -100
1.16: 4.855049924e-1694 + i*1.631850012e-1698
1.17: 4.86363506e-1694 + i*1.10709574e-1695
1.18: 4.795716218e-1694 - i*1.930340726e-1695
1.20: 3.374988635e-1694 - i*1.185270831e-1694
1.20.1: 4.30139383e-1694 - i*4.165799177e-1695
Your summary looks correct to me. Here are some numbers I got. It is quite a 'mixed bag', with sometimes even incorrect signs. I do get an exact match with P15's numbers at t=-100. Curious what P15 will get for 1.23 & 1.24 (I used the 3rd term only).
EDIT: had sign error in the multiplicative factor for 1.23 and 1.24. Now fixed.
x=1,000, t=-10 (n=1000)
Ht11 : 8.4733230587674209885 E-167 + 1.2519820674867721204 E-203*I
Ht116: 8.4645539134280809839 E-167 + 2.8310923602842324226 E-170*I
Ht118: 7.5558977189629810452 E-167 - 8.6786662553976014163 E-168*I
Ht120: 2.5716763820033513230 E-167 - 2.1717152905323544661 E-167*I
Ht120: 7.2466194875992943358 E-167 + 5.2451257324272558341 E-169*I + 4th term
Ht123: 2.5716763820033513230 E-167 - 2.1717152905323544661 E-167*I
Ht124: 2.5716763820033513230 E-167 - 2.1717152905323544661 E-167*I
x=10,000, t=-100 (n=10000)
Ht11 : 4.8589429343605458574 E-1694 + 3.0493883590774307505 E-1729*I
Ht116: 4.8550499235295670320 E-1694 + 1.6318500115797969898 E-1698*I
Ht118: 4.7957162181666857699 E-1694 - 1.9303407259828862543 E-1695*I
Ht120: 3.3749886345563040492 E-1694 - 1.1852708307087768473 E-1694*I
Ht120: 4.3013938297077011689 E-1694 - 4.1657991767869502915 E-1695*I + 4th term
Ht123: 3.3749886345563040492 E-1694 - 1.1852708307087768473 E-1694*I
Ht124: 3.3749886345563040492 E-1694 - 1.1852708307087768473 E-1694*I
x=100,000, t=-10 (n=1000)
Ht11 : -5.1782772744106289584 E-17048 - 3.865936882131069215 E-17083*I
Ht116: -5.1779786130876345548 E-17048 - 1.7259255748943169503 E-17053*I
Ht118: -4.2854647716662700816 E-17048 + 2.9719033828154678941 E-17048*I
Ht120: 1.1975275652615447323 E-17047 - 3.7589859520381745860 E-17048*I
Ht120: 3.1070093194630254766 E-17048 - 6.9742681173932584171 E-17048*I + 4th term
Ht123: 1.1975275652615447323 E-17047 - 3.7589859520381745860 E-17048*I
Ht124: 1.1975275652615447323 E-17047 - 3.7589859520381745860 E-17048*I
x=100,000, t=-1000 (n=10000)
Ht11 : 1.9101901031302031165 E-16983 - 3.353990484195642267 E-17016*I
Ht116: 1.8759019943616834416 E-16983 - 2.8598599727273449902 E-16988*I
Ht118: 1.6605472168615891116 E-16983 - 3.9833683590464675730 E-16983*I
Ht120: 3.9129175191993094057 E-16983 + 6.1715009890766903038 E-16984*I
Ht120: 1.3024309762304852586 E-16984 - 3.7517818712851096311 E-16983*I + 4th term
Ht123: 3.9129175191993094057 E-16983 + 6.1715009890766903038 E-16984*I
Ht124: 3.9129175191993094057 E-16983 + 6.1715009890766903038 E-16984*I
Aren't 1.23 and 1.24 supposed to be just algebraic manipulation without making further approximations?
There is a new multiplication factor being introduced and I had the sign in the exponent wrong. Will edit my previous post with the correct results for 1.23 and 1.24 (done!).
Here are three more data sets that show how the rate of convergence of 1.20 towards 1.18 depends on the order of the approximation and the choice of parameters. After adding more terms, I now do also get the correct zeros. It seems |t| should not be chosen too small.
Looks fine.
x=10,000, t=-100 (n=10000)
Ht118: 4.7957162181666857699 E-1694 - 1.9303407259828862543 E-1695*I
Ht120: 3.3749886345563040492 E-1694 - 1.1852708307087768473 E-1694*I
Ht120: 4.3013938297077011689 E-1694 - 4.1657991767869502915 E-1695*I + 4th term
Ht120: 4.9555758288595543024 E-1694 - 2.7488094223870394643 E-1695*I + 5th term
Ht120: 4.8439886882119997079 E-1694 - 1.9628572090957466933 E-1695*I + 6th term
Ht120: 4.8434052253203673751 E-1694 - 1.9291330187064483772 E-1695*I + 7th term
Ht120: 4.8098359148144270163 E-1694 - 1.8813896495574817026 E-1695*I + 8th term
Ht120: 4.8019110214756990356 E-1694 - 1.9109129318386918550 E-1695*I + 9th term
Looks fine.
x=100,000, t=-1000 (n=10000)
Ht118: 1.6605472168615891116 E-16983 - 3.9833683590464675730 E-16983*I
Ht120: 3.9129175191993094057 E-16983 + 6.1715009890766903038 E-16984*I
Ht120: 1.3024309762304852586 E-16984 - 3.7517818712851096311 E-16983*I + 4th term
Ht120: 1.6734901681272645291 E-16983 - 3.8704642526293405195 E-16983*I + 5th term
Ht120: 1.6495331684242613927 E-16983 - 3.9807128150901997572 E-16983*I + 6th term
Ht120: 1.6607907481112338454 E-16983 - 3.9819846131766055395 E-16983*I + 7th term
Ht120: 1.6604736342360083970 E-16983 - 3.9833419487895484291 E-16983*I + 8th term
Ht120: 1.6605510277651525266 E-16983 - 3.9833477589456412504 E-16983*I + 9th term
Looks poor:
x=100,000, t=-10 (n=10000)
Ht118: -4.2854647716662700816 E-17048 + 2.9719033828154678941 E-17048*I
Ht120: 1.1975275652615447323 E-17047 - 3.7589859520381745860 E-17048*I
Ht120: 3.1070093194630254766 E-17048 - 6.9742681173932584171 E-17048*I + 4th term
Ht120: 2.1637808085271054195 E-17047 - 2.4001330918672547944 E-17049*I + 5th term
Ht120: -5.1324059923718988921 E-17048 + 1.7585812848531637960 E-17047*I + 6th term
Ht120: 6.7780461923959073805 E-17048 + 6.1615119931271152256 E-17048*I + 7th term
Ht120: 2.5187661316629112038 E-17048 + 1.2361110109624730039 E-17048*I + 8th term
Ht120: -2.1584397970566555675 E-17048 + 7.7026207135651035988 E-17049*I + 9th term
Rudolph if you plot roots of 1.20 like you did here for 3.18, does it look qualitatively different?
In this paper about evaluating the Riemann zeta function, Hiary discusses cubic exponential sums and in particular using Poisson summation and evaluating the resulting Airy integrals. I don't know how relevant it is for this project but the 'second attempt' wiki page also features some of those words.
Given how poor the Taylor expansion looks, I wrote on the wiki some calculations on what would happen if one did not perform a Taylor expansion. One then rewrites (1.18) to (1.28) (to reduce some oscillation in phase) and applies Poisson summation to split H_t into the sum of "harmonics" f_m(u,N) (with m ranging over half-integers), which should be thought of as a slightly nonlinear Fourier decomposition in the N variable (trying to capture here the observed phenomenon of 1-periodicity in N). The functions f_m(u,N) should, I think, decay at a gaussian rate once m gets large (when I did the naive Taylor expansion, they were gaussians, leading to the theta function in the first attempt). I'd be curious to see how accurately these Fourier expansions capture H_t and its zeroes, and how the f_m(u,N) behave in N for fixed m,u (e.g. m = +-1/2, +-3/2, and u fixed to some value where something interesting is happening; e.g. u=1 or u=2 seems about right).
p15: Thanks for the link to Hiary's paper. It looks like he is doing a fairly similar thing to what is going on here, namely trying to compute the Riemann-Siegel series on a shortish range of n via Taylor expansion and obtaining cubic exponential sums as a consequence. It looks like the formulae there get quite messy though.
I feel a bit like we're navigating some sort of maze (with the starting point being the original form of H_t(x), and the final destination being an expression where the asymptotic behaviour is manifestly obvious) where there are many ways to make a mis-step that leads to numerically terrible values. I feel like we are at least halfway through that maze though...
The integration library I'm using can't deal with evaluating logarithms at (-infinity, 0] because of how it bounds the error of a quadrature formula.
Ah, right. One can try truncating the X integral to [1, infinity) for (1.29)-(1.32) (and to [1-N, infinity) for (1.33)).
Rudolph if you plot roots of 1.20 like you did here for 3.18, does it look qualitatively different?
The roots for 1.20 look fine for larger negative t (e.g. -3000), however I do get noise for lower negative t (e.g. -10). Haven't made the full plot, but checked and compared various ranges.
Have now also tried the latest approach on Wiki and although 1.28 works perfectly fine, I struggle to get 1.29 working properly (even when removing the log(X/N) issue for X = 0). I do occasionally get closer to the correct values, however it doesn't get sufficiently stable. Hope P15 has more success with these equations in ARB.
I computed the zeros of our 'good old' AB_eff approximation for negative t (A+B terms only, not normalised by B0) and overlaid these with the actual zeros (x=10k..15k, t=-3000..-10). A noisy picture emerges; however, the real zeros do very clearly converge to the straight lines. In all the approaches we have tried so far, these straight lines came out pretty clearly as well. Any clue on what it is that causes this strong effect?
Thanks for this!
For the A+B approximation I believe it is the final two terms of the A and B series that dominate (and the C term is also of comparable size, which is why dropping it creates such a problem). One can try omitting all but the final terms in the A and B series to see if one gets a similar pattern.
In principle the passage from (1.28) to (1.29) is an exact formula, but it could be that the integrals are decaying very slowly in m. Do you have any numerics on how the various integrals behave in the m aspect? Also the range of integration needs to be large enough to contain the region around N as this is where the integrand should be largest.
That must be it! Have only used sum(n=N-1,N,..) for both the A and B series and do get exactly the same zeros. Quite an elegant outcome actually; the complex zeros are dominated by the first two terms and the real zeros by the last two :)
Will work further on the new integrals later today and share the numerics.
Here's a first draft of numerical values of the new sums. I wouldn't be surprised if the code still has typos like sign errors or missing factors. Also the new formulas are much slower to evaluate because of the nested integral and sum. The upper integration limit is 100 or 200, and the new sums have between 100 and 600 terms.
x: 1000
t: -10
N: 8.885520331
u: 0.1266584795
1.16: (8.464553913e-167 + 2.83109236e-170j)
1.17: (8.598499779e-167 + 2.627479873e-168j)
1.18: (7.555897719e-167 - 8.678666255e-168j)
1.20: (2.571676382e-167 - 2.171715291e-167j)
1.20.1: (7.246619488e-167 + 5.245125732e-169j)
1.24: (2.571676382e-167 - 2.171715291e-167j)
1.29: (7.775766132e-167 - 9.351370373e-168j)
1.31: (2.088718849e-167 - 6.162074531e-167j)
1.33: (1.503883818e-166 - 6.573533723e-167j)
Hmm, the Poisson summation isn't working well at all :-(.
OK, here's something else to try. We can write everything in terms of the u, N variables by the formulae
xt = 4*Pi*N^2, t = -u*N^2
Using (1.18) (or (3.18)) as a reasonably accurate formula for H_t we have plotted the zeroes roughly in the range u in [0,4] and N in [26,34] back in https://github.com/km-git-acc/dbn_upper_bound/issues/126#issuecomment-450627705 , and observed an apparent periodicity in N. However, in this regime the Taylor expansion is poor.
What I'd like to see is to advance N to a larger range, keeping u fixed (e.g. u in [0,4], N in [100, 110], [200, 210], etc.) and see if this periodic pattern continues. (The range of n summation may have to increase when doing so, as n has to capture the region around N.) If it does, it should eventually reach the point where the Taylor expansions start becoming accurate again, and if we go far enough even the third order term should not be needed. Then the theta function type asymptotics I had previously derived should start working (I've now listed it as (1.39) on http://michaelnielsen.org/polymath1/index.php?title=Second_attempt_at_computing_H_t(x)_for_negative_t ). On the other hand when we tested the theta function we got a different pattern of zeroes. So something has to give... maybe the periodic pattern we are observing actually starts changing as we move to much larger values of N (holding u fixed).
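The change of variables can be packaged as a small round-trip helper (a Python sketch following the formulas above: xt = 4*Pi*N^2, t = -u*N^2, and x = xt + Pi*|t|/4). It reproduces the N and u values quoted earlier in the thread for x=1000, t=-10 (N ~ 8.885520331, u ~ 0.1266584795):

```python
import math

def to_uN(x, t):
    # (x, t) -> (u, N): N = sqrt(xt/(4*Pi)), u = 4*Pi*|t|/xt
    xt = x - math.pi * abs(t) / 4
    N = math.sqrt(xt / (4 * math.pi))
    u = 4 * math.pi * abs(t) / xt
    return u, N

def from_uN(u, N):
    # Inverse map: xt = 4*Pi*N^2, t = -u*N^2 (negative-t regime), then
    # recover x from xt = x - Pi*|t|/4
    xt = 4 * math.pi * N ** 2
    t = -u * N ** 2
    x = xt + math.pi * abs(t) / 4
    return x, t

u, N = to_uN(1000, -10)
x, t = from_uN(u, N)  # round-trips back to (1000, -10) up to rounding
```

This makes it easy to generate the requested grids u in [0,4], N in [100,110], [200,210], etc., and feed the corresponding (x,t) pairs to the 3.18 evaluator.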
Grateful for any help to properly evaluate the sequence of equations on this Wiki page: http://michaelnielsen.org/polymath1/index.php?title=Polymath15_test_problem#Large_negative_values_of_.5Bmath.5Dt.5B.2Fmath.5D
Below is the pari/gp code I have used so far. At the end of the code is a simple rootfinder that should (approximately) find this sequence of roots:
however, from eq. 3.18 onwards it finds many more roots (also at different t, e.g. -100). Have experimented with the integral limits, the integral's fixed parameter, the real precision settings of pari and the size of the sums, but no success yet.
Have re-injected some of the non-zero multiplication factors, which can be switched on/off if needed.
One thing I noticed is that after eq. 3.07 the function doesn't seem to be real anymore (see data at the end). Not sure if this should matter though, since I use the real value of Ht for root finding.