km-git-acc / dbn_upper_bound

Computational effort to upper bound the de Bruijn-Newman constant as part of a Polymath project

Distributed Processing List #31

Open dhjpolymath opened 6 years ago

dhjpolymath commented 6 years ago

A job has been started with the new code (sample_afe_calc) on 14 processors (2.4-2.6 GHz) for the values t=.22,.40,.41,.42,.43,.44.

km-git-acc commented 6 years ago

@dhjpolymath, @sujitnair
Wait, sample_afe_calc uses Ht_AFE, which is the older equation.

There is a newer equation which Terry recently derived (implemented as Ht_AFE_ABC) and which is much more accurate; that is the one that should be used. It was part of the merge request, but given the confusion I am merging it into master now.

Please use Ht_AFE_ABC for large scale computations.

km-git-acc commented 6 years ago

@dhjpolymath I just merged the pending pull request. Please check the file sample_afe_abc_calc.py.

dhjpolymath commented 6 years ago

@km-git-acc The t=.22 computations have been canceled. In two hours, about 6000 zeros were calculated for t=.40,.41,.42,.43,.44 with one machine (a 2.6 GHz, 12-processor Xeon system). t=.40,.41,.42,.43,.44 will now be computed using the sample_afe_abc_calc.py file.

km-git-acc commented 6 years ago

@dhjpolymath Great. Also, it should already be the case that the ABC function is the only one called from mputility in the abc_calc file (it is called twice: once in the curr_eval line and again in the approx root line), but please double-check to avoid any rework.

km-git-acc commented 6 years ago

@dhjpolymath I just received a message that you were facing some import errors. I downloaded the latest repo to my PC and had some issues as well. It could be how Python recognizes folders, but I'm not sure.

One simple way to solve it was to go into the actual python folder and run the code from there. In that case, the line from dbn_upper_bound.python.mputility import Ht_AFE_ABC has to be replaced with from mputility import Ht_AFE_ABC
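A longer-term workaround (just a sketch, assuming the scripts stay under dbn_upper_bound/python/) would be to prepend the repo root to sys.path at the top of the script, so the package import works from any folder:

    # Prepend the repo root (two levels up from this script) to sys.path so
    # that the 'dbn_upper_bound' package is importable from any working folder.
    import os
    import sys
    sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', '..'))

    from dbn_upper_bound.python.mputility import Ht_AFE_ABC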

Marking @sujitnair as well

sujitnair commented 6 years ago

@dhjpolymath

You need to run the files from the project root folder. So, it should be

python dbn_upper_bound/python/sample_afe_calc.py

km-git-acc commented 6 years ago

https://stackoverflow.com/questions/19972669/python-not-finding-module
https://stackoverflow.com/questions/40683720/python-does-not-recognize-a-module-which-is-set-by-pythonpath
These go into more depth about module importing than I would like (arcane Python stuff), but see if they help as a longer-term fix.

dhjpolymath commented 6 years ago

@km-git-acc @sujitnair The version of mputility.py needed to be updated in the directory. About 1000 zeros have been computed for t=.40,.41,.42,.43,.44 in the last 20 minutes.

km-git-acc commented 6 years ago

@sujitnair Tried again (on Windows): python dbn_upper_bound/python/sample_afe_abc_calc.py

Traceback (most recent call last):
  File "dbn_upper_bound/python/sample_afe_abc_calc.py", line 11, in <module>
    from dbn_upper_bound.python.mputility import Ht_AFE_ABC
ImportError: No module named 'dbn_upper_bound'

sujitnair commented 6 years ago

@km-git-acc @dhjpolymath

Can you let me know if

python dbn_upper_bound/python/sample_afe_calc.py

is giving any issue?

sujitnair commented 6 years ago

I assume you are inside your virtual environment, right? What is the output of

pip --version

dhjpolymath commented 6 years ago

@sujitnair python dbn_upper_bound/python/sample_afe_calc.py is outdated. The current version is python dbn_upper_bound/python/sample_afe_abc_calc.py.

km-git-acc commented 6 years ago

@sujitnair

as @dhjpolymath mentioned, sample_afe_abc_calc.py is the file to be run, but I ran sample_afe_calc.py too and got similar errors (could be just my system)

python dbn_upper_bound/python/sample_afe_calc.py

Traceback (most recent call last):
  File "dbn_upper_bound/python/sample_afe_calc.py", line 11, in <module>
    from dbn_upper_bound.python.mputility import Ht_AFE
ImportError: No module named 'dbn_upper_bound'

pip --version gives pip 9.0.1 from ...\WinPython-64bit-3.5.3.0Qt5\python-3.5.3.amd64\lib\site-packages (python 3.5)

sujitnair commented 6 years ago

Oops sorry. Yeah, I meant dbn_upper_bound/python/sample_afe_abc_calc.py

This is my setup and works fine for me.

  1. I am inside my python virtual environment. The output of "pip --version" is pip 9.0.1 from /Users/sujitnair/go/src/github.com/km-git-acc/dbn_upper_bound/venv/lib/python3.6/site-packages (python 3.6)
  2. The current folder is the project root folder
  3. The command is "python dbn_upper_bound/python/sample_afe_abc_calc.py"

km-git-acc commented 6 years ago

Also, mine is not a virtual environment, since it's Windows.

sujitnair commented 6 years ago

That could be it then. When you are in the project root folder, can you do python3.6 -m venv venv

This should create a virtual env in the project root folder.

km-git-acc commented 6 years ago

@dhjpolymath you mentioned that 1000 zeroes have been computed for various t after updating mputility. So is the issue sorted from your side for now?

dhjpolymath commented 6 years ago

@km-git-acc yes, it is still running. Updating the mputility.py file manually allowed it to work. It is possible an older version was the cause of issues previously.

sujitnair commented 6 years ago

@km-git-acc https://docs.python.org/3/library/venv.html

km-git-acc commented 6 years ago

I ran python -m venv venv and it worked, but python dbn_upper_bound/python/sample_afe_abc_calc.py gave the same error as earlier. No worries though. I will try to install Linux, and will manage things manually till then.

sujitnair commented 6 years ago

@km-git-acc Once you have venv, you need to get inside the virtual environment using

"source venv/bin/activate"

To check if you are really inside the venv, do a

"pip --version"

To get out of a virtual environment, use

"deactivate"

I am pretty sure it should be fine!
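(One caveat: on Windows the activation script is venv\Scripts\activate rather than venv/bin/activate, so the source command above is Linux/Mac-specific.)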

dhjpolymath commented 6 years ago

Here is a Dropbox link with zeros for t=.41, computed up until the abc file was introduced. Updates will come as the new computations progress. https://www.dropbox.com/s/ndagog6ihu7kys2/Ht_roots_vlargez_0.41.csv?dl=0

dhjpolymath commented 6 years ago

Inputs t=.45,.46,.47 have started being computed. Inputs t=.40,.41,.42,.43,.44 have been computed to ~3000 zeroes so far.

km-git-acc commented 6 years ago

@dhjpolymath @sujitnair There seems to have been a typo in the calculation of the A and B parts in Ht_AFE_ABC. In one of the conditions it should have been t>0 instead of t==0, so only the t=0 calculations for A and B would have been correct. Only the C part would have been computed correctly for t>0 :(

Firstly, apologies, since I should have caught the error. I have updated it in mputility.py. I guess from next time onwards we should go over at least the majorly used functions with a fine comb.
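One cheap "fine comb" guard would be to spot-check the asymptotic formula against a direct evaluation at a few points. A minimal sketch (the Ht_AFE_ABC(z, t) call signature is an assumption, and Ht_direct stands in for whatever slow, trusted evaluator we use, e.g. the integration approach):

    # Spot-check the ABC approximation against a trusted direct evaluator.
    from mpmath import mp
    from dbn_upper_bound.python.mputility import Ht_AFE_ABC

    mp.dps = 30  # high working precision for the comparison

    def spot_check(Ht_direct, t, z_samples, rel_tol=1e-6):
        for z in z_samples:
            approx = Ht_AFE_ABC(z, t)   # assumed signature
            direct = Ht_direct(z, t)    # hypothetical slow evaluator
            rel_err = abs(approx - direct) / abs(direct)
            print(t, z, rel_err)
            assert rel_err < rel_tol, (t, z, rel_err)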

sujitnair commented 6 years ago

@km-git-acc Thanks for letting us know. No worries, these things happen. We are just starting :)

Moving forward,

  1. We need to run the code through pylint before a PR is sent. I personally had a hard time parsing the code, even while keeping Terry's notes open in a parallel window.
  2. At least one person has to give a thumbs up before the merge.

This should go a long way, so let's follow it moving forward.

Cheers!

mariotrevi commented 6 years ago

@sujitnair I'm having trouble importing mp, and I'm in the virtual environment:

Traceback (most recent call last):
  File "dbn_upper_bound/python/sample_afe_abc_calc.py", line 10, in <module>
    from mpmath import mp
ModuleNotFoundError: No module named 'mpmath'

David

dhjpolymath commented 6 years ago

@km-git-acc In about 7 hours, 10,000 zeros were computed for some (seven) values of t, for whatever that info is worth.

sujitnair commented 6 years ago

@mariotrevi

Sorry about that. The python_requirements.txt file needs the following line in it: mpmath==1.0.0

You need to do pip install mpmath to get going.

mariotrevi commented 6 years ago

@sujitnair Thanks !

dhjpolymath commented 6 years ago

Here are some of the files without the most recent update

https://www.dropbox.com/s/6pwvjk1rhuv16ew/Ht_roots_vlargez1_0.4.csv?dl=0
https://www.dropbox.com/s/u9wddy58g8xmmv4/Ht_roots_vlargez1_0.41.csv?dl=0
https://www.dropbox.com/s/1zhy93c0rs7c2ch/Ht_roots_vlargez1_0.42.csv?dl=0
https://www.dropbox.com/s/7bbxoogh818d47q/Ht_roots_vlargez1_0.43.csv?dl=0
https://www.dropbox.com/s/3da8k4i75w250mq/Ht_roots_vlargez1_0.44.csv?dl=0
https://www.dropbox.com/s/5zljyymuh9oo3fb/Ht_roots_vlargez1_0.45.csv?dl=0

dhjpolymath commented 6 years ago

10,000 zeroes should be computed in 7 hours or so for t=.40-.47 using the new code. This is not very close to the million zeroes. The calculations for each t value may need to be broken up into sub-problems, or the code implemented on a more sophisticated system.
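One possible shape for that sub-problem split (a rough sketch only; find_zeros_in_range is a hypothetical worker that would wrap the root-finding loop from sample_afe_abc_calc.py, and the chunk sizes are illustrative):

    from multiprocessing import Pool

    def find_zeros_in_range(args):
        # Hypothetical worker: run the root-finding loop from
        # sample_afe_abc_calc.py over [z_start, z_end] for this t.
        t, z_start, z_end = args
        return (t, z_start, z_end)  # would return the roots found

    if __name__ == '__main__':
        t_values = [0.40, 0.41, 0.42, 0.43, 0.44, 0.45, 0.46, 0.47]
        chunks = [(t, z0, z0 + 10000) for t in t_values for z0 in range(0, 50000, 10000)]
        with Pool() as pool:  # defaults to one process per core
            for done in pool.imap_unordered(find_zeros_in_range, chunks):
                print('finished chunk', done)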

mariotrevi commented 6 years ago

For t=0, my "refined zeros" (using A+B-C and a variant of Newton-Raphson for zero location) are in 6-decimal agreement [= the printing precision of 6 digits] with zeros from sample_afe_abc_calc, based on a spot check. So it's going well!

sujitnair commented 6 years ago

@dhjpolymath No worries on the time taken to get the 10k zeroes. For now, the priority should be to make the code robust and make sure there are no typos or errors. @mariotrevi is also helping us verify the numbers with his code. So far, it is looking good!

We can figure out a way to parallelize soon :)

km-git-acc commented 6 years ago

By the way, I am computing zeroes for t=0.5 with the latest code, and have reached z = 33000. Still some way to go (I will upload once it reaches 50k), but is the Dropbox folder accessible to everyone? How can I upload my file to the same folder?

km-git-acc commented 6 years ago

@mariotrevi Great to have a confirmation. sample_afe_abc_calc and mputility should have worked for t=0 even with the typo, and that seems to be what is happening.

With the typo fixed in mputility.py (this part was updated around 15 hours back), hopefully everything runs fine for t>0 as well.

km-git-acc commented 6 years ago

@dhjpolymath The speed does seem slow compared to the goal of evaluating a million or more zeroes. As @sujitnair mentioned, we can tackle that in various ways in the medium term. But if you want a short-term boost, one way would be to enable something called Cython with Python. I will be trying that out myself.
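(As far as I know, mpmath will also automatically use the faster gmpy backend for its arbitrary-precision arithmetic if gmpy is installed, which might be an even easier first speedup to try.)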

dhjpolymath commented 6 years ago

The remote connection disconnected after about 12 hours, and the computations for t=.40-.47 stopped. The following are the relevant files, as far as they were computed (about ~9000 zeros). https://www.dropbox.com/sh/fcpo617rmdeqlb6/AABGRvHJhbf3QWRQAPSuXd_xa?dl=0

dhjpolymath commented 6 years ago

Dropbox has a 2 GB limit for personal (free) accounts currently.

sujitnair commented 6 years ago

@dhjpolymath

If you pickle your CSV file, what is the file size?

dhjpolymath commented 6 years ago

@sujitnair There are two files that serve as output from the most recent code. One is the zeros file, which for 10,000 zeros and a single value of t is about 500 KB. The eval files, however, are much larger (~100 MB for computations up to 1000 zeros). These might compress to about 40 MB.

With compression (ceteris paribus), a million zeros would give an eval file of about 4 GB. The zero files can fit under the 2 GB Dropbox limit.
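For reference, a minimal way to compress one of the eval files from Python (the filename here is made up; the actual ratio will vary with the data):

    import gzip
    import shutil

    # Stream-compress a large CSV without loading it fully into memory.
    with open('Ht_evals_0.40.csv', 'rb') as src:
        with gzip.open('Ht_evals_0.40.csv.gz', 'wb') as dst:
            shutil.copyfileobj(src, dst)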

km-git-acc commented 6 years ago

Our primary interest is the roots files, but the eval files contain a lot of good information (how H_t behaves in general, not just near the roots), and we should find some way to preserve them for future analysis. I will look into the Google Drive option.

mariotrevi commented 6 years ago

Gram's law (ref.: http://mathworld.wolfram.com/GramsLaw.html) is, or can be, used to detect many or most of the zeros of zeta. Maybe there's something similar for H_t. Anyway, since we're focused on calculating H_t zeroes, we could leave Gram's law for later...

km-git-acc commented 6 years ago

@mariotrevi Interesting. Given that the zero locations of H_t become more regular with T, and finally behave almost as if they were in an arithmetic progression, a modified version of Gram's law should be even more applicable to H_t. I will try this out.

mariotrevi commented 6 years ago

KM,

In the case t = 0, and going back to zeta zeroes, I solved numerically:

(x/(2pi))*log( x/(2pi) )  - x/(2pi) - 1/8 = 0 , and obtained: x ~= 17.8478

According to Mathworld, the first Gram point, g_0, is at:

g_0 ~= 17.8456 , so I was pretty close.

for g_k (approximation), one solves:

(x/(2pi))*log( x/(2pi) )  - x/(2pi) - 1/8 = k for k = 0, 1, 2, etc.

As an example, [g_0 , g_1] brackets "conveniently" the second non-trivial zero of zeta.
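For what it's worth, this approximation is easy to check numerically with mpmath (the initial guess of 18 is my choice):

    from mpmath import mp, pi, log, findroot

    mp.dps = 15
    # Approximate Gram point equation: (x/(2pi))*log(x/(2pi)) - x/(2pi) - 1/8 = k
    gram_approx = lambda x, k=0: (x/(2*pi))*log(x/(2*pi)) - x/(2*pi) - mp.mpf(1)/8 - k
    print(findroot(gram_approx, 18))  # ~17.8478, matching the value above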

References to Mathworld: http://mathworld.wolfram.com/GramPoint.html

and

http://mathworld.wolfram.com/GramsLaw.html

I also had a look at "Rosser's rule", and my take on this is that these rules have all helped to reduce the time necessary to verify RH up to some prescribed height...

One can do a search on, say: Gram's Law and the Rosser Rule zeta

e.g. browse through Brent's paper at:

http://maths-people.anu.edu.au/~brent/pd/rpb047.pdf

David

km-git-acc commented 6 years ago

Thanks. Will read up on this. I was wondering whether a simpler version of this would work at larger heights T. For example, near some height T (say 1 million), we note how much the zero gaps vary, and if the variance is pretty low compared to the average, we do something like this:

Suppose the latest zero we find is 1,000,001. It would mean that if we had used the secant method, or some other method, on 1,000,001 +- (avg zero gap)/2, we would have recovered 1,000,001.

So we just use the secant-like method on contiguous intervals [z_n - avggap/2, z_n + avggap/2], [z_n + avggap/2, z_n + 3*avggap/2], [z_n + 3*avggap/2, z_n + 5*avggap/2], etc., and we would recover z_n, z_(n+1), z_(n+2), ...

We can also vary the expected avggap as T/N_t(T), or even more accurately [T - 0.9T] / [N_t(T) - N_t(0.9T)].

Here it helps a lot that increasing t and increasing T make the zeroes more regularly spaced. One can't absolutely guarantee with this method that all the zeroes will be recovered (what if there is suddenly a large gap?), but near large enough T (and here "large" seems moderate enough to be within computational feasibility), it should capture 99.999999...% of the zeroes.
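A sketch of what that loop could look like (Ht_line is a hypothetical real-valued evaluator of H_t at height z; mpmath's findroot with two starting points runs the secant method):

    from mpmath import findroot

    def zeros_by_expected_gaps(Ht_line, z_start, avg_gap, n_zeros):
        # Walk forward in windows of width avg_gap, expecting one zero per window.
        zeros = []
        center = z_start
        for _ in range(n_zeros):
            lo, hi = center - avg_gap/2, center + avg_gap/2
            # Secant from the window endpoints; a bracketing solver such as
            # 'anderson' would be safer whenever the signs at lo and hi differ.
            z = findroot(Ht_line, (lo, hi), solver='secant')
            zeros.append(z)
            center += avg_gap
        return zeros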

mariotrevi commented 6 years ago

KM,

It's plausible that new heuristics can be found for, say, t >= 0.4, which are analogous to "Gram's Law" (which has exceptions, but is still useful).

Perhaps we're looking for points where |H_t(x)| is expected to be far from zero, so about half-way between the "expected" locations of consecutive zeros z_k and z_{k+1} of H_t.

David


mariotrevi commented 6 years ago


Including the "t"-contribution, the equation for the Gram-like points of H_t  might look something like:

(T/(4pi))*log( T/(4pi) ) - T/(4pi) - 1/8 + (t/16)*log( T/(4pi) ) = k  [ Eqn. 1]

for k = 0, 1, 2, 3, 4, ....

If we call these the G_{t,k} Gram-like points, then there should be economical approximate formulae, since log( T/(4pi) ) increases very, very slowly with T.

The right equation could be [ Eqn. 1] above, or a tweak of it (hopefully...).
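If Eqn. 1 holds up, candidate points are easy to produce numerically (a sketch; the sample values t=0.4, k=0 and the initial guess of 35 are my choices):

    from mpmath import mp, pi, log, findroot

    mp.dps = 15

    def gram_like(T, t, k):
        # Eqn. 1: (T/(4pi))*log(T/(4pi)) - T/(4pi) - 1/8 + (t/16)*log(T/(4pi)) = k
        u = T / (4*pi)
        return u*log(u) - u - mp.mpf(1)/8 + (t/16)*log(u) - k

    print(findroot(lambda T: gram_like(T, 0.4, 0), 35))  # candidate G_{0.4, 0}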

Also, your approach, at this early stage, seems equally promising.

David

— You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub https://github.com/km-git-acc/dbn_upper_bound/issues/31#issuecomment-364490848, or mute the thread https://github.com/notifications/unsubscribe-auth/AA_IfzC_-zvXSPQctEB5wiWoun82UW0cks5tTHfDgaJpZM4R9IMz.

sujitnair commented 6 years ago

@km-git-acc

Edwards is a good source for Gram points.

And mpmath has it built in: http://mpmath.org/doc/current/functions/zeta.html#mpmath.grampoint
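For example (mpmath appears to index Gram points the same way as MathWorld, so grampoint(0) should be the g_0 quoted earlier):

    from mpmath import grampoint

    print(grampoint(0))  # ~17.8455995, i.e. the g_0 ~= 17.8456 mentioned above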

km-git-acc commented 6 years ago

@sujitnair Thanks. Will check it out.

@mariotrevi Just had a question which you may have run into as well. If you check Terry's comment here https://terrytao.wordpress.com/2018/02/02/polymath15-second-thread-generalising-the-riemann-siegel-approximate-functional-equation/#comment-492401

he says for y=0.4 we should evaluate |A+B|/B_0 and |C|/B_0, and also |H_t - (A+B)|/B_0 and |H_t - (A+B-C)|/B_0, up to a large height. The first two seem straightforward, but for the estimates involving H_t (independent of the ABC approach) at such large heights, how would one go about it? We only have the integration approach, which is extremely slow at those heights. Or is there some other way?

mariotrevi commented 6 years ago

@KM


I'll read Terry's comment. For the case of verifying RH, I think Glen Pugh's Master's thesis at:

https://web.viu.ca/pughg/thesis.d/masters.thesis.pdf

is a good reference. Chapter 4 is on the verification of the Riemann Hypothesis.

In Riemann-Siegel (for zeta), there are extra terms after the "C" term, although at the moment we don't have a more accurate "generalized Riemann-Siegel" formula for calculating or estimating H_t.

So, I'll think about it, and perhaps you can provide feedback...

David
