nanograv / PINT

PINT is not TEMPO3 -- Software for high-precision pulsar timing

[Feature Request] Single Command for Caching Remote Files for Runs on Machines with No Access to the Internet #1372

Open JPGlaser opened 2 years ago

JPGlaser commented 2 years ago

Hey All,

As I said in last week's call, we should work towards adding a command that grabs the files PINT would normally attempt to fetch from the web and caches them locally. This would allow PINT to be used on compute nodes, which usually have no outside access to the Internet.

Right now, I get around this by running python -c "import pint.toa", which grabs some files from astropy and caches them for about a week.

It would also be nice to be able to force PINT to tell astropy to use old cached files, either when the user asks for it or when there is no internet connection, instead of killing the script.
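
Here is roughly the kind of "stay offline" switch I have in mind, sketched with configuration items Astropy already exposes (nothing PINT-specific yet):

```python
# Sketch: make Astropy rely on its existing cache instead of the network.
# These are Astropy configuration items, not PINT options.
from astropy.utils import iers
from astropy.utils.data import conf as data_conf

iers.conf.auto_download = False   # never try to refresh IERS tables online
iers.conf.auto_max_age = None     # don't complain that cached IERS tables are stale
data_conf.allow_internet = False  # uncached downloads now fail instead of hitting the web

import pint.toa  # from here on, only locally cached files are used
```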

~ Joe G.

dlakaplan commented 2 years ago

This should accept as options a list of telescopes, or perhaps have a default list.

abhisrkckl commented 2 years ago

I think it would be good to download new files (including metadata) only when necessary. In principle, we can just look at the latest TOA value and decide whether the local files are sufficient.

dlakaplan commented 2 years ago

https://nanograv-pint.readthedocs.io/en/latest/_autosummary/pint.observatory.topo_obs.export_all_clock_files.html#pint.observatory.topo_obs.export_all_clock_files does this if you have loaded in data pertaining to the relevant observatories. So this could be run as the last step in the timing analysis. But it should also be possible to pre-load various observatories.
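
For example, a sketch of that last-step usage, assuming (per the linked docs) that the function just takes an output directory and that the loaded TOAs cover the observatories of interest:

```python
# Sketch: export every clock file PINT loaded during this analysis so that
# the output directory can be copied to a machine with no internet access.
from pint.models import get_model_and_toas
from pint.observatory.topo_obs import export_all_clock_files

# hypothetical par/tim files; loading the TOAs pulls in the clock corrections
# for whichever observatories they use
model, toas = get_model_and_toas("J1234+5678.par", "J1234+5678.tim")

# ... timing analysis ...

export_all_clock_files("exported_clock_files")
```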

dlakaplan commented 2 years ago

@JPGlaser : does #1373 do what you want? Is that the interface you want?

dlakaplan commented 2 years ago

Note that this also doesn't deal explicitly with whatever files astropy uses on its own; that is discussed at https://docs.astropy.org/en/stable/utils/data.html#astropy-data-and-clusters. Do you need something to do that as well?

aarchiba commented 2 years ago

> This should accept as options a list of telescopes, or perhaps have a default list.

I think for Joe's application the easiest thing to do is just export all the telescope clock corrections PINT knows about.

aarchiba commented 2 years ago

> I think it would be good to download new files (including metadata) only when necessary. In principle, we can just look at the latest TOA value and decide whether the local files are sufficient.

The clock corrections code does this. You only get new clock corrections if you have a TOA past the end of the ones you currently have - or if the repository has flagged your current version of a clock correction file as needing updates (presumably because it contained errors).

aarchiba commented 2 years ago

To make this happen, I think we need:

- a single command that preloads everything PINT and Astropy might download into the Astropy cache, and
- a way to tell PINT/Astropy never to check the internet and to rely on what is cached.

Then someone setting up a cluster would call the single command to preload the cache, and then set the "never check the internet" flag on the nodes.
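
As a very rough, hypothetical sketch of what the single preload command could do (clock corrections would additionally go through PINT's own update machinery):

```python
# Hypothetical sketch of a "preload the Astropy cache" command.
# The URL list is illustrative only; a real version would get the complete
# list of files from PINT and Astropy themselves.
from astropy.utils.data import download_file
from astropy.utils.iers import IERS_A_URL, IERS_LEAP_SECOND_URL

URLS = [
    IERS_A_URL,            # finals2000A.all
    IERS_LEAP_SECOND_URL,  # Leap_Second.dat
    "https://data.nanograv.org/static/data/ephem/de440.bsp",  # example ephemeris
]

def preload_cache():
    """Download each file into the shared Astropy cache."""
    for url in URLS:
        download_file(url, cache=True)

if __name__ == "__main__":
    preload_cache()
```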

aarchiba commented 2 years ago

> @JPGlaser : does #1373 do what you want? Is that the interface you want?

I think, from the discussion, that Joe wants everything preloaded into the Astropy cache, rather than exported anywhere else.

aarchiba commented 2 years ago

The way to check that this works is to use clear_download_cache, run the (as yet hypothetical) command, list what's in the cache with cache_contents, then do a bunch of pulsar timing (including running the PINT test suite), and use cache_contents again to see whether anything has changed.
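
Concretely, something like this sketch; clear_download_cache and cache_contents are existing astropy.utils.data functions, and the preload step stands in for the command that doesn't exist yet:

```python
# Sketch of the check described above: empty the cache, preload it, snapshot
# its contents, do real work, and snapshot again.
from astropy.utils.data import clear_download_cache, cache_contents

clear_download_cache()           # start from an empty Astropy cache

# <run the (as yet hypothetical) preload command here>

before = set(cache_contents())   # cache_contents() maps URLs to cached file paths

# <run a full timing analysis and/or the PINT test suite here>

after = set(cache_contents())
print("Downloaded during the run:", sorted(after - before))  # ideally empty
```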

Issues that will arise:

JPGlaser commented 2 years ago

For completeness, this is an example of the type of things that need to be pulled down during an import:

Singularity> python -c "import pint.toa"
Downloading http://hpiers.obspm.fr/iers/eop/eopc04/eopc04_IAU2000.62-now
|===================================================================| 3.4M/3.4M (100.00%)         0s
Downloading https://hpiers.obspm.fr/iers/bul/bulc/Leap_Second.dat
|===================================================================| 1.3k/1.3k (100.00%)         0s

My astropy cache looks like this:

['ftp://anonymous:mail%40astropy.org@gdc.cddis.eosdis.nasa.gov/pub/products/iers/finals2000A.all',
 'http://hpiers.obspm.fr/iers/eop/eopc04/eopc04_IAU2000.62-now',
 'https://data.nanograv.org/static/data/ephem/de436.bsp',
 'https://data.nanograv.org/static/data/ephem/de440.bsp',
 'https://hpiers.obspm.fr/iers/bul/bulc/Leap_Second.dat',
 'https://naif.jpl.nasa.gov/pub/naif/generic_kernels/spk/planets/de430.bsp',
 'https://naif.jpl.nasa.gov/pub/naif/generic_kernels/spk/planets/de432s.bsp',
 'https://naif.jpl.nasa.gov/pub/naif/generic_kernels/spk/planets/de440.bsp',
 'https://naif.jpl.nasa.gov/pub/naif/generic_kernels/spk/planets/de440s.bsp']

aarchiba commented 2 years ago

The IERS_Auto table in recent versions of Astropy seems to update only if you actually need newer data: https://github.com/astropy/astropy/blob/11b3214f18b74aea5e3f8349e50ae1b09c39d30e/astropy/utils/iers/iers.py#L756
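
For reference, a quick way to check how far the locally available table extends without triggering a download (a sketch using standard Astropy calls):

```python
# Sketch: check how far the locally available IERS table extends, without
# triggering a fresh download.
from astropy.utils import iers

iers.conf.auto_download = False   # don't hit the network for this check
table = iers.IERS_Auto.open()     # falls back to locally cached/bundled data
print("IERS table covers up to MJD", table["MJD"][-1])
```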