Closed by sdtaylor 5 years ago
This takes the data intensive steps out of run_automated_forecast.py and puts them in generate_phenology_forecasts.py.

run_automated_forecast.py will still be run via cron on serenity (as before), with the added step of kicking off a slurm job, which runs generate_phenology_forecasts.py on the hipergator.
Communication is done via the new class RemoteRunControl, which uses ssh (via the fabric package) to handle the job submission, status check, and file transfer steps.
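A minimal sketch of what such a controller might look like, assuming fabric 2.x. The class name matches RemoteRunControl from this PR, but the method names, host, and user are hypothetical, as is the standalone `parse_job_id` helper:

```python
import re


def parse_job_id(sbatch_output):
    """Extract the numeric job id from sbatch's confirmation line,
    e.g. 'Submitted batch job 12345'."""
    match = re.search(r'Submitted batch job (\d+)', sbatch_output)
    if match is None:
        raise ValueError('Unexpected sbatch output: ' + sbatch_output)
    return int(match.group(1))


class RemoteRunControl:
    """Illustrative sketch only; the real class in this PR may differ."""

    def __init__(self, host='hipergator.example.edu', user='forecaster'):
        # fabric.Connection wraps an ssh connection. Imported lazily so
        # parse_job_id can be used without fabric installed.
        from fabric import Connection
        self.conn = Connection(host=host, user=user)

    def submit_job(self, submission_script):
        # Submit the slurm job and return its id for later polling.
        result = self.conn.run('sbatch {}'.format(submission_script))
        return parse_job_id(result.stdout)

    def job_status(self, job_id):
        # squeue prints nothing once the job has left the queue.
        result = self.conn.run('squeue -h -j {}'.format(job_id), warn=True)
        return 'running' if result.stdout.strip() else 'finished'

    def fetch(self, remote_path, local_path):
        # Pull the finished product back to serenity over sftp.
        self.conn.get(remote=remote_path, local=local_path)
```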
A status file is used to indicate job completion and where the final product is located.
The less intensive steps, making static maps and syncing with the website, are still done on serenity.
Also updated the config to allow multiple data folder locations depending on where things are run, since the paths differ between the hipergator and serenity.
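A machine-keyed config could be sketched like this; the folder paths and hostnames below are made up for illustration:

```python
import socket

# Hypothetical config fragment: one data folder per machine.
config = {
    'data_folders': {
        'serenity':   '/data/phenology_forecasts/',
        'hipergator': '/blue/phenology/forecast_data/',
    }
}


def data_folder(config, hostname=None):
    """Pick the data folder for the machine the pipeline is running on,
    defaulting to the local hostname."""
    if hostname is None:
        hostname = socket.gethostname()
    try:
        return config['data_folders'][hostname]
    except KeyError:
        raise ValueError('No data folder configured for ' + hostname)
```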
Running the memory/processor intensive parts of the pipeline on the hipergator is long overdue...