USGS-R / delaware-model-prep

Data and scripts for collecting and formatting data in the Delaware River Basin in prep for ML and DA modeling

Reservoir inflow and outflow data #83

Closed aappling-usgs closed 3 years ago

aappling-usgs commented 3 years ago

Pulls observations and SNTemp predictions for inflows and outflows of Cannonsville and Pepacton reservoirs.

All the new targets are local, which means anybody else would have to rebuild them. But they're also all in 9_collaborator_data with no non-reservoir-related downstream dependencies, so they shouldn't interfere with others' ability to build other targets. I plan to share the data via ScienceBase (through the res-temperature-data-sharing repository) rather than Drive.

aappling-usgs commented 3 years ago

I've rebuilt everything using an updated version of 2_observations/in/daily_temperatures.rds.

See https://github.com/USGS-R/delaware-model-prep/issues/84 for an issue that surfaced (again) during the rebuild; I currently think the solution is to remove 3_predictions/tmp/control/delaware.control.ind and build/status/M19wcmVkaWN0aW9ucy90bXAvY29udHJvbC9kZWxhd2FyZS5jb250cm9sLmluZA.yml from the git-versioned repo, but we can try that in a separate commit.

I had to force-rebuild res_inflow_ids, which I'm scratching my head about: earlier today I manually added 2_observations/in/daily_flow.rds and then scipiper built 2_observations/out/all_drb_temp_obs.rds.ind. But for some reason, when I force-rebuilt 2_observations/out/all_drb_temp_obs.rds.ind, its hash changed again even though it documented the same input hashes as before. After that, scmake('res_inflow_ids') did the right thing and rebuilt yet again. I don't get it.

On rebuild of res_inflow_ids, we do get "01415460" "01413500" "01415000" "01414500" as the new Pepacton inflows, up from just "01413088" (01413088 was replaced by 01413500, which is closer to the reservoir along the same flowpath). That seems good. The other inflow site lists stay the same.

Trying to build 9_collaborator_data/res/res_io_obs.feather again, but I keep getting `Error: vector memory exhausted (limit reached?)` and seeing my total memory use bump up against my system limit of 32 GB. Will restart and try again.

aappling-usgs commented 3 years ago

Reduced memory consumption. 2_observations/in/daily_flow.rds takes up 9.34 GB when read in!! Now we're getting more temperature data:

[plot: can_io_temp]

[plot: pep_io_temp]
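For the record, the memory-reduction pattern can be sketched like this. This is not the repo's actual code: the data frame below is a tiny synthetic stand-in for 2_observations/in/daily_flow.rds, and the column names are assumptions. The idea is simply to subset the big table to reservoir sites right after reading it, then drop the full object before any joins.

```r
# Synthetic stand-in for the ~9 GB table read from daily_flow.rds
# (column names are hypothetical, not necessarily the repo's)
daily_flow <- data.frame(
  site_id  = rep(c("01413500", "01415000", "99999999"), each = 4),
  date     = rep(seq(as.Date("2020-01-01"), by = "day", length.out = 4), 3),
  flow_cfs = runif(12)
)
res_inflow_ids <- c("01415460", "01413500", "01415000", "01414500")

# Subset to reservoir inflow sites immediately, then release the big table
res_flow <- daily_flow[daily_flow$site_id %in% res_inflow_ids, ]
rm(daily_flow)
invisible(gc())

nrow(res_flow)  # 8 rows kept in this toy example (two matching sites x 4 days)
```

Filtering before any merge keeps peak memory near the size of the subset rather than the full table.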