rburghol opened this issue 2 years ago
Testing:
/opt/model/p53/p532_alex/bin/make_met_scenario.sh 19840101 20211231 nldas2_20211221 prad_20211221 /opt/model/p53/p532_alex/code/src/prad_met /opt/model/p53/p532c-sova
Test using new cbp exec framework. Successfully copies WDMs over.
cbp make_met_scenario.sh 19840101 20211231 nldas1121 p20211221 /opt/model/p53/p532_alex/code/src/prad_met /opt/model/p53/p532c-sova
Testing with single missing segment:
# get all the data for this grid cell
grid2land.sh 19840101 20201231 /backup/meteorology /backup/meteorology/out/grid_met_csv A51031
# data was bad for 1984, cell x369y99 was empty, so reran
grid2land.sh 19840101 19841231 /backup/meteorology /backup/meteorology/out/grid_met_csv A51031
# bad data in 1986 - totally huge values for ET
grid2land.sh 19860101 19861231 /backup/meteorology /backup/meteorology/out/grid_met_csv A51031
# turn grid data into land segment CSV
a2l_one A51031
# since this is a full time period run, create a summary RNMax file
LongTermAvgRNMax /backup/meteorology/out/lseg_csv/1984010100-2020123123 /backup/meteorology/out/lseg_csv/RNMax 1 A51031
# create WDMs
# wdm_pm_one looks for an hspf.config file to find wdm file paths
wdm_pm_one A51031 1984010100 2020123123 nldas2 harp2021 nldas1221 p20211221
# copy WDMs into project
# run the river segment
cbp run_all.csh p532sova_cal OR2_7670_7840
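For reference, the single-segment sequence above can be wrapped in a small dry-run wrapper. The function name `met_one_seg` and the `DRYRUN` echo guard are hypothetical; the command names and arguments are copied from the steps above:

```shell
#!/bin/bash
# Hypothetical wrapper around the single-segment steps above.
# With DRYRUN=echo the commands are printed instead of executed.
met_one_seg() {
  local seg=$1 start=$2 end=$3
  ${DRYRUN:-} grid2land.sh "$start" "$end" /backup/meteorology /backup/meteorology/out/grid_met_csv "$seg"
  ${DRYRUN:-} a2l_one "$seg"
  ${DRYRUN:-} LongTermAvgRNMax "/backup/meteorology/out/lseg_csv/${start}00-${end}23" /backup/meteorology/out/lseg_csv/RNMax 1 "$seg"
  ${DRYRUN:-} wdm_pm_one "$seg" "${start}00" "${end}23" nldas2 harp2021 nldas1221 p20211221
}

# preview the commands without running them
DRYRUN=echo met_one_seg A51031 19840101 20201231
```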
Testing with a full basin
# run all data conversion
# this is a single call that should cover all land segs in all rivers, and should be done no more than once per month.
p5_g2a_all 19840101 20201231 /backup/meteorology /backup/meteorology/out/grid_met_csv
# this is good since we won't have to do it again for a few weeks or so, AND we should only need to do 2021+
# since data in out/grid_met_csv/ is stored by year, we can update just a single year, the most recent one, ex:
#p5_g2a_all 20200101 20211231 /backup/meteorology /backup/meteorology/out/grid_met_csv
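A minimal sketch of the per-year splitting implied above (the real internals of p5_g2a_all are not shown in this thread; this only illustrates deriving the year list from the YYYYMMDD arguments):

```shell
#!/bin/bash
# Illustrative only: derive the list of years covered by a YYYYMMDD range,
# so each year's out/grid_met_csv/YYYY/ directory can be processed in turn.
start_date=19840101
end_date=20201231
start_year=${start_date:0:4}   # first 4 chars of YYYYMMDD
end_year=${end_date:0:4}
for year in $(seq "$start_year" "$end_year"); do
  echo "would process out/grid_met_csv/$year/"
done
```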
# get list of land segments needed
cd /opt/model/p53/p532c-sova/
segs=`cbp get_landsegs OR7_8490_0000`
for i in $segs; do
# convert raw grid data into CSVs
# no need to call grid2land.sh because we did ALL grids above
# ./grid2land.sh 1984010100 2020123123 /backup/meteorology /backup/meteorology/out/grid_met_csv $i
# convert grid CSVs into land segment CSVs
a2l_one 1984010100 2020123123 /backup/meteorology/out/grid_met_csv /backup/meteorology/out/lseg_csv $i
# update long term averages
LongTermAvgRNMax /backup/meteorology/out/lseg_csv/1984010100-2020123123 /backup/meteorology/out/lseg_csv/RNMax 1 $i
# finally, create a WDM for each land seg
# this script reads the file /etc/hspf.config to get directories.
# later, we will create separate directories to call these scripts from, and each will have its own hspf.config file, allowing us to automatically put the files in the right place. For now, the global /etc/hspf.config file is the fallback, and defaults to p532c-sova
wdm_pm_one $i 1984010100 2020123123 nldas2 harp2021 nldas1221 p20211221
done
# Run them
cbp run_all.csh p532sova_2021 OR7_8490_0000
Bring data up to date (@jdkleiner):
p5_g2a_all 20210101 20211231 /backup/meteorology /backup/meteorology/out/grid_met_csv
p5_g2a_all 20220101 20220415 /backup/meteorology /backup/meteorology/out/grid_met_csv
Update 6/2022
sudo ./get_nldas_2_date 2022
bash /backup/meteorology/p5_g2a.bash 2022010100 2022123123 /backup/meteorology /backup/meteorology/out/grid_met_csv
# flag 0-byte files (ls size column "0" followed by a month starting with J)
ls out/grid_met_csv/2022/ -lhrt|grep "0 J"
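The grep on "0 J" only catches empty files whose ls month column starts with J; `find -size 0c` checks file size directly. A self-contained demonstration in a temp directory:

```shell
#!/bin/bash
# Demonstration: find empty files directly instead of grepping ls output.
tmp=$(mktemp -d)
touch "$tmp/empty.csv"                # 0 bytes
printf 'data\n' > "$tmp/full.csv"     # non-empty
find "$tmp" -type f -size 0c          # prints only the empty file's path
rm -r "$tmp"
```

Against the real data this would be `find out/grid_met_csv/2022/ -type f -size 0c`.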
Document basic steps, with usage examples of the entire workflow here. This includes new iterations focused on a single land segment or a grouping (like sova, nova, ...), and is a condensed version of the more complete workflow given here: HARPgroup/HARParchive#62
All NLDAS2 scripts from download to WDM creation for a specific model met/prad scenario:
get_nldas_to_date
- iterate through and retrieve all data available (see model_meteorology/sh/get_nldas_to_date )
cd /backup/meteorology/
get_nldas_to_date YYYY [ending jday=today]
get_nldas_to_date 2022
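The optional `[ending jday=today]` argument defaults to today's Julian day; a sketch of how such a default could be computed (an assumption about the script's internals, shown as a hypothetical `default_jday` function):

```shell
#!/bin/bash
# Hypothetical sketch: default the ending Julian day (001-366) to today.
default_jday() {
  local jday=${1:-$(date +%j)}   # %j = zero-padded day-of-year
  printf '%s\n' "$jday"
}
default_jday        # today's Julian day
default_jday 100    # explicit override
```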
get_nldas_data.bash
(in model_meteorology/sh/get_nldas_data.bash )
/etc/cron.daily/deq-drought-model
wget --load-cookies ~/.urs_cookies --auth-no-challenge=on --keep-session-cookies -np -r -NP -R "*.xml" -c -N --content-disposition https://hydro1.gesdisc.eosdis.nasa.gov/data/NLDAS/NLDAS_FORA0125_H.002/[YEAR]/[JULIAN DAY]
wget --load-cookies ~/.urs_cookies --auth-no-challenge=on --keep-session-cookies -np -r -NP -R "*.xml" -c -N --content-disposition https://hydro1.gesdisc.eosdis.nasa.gov/data/NLDAS/NLDAS_FORA0125_H.002/2002/001/
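The [YEAR]/[JULIAN DAY] parts of the URL can be filled in from a calendar date; a sketch using GNU `date` (the base URL is copied from the command above; `nldas_url` is a hypothetical helper name):

```shell
#!/bin/bash
# Build the NLDAS_FORA0125_H.002 directory URL for a given YYYY-MM-DD date.
# Uses GNU date's -d option to convert the date to year + Julian day.
nldas_url() {
  local d=$1 year jday
  year=$(date -d "$d" +%Y)
  jday=$(date -d "$d" +%j)
  printf 'https://hydro1.gesdisc.eosdis.nasa.gov/data/NLDAS/NLDAS_FORA0125_H.002/%s/%s/\n' "$year" "$jday"
}

nldas_url 2002-01-01
# prints https://hydro1.gesdisc.eosdis.nasa.gov/data/NLDAS/NLDAS_FORA0125_H.002/2002/001/
```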
bash /backup/meteorology/p5_g2a.bash 2020010100 2020123123 /backup/meteorology /backup/meteorology/out/grid_met_csv
bash /backup/meteorology/g2a_one.bash 2020010100 2020123123 /backup/meteorology /backup/meteorology/out/grid_met_csv x393y93
p5_g2a_all
@alexwlowe - this streamlines p5_g2a.bash, using some logic to eliminate the duplication for the first year: just a single loop that doesn't care about the time frame; it can handle it.
./p5_g2a_all 19840101 20201231 /backup/meteorology /backup/meteorology/out/grid_met_csv
NLDAS2_GRIB_to_ASCII
once per year instead of multiple times per year.
grid2land.sh 1985010100 2020123123 /backup/meteorology /backup/meteorology/out/grid_met_csv A51031
southern_a2l_timeframe.bash
(see HARPgroup/HARParchive#156 )
a2l_one
a2l_one startYYYYMMDDHH endYYYYMMDDHH grid_csv_path lseg_csv_path land_segment
/backup/meteorology/a2l_one 2020010100 2020123123 /backup/meteorology/out/grid_met_csv /backup/meteorology/out/lseg_csv A51035
LongTermAvgRNMax landseg_csv_file_path rnmax_file_output_path num_segs lseg1 lseg2 lseg3...
LongTermAvgRNMax /opt/model/p53/p532_alex/input/unformatted/nldas2/harp2021/1984010100-2020123123 /opt/model/p53/p532_alex/input/unformatted/nldas2/harp2021/RNMax 1 A51175
/backup/meteorology/wdm_generation_allLsegs.bash
wdm_generation_p5.bash
wdm_generation_p6.bash
wdm_pm_one
wdm_pm_one land_segment YYYYMMDDHH YYYYMMDDHH source version
wdm_pm_one A51031 1984010100 2020123123 nldas1221 harp2021
wdm_insert_ALL
Expects a directory of text files for each met parameter to live in input/unformatted/[data_source]/[version] - so wdm_pm_one copies the files from the met source into there (and creates those directories if they don't already exist).
Writes WDMs to /backup/meteorology/out/lseg_wdm/ , rather than the sub-directory of a code directory in p532_alex as wdm_pm_one above does.
Copy WDMs to a model scenario with make_met_scenario.sh , which places them in /input/scenario/climate/met/[met scenario name]
wdm_flow_csv
wdm_flow_csv [scenario] [riverseg] [start year] [end year]
cbp wdm_flow_csv CFBASE30Y20180615_vadeq JL1_6770_6850 1984 2020
Rscript $CBP_ROOT/run/export/wdm_export_flow.R [scenario] [landseg] [syear] [eyear] [CBP_EXPORT_DIR] [CBP_ROOT]
Rscript $CBP_ROOT/run/export/wdm_export_flow.R CFBASE30Y20180615_vadeq N51003 1984 2020 /media/model/p6 /opt/model/p6/gb604b
filename="/media/model/p6/out/land/$scenario/eos/${landseg}_0111-0211-0411.csv"
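The filename pattern fills in the scenario and land segment via ordinary shell parameter expansion; for example, with the values from the Rscript call above:

```shell
#!/bin/bash
# Expand the eos output filename for a given scenario and land segment.
scenario=CFBASE30Y20180615_vadeq
landseg=N51003
filename="/media/model/p6/out/land/$scenario/eos/${landseg}_0111-0211-0411.csv"
echo "$filename"
# prints /media/model/p6/out/land/CFBASE30Y20180615_vadeq/eos/N51003_0111-0211-0411.csv
```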
wdm_export_land_flow() exports separate files for each flow component (111, 211, 411) for each land use in a land segment.
#!/bin/bash
start_date=$1
end_date=$2
met_name=$3
prad_name=$4
nldas_dir=$5
model_dir=$6
# code to run the WDM creation goes here
# move the met WDMs
met_dir="$model_dir/input/scenario/climate/met/$met_name"
mkdir $met_dir
cp $nldas_dir/met*.wdm $met_dir/
# move the prad WDMs
prad_dir="$model_dir/input/scenario/climate/prad/$prad_name"
mkdir $prad_dir
cp $nldas_dir/prad*.wdm $prad_dir/
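A slightly hardened variant of the copy steps above (mkdir -p so reruns don't fail, and a guard when no WDMs match the glob); this is a sketch with a hypothetical `copy_wdms` helper, not the actual make_met_scenario.sh:

```shell
#!/bin/bash
# Sketch: copy met/prad WDMs into a model scenario, tolerating reruns.
copy_wdms() {
  local pattern=$1 src=$2 dest=$3
  mkdir -p "$dest"                      # -p: no error if it already exists
  # compgen -G tests whether the glob matches anything before cp runs
  if compgen -G "$src/$pattern" > /dev/null; then
    cp "$src"/$pattern "$dest"/
  else
    echo "warning: no $pattern files in $src" >&2
  fi
}

# copy_wdms 'met*.wdm'  "$nldas_dir" "$model_dir/input/scenario/climate/met/$met_name"
# copy_wdms 'prad*.wdm' "$nldas_dir" "$model_dir/input/scenario/climate/prad/$prad_name"
```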