Should I wait an hour to run the plots at 1530Z?
@malloryprow, Ok. Sounds good to me.
Log File: /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr595/EVS/dev/drivers/scripts/plots/nwps/jevs_nwps_wave_grid2obs_plots.o159291254
COMOUT: /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr595/evs/v2.0
DATA: /lfs/h2/emc/stmp/mallory.row/evs_test/prod/tmp/jevs_nwps_wave_grid2obs_plots.159291254.cbqs01
@malloryprow, There was no ERROR in the .o file. The COMOUT includes the `last31days` and `last90days` plots. I copied the files from your test to my own directory and untarred them. The plot job successfully generated the `time_series` and `lead_average` plots.
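A minimal sketch of that copy-and-untar step, assuming a hypothetical destination directory and tarball layout (only the COMOUT path comes from this thread):

```bash
# Copy the tarred plot output from the test COMOUT and extract it for inspection.
COMOUT=/lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr595/evs/v2.0
DEST=/lfs/h2/emc/vpppg/noscrub/$USER/pr595_plots   # hypothetical destination
mkdir -p "$DEST"
cp "$COMOUT"/*.tar "$DEST"/   # tarball location within COMOUT is an assumption
cd "$DEST" && for f in *.tar; do tar -xf "$f"; done
```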
Two things I noticed:
First, from the log:

+ /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr595/EVS/ush/nwps/evs_wave_leadaverages.sh
mkdir: cannot create directory '/lfs/h2/emc/stmp/mallory.row/evs_test/prod/tmp/jevs_nwps_wave_grid2obs_plots.159291254.cbqs01/sfcshp': File exists
A check for this directory before making it should get rid of this.
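A minimal sketch of such a check in the ush script, assuming the working directory is held in a `DATA` variable as in the job output above:

```bash
# Guard the mkdir so an already-existing directory is not a fatal error;
# "mkdir -p" alone is also safe to call when the directory exists.
if [ ! -d "$DATA/sfcshp" ]; then
    mkdir -p "$DATA/sfcshp"
fi
```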
Second, we can probably adjust the resources. It is requesting 128 CPUs (`ncpus=128` in the driver script), but it looks like only 36 of those are actually being used (`mpiexec -np 36`). The memory request looks pretty high too: it is 500GB, but only about 5GB is being used (`update_job_usage: Memory usage: mem=4542056kb`). I think if it is changed to `ncpus=36:mem=50G` we should be good. I'm giving some cushion on the memory since we don't have the full 90 days of stats, and that will probably bump it up some!
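For reference, a sketch of the adjusted request in the PBS driver script; the `select=1` placement is an assumption, and only the `ncpus=36:mem=50G` values come from this thread:

```bash
# Request 36 CPUs and 50G of memory, matching the 36 MPI ranks actually
# launched and leaving headroom above the ~5GB of observed usage.
#PBS -l select=1:ncpus=36:mem=50G
```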
@malloryprow, Thanks! I did not notice that. I removed the lines that make the `sfcshp` directory; those directories are redundant. The memory request was adjusted as well.
Log File: /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr595/EVS/dev/drivers/scripts/plots/nwps/jevs_nwps_wave_grid2obs_plots.o159301894
COMOUT: /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr595/evs/v2.0
DATA: /lfs/h2/emc/stmp/mallory.row/evs_test/prod/tmp/jevs_nwps_wave_grid2obs_plots.159301894.cbqs01
@malloryprow, Everything looks good.
Thanks @SamiraArdani-NOAA! Not sure if this was a checklist item on the Fixes and Additions document, but if it is, be sure to check it off!
Note to developers: You must use this PR template!
Description of Changes
This PR initializes the plots step for EVS-NWPS.
Developer Questions and Checklist
- [ ] `${USER}` is used where necessary throughout the code.
- [ ] References to `HOMEevs` are removed from the code.
- [ ] If changes were made in `dev/drivers/scripts` or `dev/modulefiles`, have the same changes been made in the corresponding `ecf/scripts` and `ecf/defs/evs-nco.def`?

Testing Instructions
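The testing done in this thread amounted to submitting the plots driver and checking the .o log file and COMOUT; a sketch, assuming a standard EVS checkout (the `HOMEevs` value and driver file name are inferred, not confirmed):

```bash
# Submit the NWPS grid2obs plots driver with PBS; the .o log file
# referenced in the thread is written by PBS on completion.
HOMEevs=/path/to/EVS   # hypothetical checkout location
cd "$HOMEevs"/dev/drivers/scripts/plots/nwps
qsub jevs_nwps_wave_grid2obs_plots   # driver name inferred from the log file name
```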