Closed: BinbinZhou-NOAA closed this pull request 5 days ago.
dev/drivers/scripts/plots/mesoscale/jevs_mesoscale_sref_precip_spatial_plots.sh (and ecf/scripts/plots/mesoscale/jevs_mesoscale_sref_precip_spatial_plots.ecf) are using select=1:ncpus=2, but it doesn't look like there is any sort of parallel processing in scripts/plots/mesoscale/exevs_mesoscale_sref_precip_spatial_plots.sh, unless I'm missing something. If there is no parallel processing happening, then the resources should be select=1:ncpus=1.
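For reference, a minimal sketch of the PBS resource change being suggested; the actual directives in the driver and ecf scripts may carry additional fields (e.g. place or mem) not shown here:
# before: two CPUs requested on one node
#PBS -l select=1:ncpus=2
# after: one CPU, since exevs_mesoscale_sref_precip_spatial_plots.sh appears to run serially
#PBS -l select=1:ncpus=1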
stats
Jobs are submitted and running.
Log File: /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/EVS/dev/drivers/scripts/stats/mesoscale/jevs_mesoscale_sref_grid2obs_stats.o206766930 DATA: /lfs/h2/emc/stmp/mallory.row/evs_test/prod/tmp/jevs_mesoscale_sref_grid2obs_stats.206766930.dbqs01
Log File: /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/EVS/dev/drivers/scripts/stats/mesoscale/jevs_mesoscale_sref_precip_stats.o206766933 DATA: /lfs/h2/emc/stmp/mallory.row/evs_test/prod/tmp/jevs_mesoscale_sref_precip_stats.206766933.dbqs01
Mallory,
Both jobs have not completed yet. Once they are done, I'll check them, and I'll also correct the issues you raised.
Thanks!
Binbin
Mallory,
Both the grid2obs and precip stats generation jobs have completed. There are no ERRORs or WARNINGs in the log files, and the final stat files also look good.
Thanks!
Binbin
/lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/EVS/dev/drivers/scripts/stats/mesoscale/jevs_mesoscale_sref_grid2obs_stats.o206766930 shows resources_used.walltime = 00:25:11, while the job specifies walltime = 00:30:00. Should we bump this up to 35 minutes, in both the dev driver and the ecf script?
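A sketch of the corresponding directive change in the dev driver and ecf script (only the walltime line is shown; everything else in those scripts stays as is):
# before
#PBS -l walltime=00:30:00
# after: roughly 10 minutes of headroom over the observed 00:25:11
#PBS -l walltime=00:35:00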
Sure, the walltime has been increased to 35 minutes for the grid2obs stats job.
Binbin
Thanks! I'll move on to the plots jobs.
plots
Jobs submitted. COMOUT is /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/evs/v2.0/plots/mesoscale
Log File: /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/EVS/dev/drivers/scripts/plots/mesoscale/jevs_mesoscale_sref_cape_last90days_plots.o206868692 DATA: /lfs/h2/emc/stmp/mallory.row/evs_test/prod/tmp/jevs_mesoscale_sref_cape_last90days_plots.206868692.dbqs01
Log File: /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/EVS/dev/drivers/scripts/plots/mesoscale/jevs_mesoscale_sref_grid2obs_last90days_plots.o206868728 DATA: /lfs/h2/emc/stmp/mallory.row/evs_test/prod/tmp/jevs_mesoscale_sref_grid2obs_last90days_plots.206868728.dbqs01
Log File: /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/EVS/dev/drivers/scripts/plots/mesoscale/jevs_mesoscale_sref_td2m_last90days_plots.o206868780 DATA: /lfs/h2/emc/stmp/mallory.row/evs_test/prod/tmp/jevs_mesoscale_sref_td2m_last90days_plots.206868780.dbqs01
Log File: /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/EVS/dev/drivers/scripts/plots/mesoscale/jevs_mesoscale_sref_cloud_last90days_plots.o206868699 DATA: /lfs/h2/emc/stmp/mallory.row/evs_test/prod/tmp/jevs_mesoscale_sref_cloud_last90days_plots.206868699.dbqs01
Log File: /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/EVS/dev/drivers/scripts/plots/mesoscale/jevs_mesoscale_sref_precip_last90days_plots.o206868752 DATA: /lfs/h2/emc/stmp/mallory.row/evs_test/prod/tmp/jevs_mesoscale_sref_precip_last90days_plots.206868752.dbqs01
Log File: /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/EVS/dev/drivers/scripts/plots/mesoscale/jevs_mesoscale_sref_cnv_last90days_plots.o206868724 DATA: /lfs/h2/emc/stmp/mallory.row/evs_test/prod/tmp/jevs_mesoscale_sref_cnv_last90days_plots.206868724.dbqs01
Log File: /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/EVS/dev/drivers/scripts/plots/mesoscale/jevs_mesoscale_sref_precip_spatial_plots.o206868772 DATA: /lfs/h2/emc/stmp/mallory.row/evs_test/prod/tmp/jevs_mesoscale_sref_precip_spatial_plots.206868772.dbqs01
Mallory,
All of the SREF plotting jobs look good, with correct output graphic files and no ERROR or WARNING messages in their log files.
Thanks!
Binbin
I reviewed the jobs too and noticed a few things:
1. The MPMD directory structure for jevs_mesoscale_sref_td2m_last90days_plots is a little odd. The plots are all under /lfs/h2/emc/stmp/mallory.row/evs_test/prod/tmp/jevs_mesoscale_sref_td2m_last90days_plots.206868780.dbqs01/plots/lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/evs/v2.0/plots/mesoscale/atmos.20241117/restart/90/sref_td2m_plots, whereas for jevs_mesoscale_sref_cape_last90days_plots it is /lfs/h2/emc/stmp/mallory.row/evs_test/prod/tmp/jevs_mesoscale_sref_cape_last90days_plots.206868692.dbqs01/plots.
2. The plots jobs are requesting a lot of memory (most around ~300GB), but I think the most any job is actually using is 80GB.
3. The plot names are a tad off from the Naming of EVS Graphics Images document. For example, parts of the graphic names are threshmean.valid_12z and fhrmean.valid_18z; these should be threshmean.valid12z and fhrmean.valid18z.
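For the naming item, a minimal sketch of the kind of rename being discussed; the loop, glob pattern, and file prefix are illustrative assumptions, not the actual fix in the plotting scripts:
# hypothetical cleanup: drop the underscore between "valid" and the cycle hour,
# e.g. ...threshmean.valid_12z... -> ...threshmean.valid12z...
for f in evs.sref.*valid_[0-2][0-9]z*.png; do
    [ -e "$f" ] || continue
    mv "$f" "$(echo "$f" | sed 's/valid_\([0-2][0-9]z\)/valid\1/')"
done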
Mallory,
Thanks for further checking. I'll update them later on. I have a doctor appointment around 1:00 today.
Binbin
Mallory,
A question about the plot naming issue in the Naming of EVS Graphics Images document (https://docs.google.com/document/d/1ZVfHzhzLnyDpqVoE0hB4mAluUDZ87W5hIqsw03lTLXY/edit?tab=t.0): the list of components does not include mesoscale_ens. For SREF, should "sref" still be used as the component name in the graphic file names? Similarly, cam_ens has two models, href and refs, so I suggest continuing to use href or refs as the cam ensemble component name. Any suggestions?
Thanks
Binbin
I'd say yes to keep it that way. Was this discussed during v1.0 development?
Thoughts @AliciaBentley-NOAA @AndrewBenjamin-NOAA?
@BinbinZhou-NOAA @malloryprow I think that we can keep sref here. If that's acceptable to NCO, I don't think we need to change it now. Potentially something to revisit in a future EVS v3.0 or something, but not critical to change for EVS v2.0. Thanks!
CC @AndrewBenjamin-NOAA
Alicia, Mallory,
Sure, the "sref" will be kept. I think "href" and "refs" should be kept as well.
Binbin
Mallory,
These 3 issues have been addressed in the new commit. Please re-run the plotting jobs for further checking.
Thanks!
Binbin
plots
Jobs submitted. COMOUT is /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/evs/v2.0/plots/mesoscale
Log File: /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/EVS/dev/drivers/scripts/plots/mesoscale/jevs_mesoscale_sref_cape_last90days_plots.o206948123 DATA: /lfs/h2/emc/stmp/mallory.row/evs_test/prod/tmp/jevs_mesoscale_sref_cape_last90days_plots.206948123.dbqs01
Log File: /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/EVS/dev/drivers/scripts/plots/mesoscale/jevs_mesoscale_sref_grid2obs_last90days_plots.o206948125 DATA: /lfs/h2/emc/stmp/mallory.row/evs_test/prod/tmp/jevs_mesoscale_sref_grid2obs_last90days_plots.206948125.dbqs01
Log File: /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/EVS/dev/drivers/scripts/plots/mesoscale/jevs_mesoscale_sref_td2m_last90days_plots.o206948136 DATA: /lfs/h2/emc/stmp/mallory.row/evs_test/prod/tmp/jevs_mesoscale_sref_td2m_last90days_plots.206948136.dbqs01
Log File: /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/EVS/dev/drivers/scripts/plots/mesoscale/jevs_mesoscale_sref_cloud_last90days_plots.o206948168 DATA: /lfs/h2/emc/stmp/mallory.row/evs_test/prod/tmp/jevs_mesoscale_sref_cloud_last90days_plots.206948168.dbqs01
Log File: /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/EVS/dev/drivers/scripts/plots/mesoscale/jevs_mesoscale_sref_precip_last90days_plots.o206948233 DATA: /lfs/h2/emc/stmp/mallory.row/evs_test/prod/tmp/jevs_mesoscale_sref_precip_last90days_plots.206948233.dbqs01
Log File: /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/EVS/dev/drivers/scripts/plots/mesoscale/jevs_mesoscale_sref_cnv_last90days_plots.o206948283 DATA: /lfs/h2/emc/stmp/mallory.row/evs_test/prod/tmp/jevs_mesoscale_sref_cnv_last90days_plots.206948283.dbqs01
Log File: /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/EVS/dev/drivers/scripts/plots/mesoscale/jevs_mesoscale_sref_precip_spatial_plots.o206948290 DATA: /lfs/h2/emc/stmp/mallory.row/evs_test/prod/tmp/jevs_mesoscale_sref_precip_spatial_plots.206948290.dbqs01
Mallory,
There is a small issue in the output graphic file names for the job jevs_mesoscale_sref_cape_last90days_plots.sh. I have fixed it and committed the change; please re-run that one job. All other jobs are fine, with correct output graphic files and no ERROR or WARNING messages in the log files.
Thanks!
Binbin
Removed /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/evs/v2.0/plots/mesoscale/atmos.20241118/evs.plots.sref.cape.last90days.v20241118.tar and /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/evs/v2.0/plots/mesoscale/atmos.20241118/restart/90/sref_cape_plots, then resubmitted jevs_mesoscale_sref_cape_last90days_plots.sh.
Log File: /lfs/h2/emc/vpppg/noscrub/mallory.row/verification/EVS_PRs/pr607/EVS/dev/drivers/scripts/plots/mesoscale/jevs_mesoscale_sref_cape_last90days_plots.o206950546 DATA: /lfs/h2/emc/stmp/mallory.row/evs_test/prod/tmp/jevs_mesoscale_sref_cape_last90days_plots.206950546.dbqs01
The new run is good
Binbin
Done for both the dev and ecf scripts.
Binbin
In reply to Mallory Row's review comment on ecf/scripts/plots/mesoscale/jevs_mesoscale_sref_grid2obs_last90days_plots.ecf (https://github.com/NOAA-EMC/EVS/pull/607#discussion_r1850331311):
#PBS -j oe
#PBS -S /bin/bash
#PBS -q %QUEUE%
#PBS -A %PROJ%-%PROJENVIR%
-#PBS -l walltime=00:15:00
-#PBS -l place=vscatter:exclhost,select=3:ncpus=72:mem=300GB
+#PBS -l walltime=00:10:00
+#PBS -l place=vscatter:exclhost,select=3:ncpus=72:mem=80GB
Change to mem=100GB to match dev driver.
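The resulting line in the ecf script (matching the dev driver) would presumably be:
#PBS -l place=vscatter:exclhost,select=3:ncpus=72:mem=100GB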
Awesome! I think we are good here!
Thanks @BinbinZhou-NOAA! Make sure to update the Fixes and Additions document with what this PR fixes!
@malloryprow @BinbinZhou-NOAA I happened to be in the Fixes and Additions document, and crossed everything off so we are all set! Thanks again Binbin
Sure, thanks!
Binbin
Andrew,
I just saw that the mesoscale-ens part of the Fixes and Additions document has been updated. Thanks for updating it.
Binbin
Note to developers: You must use this PR template!
Description of Changes
This PR fixes several issues listed in the EVS/v2 Additions and Bugfixes document, as well as some other issues that are not listed in that document but are general requirements for EVS. Of the fixed issues, items 8–11 are general requirements for EVS components.
Developer Questions and Checklist
Is this a high priority PR? If so, why and is there a date it needs to be merged by? Yes, since all of these issues should be fixed before the EVS/v2 implementation deadline.
Do you have any planned upcoming annual leave/PTO? Yes, I will be on leave 11/19 and 11/22.
Are there any changes needed for when the jobs are supposed to run? No.
[y] The code changes follow NCO's EE2 Standards.
[y] Developer's name is removed throughout the code, and ${USER} is used where necessary throughout the code.
[y] References to the feature branch for HOMEevs are removed from the code.
[y] J-Job environment variables, COMIN and COMOUT directories, and output follow what has been defined for EVS.
[y] Jobs over 15 minutes in runtime have restart capability.
[y] If applicable, changes in dev/drivers/scripts or dev/modulefiles have been made in the corresponding ecf/scripts and ecf/defs/evs-nco.def.
[y] Jobs contain the appropriate file checking and don't run METplus for any missing data.
[y] Code is using the METplus wrappers structure and not calling MET executables directly.
[y] Log is free of any ERRORs or WARNINGs.
Testing Instructions
Part 1. For the stats generation jobs:
There are 2 independent stats generation jobs:
jevs_mesoscale_sref_grid2obs_stats.sh
jevs_mesoscale_sref_precip_stats.sh
Note: Before running jevs_mesoscale_sref_precip_stats.sh, please compile to create exec/sref_precip.x, or link EXECevs to /lfs/h2/emc/vpppg/noscrub/emc.vpppg/EVS/exec in the dev script: export EXECevs=/lfs/h2/emc/vpppg/noscrub/emc.vpppg/EVS/exec
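A minimal sketch of those two options; where the export goes in the driver is an assumption, and the symlink alternative is hypothetical:
# Option A: point EXECevs at the pre-built executables (path taken from the note above)
export EXECevs=/lfs/h2/emc/vpppg/noscrub/emc.vpppg/EVS/exec
# Option B (hypothetical): symlink just the needed executable into your own exec/ directory
# ln -s /lfs/h2/emc/vpppg/noscrub/emc.vpppg/EVS/exec/sref_precip.x $HOMEevs/exec/sref_precip.x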
Part 2. For the plots generation jobs:
There are 7 independent plot generation jobs:
jevs_mesoscale_sref_cape_last90days_plots.sh
jevs_mesoscale_sref_grid2obs_last90days_plots.sh
jevs_mesoscale_sref_td2m_last90days_plots.sh
jevs_mesoscale_sref_cloud_last90days_plots.sh
jevs_mesoscale_sref_precip_last90days_plots.sh
jevs_mesoscale_sref_cnv_last90days_plots.sh
jevs_mesoscale_sref_precip_spatial_plots.sh
Note: In all of the plotting jobs, COMIN should be set to /lfs/h2/emc/vpppg/noscrub/emc.vpppg/evs/v2.0
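For example, in each plotting dev driver (a sketch of the note above; the exact placement of the line is an assumption):
export COMIN=/lfs/h2/emc/vpppg/noscrub/emc.vpppg/evs/v2.0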