ufs-community / ufs-mrweather-app

UFS Medium-Range Weather Application

8 day forecast starting 2019-08-29 #86

Closed jedwards4b closed 4 years ago

jedwards4b commented 4 years ago

Here is the complete output: /glade/scratch/jedwards/ufstest/run. I don't have enough experience with this model to know if it's any good or not. @arunchawla-NOAA, who should look at it?

arunchawla-NOAA commented 4 years ago

First, congratulations on getting that done. If Rocky can help move it to Hera, we will see how best to visualize it and then share that script. Hopefully that helps us set the baseline. How long did it take you to run?


jedwards4b commented 4 years ago

It only took about 30 minutes. @rsdunlapiv, can you copy this directory to Hera?

arunchawla-NOAA commented 4 years ago

@jedwards4b given the NSST setting issue, maybe we need to rerun this case. But first let's ensure that the regression tests pass.

rsdunlapiv commented 4 years ago

I started the copy to Hera: /scratch1/NCEPDEV/nems/Rocky.Dunlap/ufs8day.tar.gz It looks like it will take another hour to complete the transfer.

pjpegion commented 4 years ago

I'm trying to run the Dorian test case on Cheyenne with the GNU compiler, and I'm running into issues. First of all, the workflow is looking for data in /glade/p/cesmdata/cseg/ufs_inputdata/icfiles/gfsanl/gfs.20190829/00 but the data is really in /glade/p/cesmdata/cseg/ufs_inputdata/icfiles/gfsanl/201908/20190829/

After changing the path in src/model/FV3/cime/cime_config/buildnml, I now get the following error when running case.submit (or ./check_input_data):

Loading input file list: 'Buildconf/ufsatm.input_data_list'
Traceback (most recent call last):
  File "./check_input_data", line 76, in <module>
    _main_func(__doc__)
  File "./check_input_data", line 71, in _main_func
    chksum=chksum) else 1)
  File "/glade/work/pegion/ufs-mrweather-app/cime/scripts/Tools/../../scripts/lib/CIME/case/check_input_data.py", line 166, in check_all_input_data
    input_data_root=input_data_root, data_list_dir=data_list_dir, chksum=chksum and chksum_found)
  File "/glade/work/pegion/ufs-mrweather-app/cime/scripts/Tools/../../scripts/lib/CIME/case/check_input_data.py", line 321, in check_input_data
    if iput_ic_root and input_ic_root in full_path \
NameError: global name 'iput_ic_root' is not defined

rsdunlapiv commented 4 years ago

@arunchawla-NOAA the transfer has completed: /scratch1/NCEPDEV/nems/Rocky.Dunlap/ufs8day.tar.gz

uturuncoglu commented 4 years ago

@pjpegion this is already fixed; there was a typo in the file (the traceback shows iput_ic_root where input_ic_root was meant). Could you update CIME to the latest of the remotes/origin/ufs_release_v1.0 branch? That will fix the problem.

jedwards4b commented 4 years ago

@pjpegion we are in the process of debugging this case now; we are not ready for you to run it unless you want to help figure out the problem. Thanks.

jedwards4b commented 4 years ago

Continued from the email discussion. I am now using the data from the file gfs_4_20190829_0000_000.grb2, and I have made the following changes from the default namelist settings (a sketch of where these settings live follows the list):

nfhmax_hf=6
nfhout=6
nfhout_hf=6
nstf_name = 0,0,0,0,0
convert_nst = .false.
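These changes map onto two different configuration files. A minimal sketch, assuming the same namelist groups shown later in this thread (nstf_name belongs to the model's &gfs_physics_nml in input.nml, convert_nst to the chgres_cube &config namelist; the nfhout* entries are output-frequency settings and sit elsewhere in the run configuration):

! sketch only: model-side setting (input.nml)
&gfs_physics_nml
 nstf_name = 0,0,0,0,0
/

! sketch only: chgres_cube setting (fort.41)
&config
 convert_nst = .false.
/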

And the model is blowing up:

15:MPT: #1  0x00002aaf778fcdb6 in mpi_sgi_system (
15:MPT: #2  MPI_SGI_stacktraceback (
15:MPT:     header=header@entry=0x7ffdd0d80a40 "MPT ERROR: Rank 15(g:15) received signal SIGSEGV(11).\n\tProcess ID: 42759, Host: r6i0n4, Program: /glade/scratch/jedwards/ufstest/bld/ufs.exe\n\tMPT Version: HPE MPT 2.19  02/23/19 05:30:09\n")
15:MPT:     at sig.c:340
15:MPT: #3  0x00002aaf778fcfb2 in first_arriver_handler (signo=signo@entry=11, 
15:MPT:     stack_trace_sem=stack_trace_sem@entry=0x2aaf852c0080) at sig.c:489
15:MPT: #4  0x00002aaf778fd34b in slave_sig_handler (signo=11, 
15:MPT:     siginfo=<optimized out>, extra=<optimized out>) at sig.c:564
15:MPT: #5  <signal handler called>
15:MPT: #6  sfc_nst_mp_sfc_nst_run_ ()
15:MPT:     at /glade/scratch/jedwards/ufstest/bld/atm/obj/FV3/ccpp/physics/physics/sfc_nst.f:365
15:MPT: #7  0x0000000000b3bce4 in ccpp_fv3_gfs_v15p2_physics_cap_mp_fv3_gfs_v15p2_physics_run_cap_ ()
15:MPT:     at /glade/scratch/jedwards/ufstest/bld/atm/obj/FV3/ccpp/physics/ccpp_FV3_GFS_v15p2_physics_cap.F90:592
15:MPT: #8  0x0000000000b07fda in ccpp_static_api_mp_ccpp_physics_run_ ()
15:MPT:     at /glade/scratch/jedwards/ufstest/bld/atm/obj/FV3/ccpp/physics/ccpp_static_api.F90:150
15:MPT: #9  0x0000000000b09976 in ccpp_driver_mp_ccpp_step_ ()
15:MPT:     at /glade/scratch/jedwards/ufstest/bld/atm/obj/FV3/ccpp/driver/CCPP_driver.F90:234
15:MPT: #10 0x00000000004c02ec in atmos_model_mod_mp_update_atmos_radiation_physics_ ()
15:MPT:     at /glade/scratch/jedwards/ufstest/bld/atm/obj/FV3/atmos_model.F90:364
15:MPT: #11 0x00000000004b6ab3 in module_fcst_grid_comp_mp_fcst_run_phase_1_ ()
15:MPT:     at /glade/scratch/jedwards/ufstest/bld/atm/obj/FV3/module_fcst_grid_comp.F90:708
15:MPT: #12 0x00002aaf72d0b509 in ESMCI::FTable::callVFuncPtr(char const*, ESMCI::VM*, int*) ()
15:MPT:    from /glade/p/ral/jntp/GMTB/tools/NCEPLIBS-ufs-v1.0.0.alpha01/intel-18.0.5/mpt-2.19/lib64/libesmf.so
15:MPT: #13 0x00002aaf72d0f0db in ESMCI_FTableCallEntryPointVMHop ()

jedwards4b commented 4 years ago

I tried nstf_name = 0,1,1,0,5 - this also crashes in the same way. I went back to nstf_name = 2,1,1,0,5 and the model runs to completion.

jedwards4b commented 4 years ago

@arunchawla-NOAA Can we get the post-chgres_cube files for this case and try running from those, so that we can determine whether the problem is there?

jedwards4b commented 4 years ago

@BinLiu-NOAA What do you suggest next?

BinLiu-NOAA commented 4 years ago

@jedwards4b I am not sure why, in your test, nstf_name = 0,1,1,0,5 or nstf_name = 0,0,0,0,0 did not work. But if nstf_name = 2,1,1,0,5 worked, you probably want to double-check that the NSST-related output fields are correct. I would suggest checking with @GeorgeGayno-NOAA and xu.li@noaa.gov at EMC to see whether chgres_cube and the NSST component of ufs-weather-model are working properly.

Also, relatedly, did you also run a test using a NEMSIO-format GFS file? If so, that simulation can serve as a control experiment for the one initialized from grib2-format GFS files.

Bin

uturuncoglu commented 4 years ago

@BinLiu-NOAA We used grib2 input in this case. We don't have NEMSIO files for the same date (Dorian case) because the NOMADS server keeps only the last 10 days. If you know a public place where we could get NEMSIO files and use them to run the model, we could try. If you or @GeorgeGayno-NOAA have a recent run (Dorian case) that uses the chgres + grib2 combination and works with these options, please share the namelist files and the input files so we can make a comparison. Otherwise, it could be hard for us to find the source of the problem without deep experience with NSST, the model, and chgres.

BinLiu-NOAA commented 4 years ago

@jedwards4b For your reference, I have posted below the namelist files for my chgres_cube and forecast jobs from my C96 grib2 test. They are on Hera, together with the chgres_cube and forecast results, if you happen to have access there. Meanwhile, just for clarification, I was using earlier versions of UFS_UTILS's chgres and the ufs-weather-model (more specifically, the HAFS application), but I don't think that makes much difference here.

Bin

more /scratch1/NCEPDEV/hwrf/scrub/Bin.Liu/ufs_utils_grib2/HAFS_uniform_chgres_g2new_C96_2019082900_05L.work/fort.41
&config
 mosaic_file_target_grid="/scratch1/NCEPDEV/hwrf/scrub/Bin.Liu/ufs_utils_grib2/chgres_driver/../HAFS_uniform_grid_C96/C96/C96_mosaic.nc"
 fix_dir_target_grid="/scratch1/NCEPDEV/hwrf/scrub/Bin.Liu/ufs_utils_grib2/chgres_driver/../HAFS_uniform_grid_C96/C96"
 orog_dir_target_grid="/scratch1/NCEPDEV/hwrf/scrub/Bin.Liu/ufs_utils_grib2/chgres_driver/../HAFS_uniform_grid_C96/C96"
 orog_files_target_grid="C96_oro_data.tile1.nc","C96_oro_data.tile2.nc","C96_oro_data.tile3.nc","C96_oro_data.tile4.nc","C96_oro_data.tile5.nc","C96_oro_data.tile6.nc"
 vcoord_file_target_grid="/scratch1/NCEPDEV/hwrf/save/Bin.Liu/hafs_201910/fix/fix_am/global_hyblev.l65.txt"
 mosaic_file_input_grid="NULL"
 orog_dir_input_grid="NULL"
 orog_files_input_grid="NULL"
 data_dir_input_grid="./"
 atm_files_input_grid="gfs.t00z.pgrb2.1p00.f000"
 sfc_files_input_grid="gfs.t00z.pgrb2.1p00.f000"
 grib2_file_input_grid="gfs.t00z.pgrb2.1p00.f000"
 varmap_file="/scratch1/NCEPDEV/hwrf/scrub/Bin.Liu/ufs_utils_grib2/chgres_driver/../hafs_utils.fd/parm/varmap_tables/FV3GFSphys_var_map.txt"
 cycle_mon=08
 cycle_day=29
 cycle_hour=00
 convert_atm=.true.
 convert_sfc=.true.
 convert_nst=.false.
 input_type="grib2"
 tracers="sphum","liq_wat","o3mr"
 tracers_input="spfh","clwmr","o3mr"
 regional=0
 halo_bndy=0
/

more /scratch1/NCEPDEV/hwrf/scrub/Bin.Liu/ufs_utils_grib2/HAFS_uniform_forecast_g2new_C96_2019082900_05L/input.nml
&amip_interp_nml
 interp_oi_sst = .true.
 use_ncep_sst = .true.
 use_ncep_ice = .false.
 no_anom_sst = .false.
 data_set = 'reynolds_oi'
 date_out_of_range = 'climo'
/

&atmos_model_nml
 blocksize = 32
 chksum_debug = .false.
 dycore_only = .false.
 fdiag = 3
 avg_max_length = 3600.
 fhmax = 240
 fhout = 3
 fhmaxhf = 0
 fhouthf = 3
/

&diag_manager_nml
 prepend_date = .false.
/

&fms_io_nml
 checksum_required = .false.
 max_files_r = 100,
 max_files_w = 100,
/

&fms_nml
 clock_grain = 'ROUTINE',
 domains_stack_size = 120000000,
 print_memory_usage = .false.
/

&fv_grid_nml
 !grid_file = 'INPUT/grid_spec.nc'
/

&fv_core_nml
 !layout = 12,12
 !layout = 8,8
 layout = 8,8
 io_layout = 1,1
 npx = 97
 npy = 97
 ntiles = 6
 npz = 64
 !grid_type = -1
 make_nh = .F.
 fv_debug = .F.
 range_warn = .T.
 reset_eta = .F.
 n_sponge = 10
 nudge_qv = .T.
 nudge_dz = .F.
 tau = 10.
 rf_cutoff = 7.5e2
 d2_bg_k1 = 0.15
 d2_bg_k2 = 0.02
 kord_tm = -9
 kord_mt = 9
 kord_wz = 9
 kord_tr = 9
 hydrostatic = .F.
 phys_hydrostatic = .F.
 use_hydro_pressure = .F.
 beta = 0.
 a_imp = 1.
 p_fac = 0.1
 k_split = 2
 n_split = 6
 nwat = 6
 na_init = 1
 d_ext = 0.0
 dnats = 1
 fv_sg_adj = 450
 d2_bg = 0.
 nord = 2
 dddmp = 0.1
 d4_bg = 0.12
 vtdm4 = 0.02
 delt_max = 0.002
 ke_bg = 0.
 do_vort_damp = .T.
 external_ic = .T.
 external_eta = .T.
 gfs_phil = .false.
 nggps_ic = .T.
 mountain = .F.
 ncep_ic = .F.
 d_con = 1.0
 hord_mt = 5
 hord_vt = 5
 hord_tm = 5
 hord_dp = -5
 hord_tr = 8
 adjust_dry_mass = .F.
 consv_te = 1.
 do_sat_adj = .T.
 consv_am = .F.
 fill = .T.
 dwind_2d = .F.
 print_freq = 3
 warm_start = .F.
 no_dycore = .false.
 z_tracer = .T.
 agrid_vel_rst = .true.
 read_increment = .F.
 res_latlon_dynamics = "fv3_increment.nc"
 write_3d_diags = .true.
/

&surf_map_nml
 zero_ocean = .F.
 cd4 = 0.15
 cd2 = -1
 n_del2_strong = 0
 n_del2_weak = 15
 n_del4 = 2
 max_slope = 0.4
 peak_fac = 1.
/

&external_ic_nml
 filtered_terrain = .true.
 levp = 65
 gfs_dwinds = .true.
 checker_tr = .F.
 nt_checker = 0
/

&gfs_physics_nml
 fhzero = 3.
 ldiag3d = .false.
 lradar = .true.
 avg_max_length = 3600.
 h2o_phys = .true.
 fhcyc = 24.
 use_ufo = .true.
 pre_rad = .false.
 ncld = 5
 imp_physics = 11
 pdfcld = .false.
 fhswr = 3600.
 fhlwr = 3600.
 ialb = 1
 iems = 1
 iaer = 111
 ico2 = 2
 isubc_sw = 2
 isubc_lw = 2
 isol = 2
 lwhtr = .true.
 swhtr = .true.
 cnvgwd = .true.
 shal_cnv = .true. !Shallow convection
 cal_pre = .false.
 redrag = .true.
 dspheat = .true.
 hybedmf = .true.
 moninq_fac = -1.0
 satmedmf = .false.
 random_clds = .false.
 trans_trac = .true.
 cnvcld = .true.
 imfshalcnv = 2
 imfdeepcnv = 2
 cdmbgwd = 3.5, 0.25
 sfc_z0_type = 6
 prslrd0 = 0.
 ivegsrc = 1
 isot = 1
 debug = .false.
 nst_anl = .true.
 nstf_name = 0,0,0,0,0
 psautco = 0.0008, 0.0005
 prautco = 0.00015, 0.00015
 iau_delthrs = 6
 iaufhrs = 30
 iau_inc_files = ''
 do_deep = .true.
 lgfdlmprad = .true.
 effr_in = .true.
/

&gfdl_cloud_microphysics_nml
 sedi_transport = .true.
 do_sedi_heat = .false.
 rad_snow = .true.
 rad_graupel = .true.
 rad_rain = .true.
 const_vi = .F.
 const_vs = .F.
 const_vg = .F.
 const_vr = .F.
 vi_max = 1.
 vs_max = 2.
 vg_max = 12.
 vr_max = 12.
 qi_lim = 1.
 prog_ccn = .false.
 do_qa = .true.
 fast_sat_adj = .true.
 tau_l2v = 225.
 tau_v2l = 150.
 tau_g2v = 900.
 rthresh = 10.e-6 ! This is a key parameter for cloud water
 dw_land = 0.16
 dw_ocean = 0.10
 ql_gen = 1.0e-3
 ql_mlt = 1.0e-3
 qi0_crt = 8.0E-5
 qs0_crt = 1.0e-3
 tau_i2s = 1000.
 c_psaci = 0.05
 c_pgacs = 0.01
 rh_inc = 0.30
 rh_inr = 0.30
 rh_ins = 0.30
 ccn_l = 300.
 ccn_o = 100.
 c_paut = 0.5
 c_cracw = 0.8
 use_ppm = .false.
 use_ccn = .true.
 mono_prof = .true.
 z_slope_liq = .true.
 z_slope_ice = .true.
 de_ice = .false.
 fix_negative = .true.
 icloud_f = 1
 mp_time = 150.
/

&interpolator_nml
 interp_method = 'conserve_great_circle'
/

&namsfc
 FNGLAC = "global_glacier.2x2.grb",
 FNMXIC = "global_maxice.2x2.grb",
 FNTSFC = "RTGSST.1982.2012.monthly.clim.grb",
 FNSNOC = "global_snoclim.1.875.grb",
 FNZORC = "igbp"
 !FNZORC = "global_zorclim.1x1.grb",
 FNALBC = "global_snowfree_albedo.bosu.t1534.3072.1536.rg.grb",
 FNALBC2 = "global_albedo4.1x1.grb",
 FNAISC = "CFSR.SEAICE.1982.2012.monthly.clim.grb",
 FNTG3C = "global_tg3clim.2.6x1.5.grb",
 FNVEGC = "global_vegfrac.0.144.decpercent.grb",
 FNVETC = "global_vegtype.igbp.t1534.3072.1536.rg.grb",
 FNSOTC = "global_soiltype.statsgo.t1534.3072.1536.rg.grb",
 FNSMCC = "global_soilmgldas.t1534.3072.1536.grb",
 FNMSKH = "seaice_newland.grb",
 FNTSFA = "",
 FNACNA = "",
 FNSNOA = "",
 FNVMNC = "global_shdmin.0.144x0.144.grb",
 FNVMXC = "global_shdmax.0.144x0.144.grb",
 FNSLPC = "global_slope.1x1.grb",
 FNABSC = "global_mxsnoalb.uariz.t1534.3072.1536.rg.grb",
 LDEBUG =.true.,
 FSMCL(2) = 99999
 FSMCL(3) = 99999
 FSMCL(4) = 99999
 FTSFS = 90
 FAISS = 99999
 FSNOL = 99999
 FSICL = 99999
 FTSFL = 99999
 FAISL = 99999
 FVETL = 99999,
 FSOTL = 99999,
 FvmnL = 99999,
 FvmxL = 99999,
 FSLPL = 99999,
 FABSL = 99999,
 FSNOS = 99999,
 FSICS = 99999,
/
&nam_stochy
/
&nam_sfcperts
/

uturuncoglu commented 4 years ago

@BinLiu-NOAA Thanks for the namelist files. I'll check them. In the meantime, I think it is better to test with the versions that we are using, because the issue could be related to CHGRES or to model stability.

It would be an easy test to use CHGRES from 1.0.0alpha01, generate the input files, and run the model.

uturuncoglu commented 4 years ago

BTW, your CHGRES namelist has lots of options, and I think most of them are at their default values. Right? Anyway, we could try to use your namelist to see what happens.

uturuncoglu commented 4 years ago

Does this run process grib2 input? How could we access the input data? Is it in a public place? You are also setting tracers and tracers_input, and I think those are not required for grib2 input.

pjpegion commented 4 years ago

@rsdunlapiv Thanks, it works now. @jedwards4b My run with the GNU compiler is complete. I will compare my results with yours.

pjpegion commented 4 years ago

@jedwards4b my run was with the GFS analysis files, not the new "canned winds" file that Kate Friedman supplied. Do you still have the results of the run that started this thread?

BinLiu-NOAA commented 4 years ago

> BTW, your CHGRES namelist has lots of options, and I think most of them are at their default values. Right? Anyway, we could try to use your namelist to see what happens.

@uturuncoglu, please use your own versions of namelist files. I posted mine just for your reference (because you asked). Again, as I mentioned, my tests were based on early versions of UFS_UTILS's chgres_cube and ufs-weather-model.

jedwards4b commented 4 years ago

@rsdunlapiv Can you tar the directory /scratch1/NCEPDEV/hwrf/scrub/Bin.Liu/ufs_utils_grib2/HAFS_uniform_forecast_g2new_C96_2019082900_05L/ on Hera and transfer it to Cheyenne?

To confirm, please make sure the file gfs.t00z.pgrb2.1p00.f000 is there.

We also need /scratch1/NCEPDEV/hwrf/scrub/Bin.Liu/ufs_utils_grib2/HAFS_uniform_chgres_g2new_C96_2019082900_05L.work/INPUT

Thanks

uturuncoglu commented 4 years ago

@BinLiu-NOAA JFYI, I compared your input.nml with the CIME-generated one and found the following differences:

domains_stack_size 120000000 (3000000)
iau_delthrs 6 (3)
iaufhrs 30 (-1)
lradar .true. (.false.)
moninq_fac -1.0 (1.0)
sfc_z0_type 6 (0)
max_slope 0.4 (0.15)
n_del2_weak 15 (12)
n_del4 2 (1)
zero_ocean .false. (.true.)

The values indicated by () are the CIME ones, so we have lots of differences. We were using the following documents to set the defaults for the MR-weather model:

v16beta - https://docs.google.com/document/d/1bLbVdWgEIknDQZgTuOZ6IPVEGv5jUgOrCm4GrR96oBU/edit
v15p2 - https://docs.google.com/document/u/1/d/1EKc2mAld5VsrNjTRgqUcTVG1ZcEIkllA-NrAKUs4DWI/edit

The options listed above do not even appear in the Google docs, so we are using defaults from the source code. This suggests that there is no common namelist file on the EMC side, which makes it hard to find the source of the problem. I think we need to use the same reference namelist.

Besides these, we also tested those options, and the model still fails in the same place. We are still investigating the source of the problem.

rsdunlapiv commented 4 years ago

@BinLiu-NOAA is this a setup that is only expected to work for the HAFS (hurricane) configuration? The focus of the release is global medium-range weather, so if HAFS has a lot of application-specific changes, it may not be suitable for the release at this time.

arunchawla-NOAA commented 4 years ago

@junwang-noaa and @KateFriedman-NOAA, can you help here? The namelists that @jedwards4b built based on the documentation provided are different from the ones that @BinLiu-NOAA is running.

Can we identify what those differences signify? We want to understand why the runs are blowing up.

jedwards4b commented 4 years ago

The problem seems to be strongly linked to turning OFF NSST.

arunchawla-NOAA commented 4 years ago

@jedwards4b just for clarification, do the runs work when you use the namelists from @BinLiu-NOAA?

jedwards4b commented 4 years ago

The namelists from @BinLiu-NOAA are from a different version of both the model and chgres; there appear to be namelist variables defined that we don't have, so we can't just copy the namelist. We tried to pick out all the differences that we could find and run with those - the model died in exactly the same way.

arunchawla-NOAA commented 4 years ago

OK, we will get together tomorrow at EMC and get back to you.

jedwards4b commented 4 years ago

The file gfs.t00z.pgrb2.1p00.f000 is the input used by the @BinLiu-NOAA case. It doesn't have the same name as the file on the FTP site, gfs_4_20190829_0000_000.grb2. We haven't been able to find the file on Hera to confirm whether or not it is the same file.

arunchawla-NOAA commented 4 years ago

Jim

That was an older run that Bin did. He is repeating the run(s) with the file that Kate has posted on the FTP site. chgres has worked; he is turning to the model run now.


jedwards4b commented 4 years ago

@BinLiu-NOAA @arunchawla-NOAA I've been looking at this with a debugger - the variable qrain in routine sfc_nst_run is passed in from GFS_Data()%Sfcprop%qrain, which does not appear to be associated or allocated. It's the assignment to this variable that is failing.

jedwards4b commented 4 years ago

And this is the problem: in GFS_typedefs.F90, the allocation of qrain happens within an if (Model%nstf_name(1) > 0) then conditional.
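A simplified sketch of the pattern being described, assuming the names that appear in the discussion (Sfcprop%qrain, nstf_name, the dimension im); this is illustrative, not the exact GFS_typedefs.F90 source:

! NSST-only surface fields are allocated only when NSST is active
if (Model%nstf_name(1) > 0) then
  allocate (Sfcprop%qrain(im))
  Sfcprop%qrain = 0.0              ! illustrative initialization
  ! ... other NSST-related fields ...
endif
! With nstf_name(1) = 0 the allocation is skipped, but sfc_nst_run still
! assigns to qrain (sfc_nst.f:365 in the traceback above), so the write
! lands on an unallocated array and the run dies with SIGSEGV.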

climbfuji commented 4 years ago

Is qrain going to be used at all in sfc_nst_run? If it is not, then one fix would be to remove the explicit dimensions from the variable declaration (i.e. the declaration carrying the intent attribute) and make it an assumed-shape array (i.e. use ":" instead of "im").

But this isn't a great way to do it. If sfc_nst is not used (because nstf_name(1)==0), then the calls to it shouldn't be in the suite definition file. It makes no sense to say in input.nml that you don't want to use sfc_nst, but then tell the CCPP to run it by keeping it in the CCPP suite definition file.

An alternative solution is to simply allocate the qrain array in GFS_typedefs.F90 regardless of whether nstf_name(1) is larger than zero or not.

Can someone please explain why we can't use sfc_nst with the grib2 data? I am not a fan of removing the sfc_nst calls from the suite definition file, or of making big changes like switching an entire set of physics on or off so close to the release. But that is not my decision to make ...
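For reference, a sketch of the declaration change mentioned in the first option above (a trimmed-down interface; kind_phys and the intent are assumed from typical CCPP physics argument lists):

! current style: explicit-shape dummy argument in sfc_nst_run
real(kind=kind_phys), dimension(im), intent(inout) :: qrain
! suggested alternative: drop the explicit extent, i.e. ':' instead of 'im'
real(kind=kind_phys), dimension(:),  intent(inout) :: qrain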

jedwards4b commented 4 years ago

I took the alternative approach and commented out the if block in GFS_typedefs.F90 - it looks like I'll be able to complete the 8 day forecast with that change.
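In the simplified form of the earlier sketch, that workaround amounts to:

!if (Model%nstf_name(1) > 0) then   ! conditional commented out
   allocate (Sfcprop%qrain(im))     ! qrain is now always allocated
!endif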

arunchawla-NOAA commented 4 years ago

@climbfuji we had switched the NSST option off because that information is not provided by the grib2 files. However, that is creating more problems. We are taking a close look at how the results look. If they are fine, then we will leave the NSST option on. We will have some internal discussions and get back to you.

jedwards4b commented 4 years ago

So you should already have my results with NSST on - on Hera in /scratch1/NCEPDEV/nems/Rocky.Dunlap/ufs8day.tar.gz

junwang-noaa commented 4 years ago

@climbfuji I think CCPP needs to provide a different suite file for not running NSST. GFS_typedefs is correct: if nstf_name(1)=0, then NSST is not called, and all the NSST-related fields should not be allocated.

junwang-noaa commented 4 years ago

qrain is a variable used by sfc_nst.

arunchawla-NOAA commented 4 years ago

Even though no NSST data is available at cold start, some NSST-like functionality is happening, based on results that @GeorgeGayno-NOAA showed us. This does not work with IPD, but for some reason it is working with CCPP. We need to discuss with @climbfuji.

jedwards4b commented 4 years ago

I have also completed 8 days with nstf_name(1)=0 but with the if block in GFS_typedefs.F90 commented out. Those results are on Cheyenne in /glade/scratch/jedwards/SMS_Ld8.C96.GFSv15p2.cheyenne_intel.20200214_110758_es2nmt/run

arunchawla-NOAA commented 4 years ago

So here is the summary of a long discussion between EMC, NCAR, and DTC:

  1. The flags that the CIME team is using are correct, with a few minor tweaks:
     a) When using NEMSIO, use nstf_name = 2,0,0,0,0, as this will use the NSST fields from the data stored in NEMSIO.
     b) When using grib2, use nstf_name = 2,1,0,0,0, as this will spin up NSST.

  2. If we do not want to use NSST, then we have to set nstf_name = 0,0,0,0,0 but also change the CCPP suite definition file [the confusion at EMC was that with IPD we only had to do the former].

  3. We are still determining whether the appropriate thing to do with grib2 data is to turn NSST off (and hence change the suite definition file) or to let it spin up, knowing the results will not be completely accurate. We will do a couple of runs.

The bottom line is that CIME has the right flags; we just need to decide on using 2,0,0,0,0 for NEMSIO (Jim will do a test to see how this works).

All other differences in the namelists were a red herring, since we were testing with different code versions, and should be ignored.

We still have a decision to make for NSST with grib2 data, and there is a related ticket for that:

https://github.com/ufs-community/ufs-mrweather-app/issues/87
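Expressed as &gfs_physics_nml fragments (a sketch of the three configurations summarized above):

! 1a) NEMSIO initial conditions: take the NSST fields from the input
&gfs_physics_nml
 nstf_name = 2,0,0,0,0
/

! 1b) grib2 initial conditions: let NSST spin up
&gfs_physics_nml
 nstf_name = 2,1,0,0,0
/

! 2) NSST off: also requires removing sfc_nst from the CCPP suite definition file
&gfs_physics_nml
 nstf_name = 0,0,0,0,0
/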

jedwards4b commented 4 years ago

I have created PRs https://github.com/NOAA-EMC/fv3atm/pull/67 and https://github.com/ESCOMP/FV3GFS_interface/pull/5 to resolve this issue, given that the 8-day output is acceptable.

junwang-noaa commented 4 years ago

@jedwards4b Why do we need PR NOAA-EMC/fv3atm#67? I thought we were resolving the issue using method 1) in Arun's email.

jedwards4b commented 4 years ago

I offer PR #67 as an alternative or perhaps additional choice. I think that we'll make a decision based on the output of the two runs.


jedwards4b commented 4 years ago

@arunchawla-NOAA You were going to provide an NCL script to verify the Dorian test case.

arunchawla-NOAA commented 4 years ago

We have sample NCL scripts for review; we want to check them on other systems.

ceceliadid commented 4 years ago

@jedwards4b Which directories/files do you need to get from https://ftp.emc.ncep.noaa.gov/EIB/UFS/ as input files for the Dorian case?

jedwards4b commented 4 years ago

@ceceliadid just this one: https://ftp.emc.ncep.noaa.gov/EIB/UFS/inputdata/201908/20190829/gfs_4_20190829_0000_000.grb2

If you want the complete list of fix files, that's a lot more complicated to provide.

ceceliadid commented 4 years ago

@jedwards4b Great, thanks. Yup, I need the fix files too, to set up the Dorian example on Stampede. It would be really nice for the non-preconfigured platforms to have another tarball in the FTP directory, like the simple-test-case. Are you using the same fix files that are in that simple-test-case directory, or pulling them out of global/fix?