Closed · ailishgraham closed this 3 years ago
Segmentation faults are often caused by exceeding memory limits. Assuming this is being run within pre.bash, and not manually, can you try increasing the memory (e.g., #$ -l h_vmem=128G)?
Hi Luke,
I also tried this, sorry, I should have said. I increased to 128 GB as you suggested and still got the same error. I guess it isn't possible to run pre.bash over multiple nodes? I think I saw the memory limit is 192 GB for the nodes we use; are there any larger ones we can request? If not, perhaps a solution would be to run anthro_emis multiple times in chunks and (if this solves the issue) add in some Python code to merge the netCDF files created?
Okay. Did 192 GB fail too? The preprocessors are compiled from serial code, so they would need to be rewritten to run in parallel. I suppose you could run anthro_emis over subsets of species instead of all of them together and then add them back into the same wrfchemi files, though this is a bit of a hack.
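The merge step for that workaround could look something like the sketch below. This is a hypothetical helper, not part of anthro_emis: each per-subset run produces its own set of emission variables, and merging is just copying the non-overlapping variables into one container. Dicts of numpy arrays stand in for the wrfchemi netCDF contents; with real files you would read and write the variables via a netCDF library instead.

```python
# Sketch: merge emission fields from several per-subset anthro_emis runs
# into one set of wrfchemi-style variables. Dicts of numpy arrays stand in
# for the netCDF files purely for illustration.
import numpy as np

def merge_runs(runs):
    """Combine variable dicts from separate runs; species must not overlap."""
    merged = {}
    for run in runs:
        for name, field in run.items():
            if name in merged:
                raise ValueError(f"species {name} written by two runs")
            merged[name] = field
    return merged

# Two hypothetical subset runs, each producing different species:
run_a = {"E_CO": np.ones((1, 10, 10)), "E_NO": np.ones((1, 10, 10)) * 0.8}
run_b = {"E_SO2": np.zeros((1, 10, 10))}
wrfchemi = merge_runs([run_a, run_b])
print(sorted(wrfchemi))  # ['E_CO', 'E_NO', 'E_SO2']
```

Because each subset run writes disjoint species, a plain copy with an overlap check is all the merge needs; summing would only be required if the same species were split across runs.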
ARC4 does have high-memory nodes with up to 768 GB. To use these, add #$ -l node_type=40core-768G to the preamble at the top of pre.bash, and then increase the memory request using #$ -l h_vmem=...G.
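Putting those directives together, the top of pre.bash might look like the following (the #$ -cwd and h_rt lines are illustrative additions, not from the thread):

```bash
#!/bin/bash
# Run from the submission directory (illustrative)
#$ -cwd
# Wall-clock limit (illustrative value)
#$ -l h_rt=03:00:00
# Request a 40-core 768 GB high-memory node and raise the memory limit
#$ -l node_type=40core-768G
#$ -l h_vmem=256G
```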
Though 192 GB is already a lot of memory. If you request the job output to be emailed to you (as below), you can see whether the job actually failed from exceeding its memory request.
-m be
-M email@leeds.ac.uk
If it doesn't work with 768GB of memory then the problem is something else haha.
I upped the memory to 256 GB on the high-memory nodes, but printing out the job info (below) shows it used only about 14 GB max memory, so this is not why it was crashing before. I then altered the order things were being read in (i.e. so the order the species are read in matches src_names exactly). This seems to have got things working. I hadn't realised these needed to match (or is this just a coincidence?).
qname 40core-768G.q
hostname d8mem1.arc4.leeds.ac.uk
group EAR
owner ee15amg
project ENG
department defaultdepartment
jobname pre.bash
jobnumber 2798500
taskid undefined
account sge
priority 0
qsub_time Fri Aug 27 17:44:29 2021
start_time Fri Aug 27 17:45:14 2021
end_time Fri Aug 27 17:51:42 2021
granted_pe ib-edr-part-2
slots 1
failed 0
exit_status 0
ru_wallclock 388s
ru_utime 265.451s
ru_stime 71.413s
ru_maxrss 14.074MB
ru_ixrss 0.000B
ru_ismrss 0.000B
ru_idrss 0.000B
ru_isrss 0.000B
ru_minflt 2173666
ru_majflt 12
ru_nswap 0
ru_inblock 48327584
ru_oublock 74783728
ru_msgsnd 0
ru_msgrcv 0
ru_nsignals 0
ru_nvcsw 25518
ru_nivcsw 1510
cpu 336.864s
mem 730.063GBs
io 216.952GB
iow 0.000s
maxvmem 14.073GB
arid undefined
ar_sub_time undefined
category -U tomcat -l env=centos7,h_rt=10800,h_vmem=256G,node_type=40core-768G,project=arc -pe ib-edr-part-* 1
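As a quick way to make this check in future, the memory fields of the job-accounting output can be compared programmatically. A small sketch (the field names maxvmem and h_vmem are taken from the output above; the parsing helpers are hypothetical):

```python
# Sketch: compare the peak memory a job actually used (maxvmem) against the
# requested limit (h_vmem) in SGE job-accounting output like the above.
import re

def parse_gb(value):
    """Convert strings like '14.073GB' or '256G' to a float number of GB."""
    m = re.fullmatch(r"([0-9.]+)\s*G[Bb]?", value.strip())
    if not m:
        raise ValueError(f"unrecognised memory value: {value!r}")
    return float(m.group(1))

def memory_headroom(job_info):
    """Return (used_gb, requested_gb) from raw job-accounting text."""
    used = parse_gb(re.search(r"maxvmem\s+(\S+)", job_info).group(1))
    requested = parse_gb(re.search(r"h_vmem=([^,\s]+)", job_info).group(1))
    return used, requested

sample = """maxvmem 14.073GB
category -U tomcat -l env=centos7,h_rt=10800,h_vmem=256G,node_type=40core-768G
"""
used, requested = memory_headroom(sample)
print(used, requested)  # 14.073 256.0
```

A job killed for exceeding its limit would show maxvmem at or near the h_vmem request; here the headroom is enormous, which is consistent with memory not being the cause.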
Okay. That sounds like it wasn't a memory issue.
I'm not sure I follow your solution. The list of species in src_names is the order of processing. What exactly did you change in the anthro_emis input namelist?
I've been testing what fixed it by reverting the changes in the anthro_emis.inp file one by one, but I have yet to find what breaks it again. To fix it:
- I first tested reading in only the totals for each species (i.e. emis_tot), adding one species at a time.
- Once that worked, I read in all sectors except awb and total_no_awb ('emis_awb', 'emis_no_awb'), again adding one species at a time.
- Then I added awb and total_no_awb last.
- I did all of those steps with high memory and then reduced the memory to see if it still worked.
- For all of the above steps I kept the mapping for each species (emis_map) in the same order as the species list (src_names), i.e. src_names = 'CO(28)', 'NOx(30)', 'SO2(64)', ... with emis_map = 'CO->CO(emis_tot)', 'NO->0.8NOx(emis_tot)', 'NO2->0.2NOx(emis_tot)', 'SO2->SO2(emis_tot)', ...

I have since tested not matching the order of src_names and emis_map, and this still works.
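For reference, a minimal test namelist fragment along the lines described above (totals only, added one species at a time, with emis_map entries in the same order as src_names; the values are taken from the thread and are illustrative):

```fortran
! Minimal anthro_emis.inp fragment: totals only, emis_map entries
! in the same order as src_names (illustrative)
src_names      = 'CO(28)', 'NOx(30)'
sub_categories = 'emis_tot'
emis_map       = 'CO->CO(emis_tot)',
                 'NO->0.8NOx(emis_tot)',
                 'NO2->0.2NOx(emis_tot)'
```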
Looking through the anthro_emis source code, the issue occurred after area_mapper had finished for the final species in the list and the wrfchemi files had been created. This suggests it was within the 'cleanup for next domain' step (in anthro_emis.f90), as this needs to complete before the 'anthro_emis successful' message is printed.
Okay. Well, maybe it was a temporary hardware glitch. I'm not sure there is much value in persisting with trying to replicate the old bug now that things are working again with the original settings. In summary, it sounds like there is nothing to change in the general settings, and we can close this issue now.
Yes I agree, thanks for the help.
Running anthro_emis (the /nobackup/WRFChem/anthro_emis version) with new emissions causes a segmentation fault (see error below). The emissions netCDF files follow the same format as EDGAR-HTAP2 (but include an extra sector, 'emis_tot_no_awb'). The segmentation fault occurs both when reading in individual sectors (of which there are 14) and when reading in only the total (i.e. 1 sector). The fault seems to occur once 12 files have been read in (each file is 3.5 GB in size). The error always occurs after reading in the last file (so if more than 12 files are read in, e.g. 15, it will occur on the final file, file 15). The segmentation fault prevents the final statement 'anthro_emis complete' from being printed. However, the wrfchemi_00z and wrfchemi_12z files are generated and look reasonable.
I have tried following the online help on the GEOS-Chem website (http://wiki.seas.harvard.edu/geos-chem/index.php/Segmentation_faults) to find the root of the error. The GEOS-Chem page lists the common causes such faults can arise from.
error:
will use source file for C2H6
get_src_time_ndx; src_dir,src_fn = /nobackup/ee15amg/wrf3.7.1_data/emissions/EDGARv52015_CAMS2016_MEIC2017/EDGARv5_2015_CAMS_v4.2_2016_MEIC_v1.3_2017_Malley_C2H6_monthly_0.1x0.1.nc
get_src_time_ndx; interp_date,datesec,ntimes = 20170904 0 12
get_src_time_ndx; tndx = 9
aera_interp: raw dataset max value = 1.7067602E-08
aera_interp: raw dataset max indices = 2188 991
aera_interp: raw dataset max value = 7.5946782E-10
aera_interp: raw dataset max indices = 2048 1322
aera_interp: raw dataset max value = 3.3164188E-10
aera_interp: raw dataset max indices = 2314 1258
aera_interp: raw dataset max value = 1.1780937E-10
aera_interp: raw dataset max indices = 2178 1460
aera_interp: raw dataset max value = 8.0596892E-12
aera_interp: raw dataset max indices = 1111 1339
aera_interp: raw dataset max value = 0.0000000E+00
aera_interp: raw dataset max indices = 1 1
aera_interp: raw dataset max value = 1.3037157E-13
aera_interp: raw dataset max indices = 2854 851
aera_interp: raw dataset max value = 1.7423774E-08
aera_interp: raw dataset max indices = 2188 991
aera_interp: raw dataset max value = 1.7423773E-08
aera_interp: raw dataset max indices = 2188 991
aera_interp: raw dataset max value = 4.7226974E-13
aera_interp: raw dataset max indices = 1796 1415
aera_interp: raw dataset max value = 4.8074793E-13
aera_interp: raw dataset max indices = 1886 1401
aera_interp: raw dataset max value = 2.9458816E-12
aera_interp: raw dataset max indices = 922 1320
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line     Source
anthro_emis        0000000000479C33  for__signal_handl  Unknown  Unknown
libpthread-2.17.s  00007F9FA5BD45D0  Unknown            Unknown  Unknown
libc-2.17.so       00007F9FA587D71C  cfree              Unknown  Unknown
anthro_emis        00000000004AE590  for_dealloc_alloc  Unknown  Unknown
anthro_emis        000000000042D658  Unknown            Unknown  Unknown
anthro_emis        000000000044A318  Unknown            Unknown  Unknown
anthro_emis        000000000040C5E2  Unknown            Unknown  Unknown
libc-2.17.so       00007F9FA581A495  __libc_start_main  Unknown  Unknown
anthro_emis        000000000040C4E9  Unknown            Unknown  Unknown
My anthro_emis.inp file is as follows:
anthro_dir = '/nobackup/ee15amg/wrf3.7.1_data/emissions/EDGARv52015_CAMS2016_MEIC2017'
domains = 1
src_file_prefix = 'EDGARv5_2015_CAMS_v4.2_2016_MEIC_v1.3_2017Malley'
src_file_suffix = '_monthly_0.1x0.1.nc'
src_names = 'CO(28)','NOx(30)','SO2(64)','NH3(17)','BC(12)','OC(12)','PM2.5(1)','BIGALK(72)','BIGENE(56)',
            'C2H4(28)','C2H5OH(46)','C2H6(30)'
sub_categories = 'emis_ind',        ! CAMS, industrial non-power + CAMS, fugitive emissions + CAMS, solvent emissions
                 'emis_dom',        ! CAMS, residential energy and other + CAMS, solid waste and waste water
                 'emis_tra',        ! CAMS, off road transport + CAMS, road transport
                 'emis_ene',        ! CAMS, power generation
                 'emis_ship',       ! CAMS, shipping
                 'emis_agr',        ! CAMS, agricultural soils + CAMS, agricultural livestock
                 'emis_awb',        ! CAMS, agricultural waste burning
                 'emis_tot',        ! CAMS, total with awb
                 'emis_tot_no_awb', ! CAMS, total without awb
                 'emis_cds',        ! EDGAR-v5, aircraft - climbing and descent
                 'emis_crs',        ! EDGAR-v5, aircraft - cruise
                 'emis_lto',        ! EDGAR-v5, aircraft - landing and take off
                 'emis_1A1_1A2',    ! EDGAR-HTAPv2.2, CH4 only, Energy manufacturing transformation
                 'emis_1A3a_c_d_e', ! EDGAR-HTAPv2.2, CH4 only, Non-road transportation
                 'emis_1A3b',       ! EDGAR-HTAPv2.2, CH4 only, Road transportation
                 'emis_1A4',        ! EDGAR-HTAPv2.2, CH4 only, Energy for buildings
                 'emis_1B1',        ! EDGAR-HTAPv2.2, CH4 only, Fugitive from solid
                 'emis_1B2a',       ! EDGAR-HTAPv2.2, CH4 only, Oil production and refineries
                 'emis_1B2b',       ! EDGAR-HTAPv2.2, CH4 only, Gas production and distribution
                 'emis_2',          ! EDGAR-HTAPv2.2, CH4 only, Industrial process and product use
                 'emis_4A',         ! EDGAR-HTAPv2.2, CH4 only, Enteric fermentation
                 'emis_4B',         ! EDGAR-HTAPv2.2, CH4 only, Manure management
                 'emis_4C_4D',      ! EDGAR-HTAPv2.2, CH4 only, Agricultural soils
                 'emis_4F',         ! EDGAR-HTAPv2.2, CH4 only, Agricultural waste burning
                 'emis_6A_6C',      ! EDGAR-HTAPv2.2, CH4 only, Solid waste disposal
                 'emis_6B',         ! EDGAR-HTAPv2.2, CH4 only, Waste water
                 'emis_7A'          ! EDGAR-HTAPv2.2, CH4 only, Fossil fuel fires
serial_output = .false.
!data_yrs_offset = 2 ! make sure to update this!
data_yrs_offset = 1  ! make sure to update this!
emissions_zdim_stag = 1
! make sure to update these dates!
start_data_time = '2016-01-01_00:00:00'
stop_data_time  = '2016-12-31_00:00:00'
emis_map = !'CO->CO(emis_tot)',
           !'NO->0.8NOx(emis_tot)',
           !'NO2->0.2NOx(emis_tot)',
           !'SO2->SO2(emis_tot)',
           !'NH3->NH3(emis_tot)',
           !'ECI(a)->0.1BC(emis_tot)',
           !'ECJ(a)->0.9BC(emis_tot)',
           !'ORGI(a)->0.1OC(emis_tot)',
           !'PM25I(a)->0.1PM2.5(emis_tot)'
/