NGEET / fates

repository for the Functionally Assembled Terrestrial Ecosystem Simulator (FATES)

Reading lightning data for use with SPITFIRE #562

Closed slevis-lmwg closed 4 years ago

slevis-lmwg commented 5 years ago

I have completed two FATES fire simulations with the default lightning parameter ED_val_nignitions:

1. At the CZ2 site with 100% ponderosa pine, initialized with observed stand data (point simulation)
2. On a 2-D transect that includes CZ2 as one of the grid cells; started from bare ground, ran 20 years without fire, then continued with fire

I encountered a problem, however, when I replaced the scalar ED_val_nignitions in this line

```fortran
currentPatch%NF = ED_val_nignitions * currentPatch%area/area /365
```

with the vector lnfm24(g), the latter being the daily average of 3-hourly lightning data from a 2-D dataset. I used these two lines in subroutine area_burnt to locate the index g:

```fortran
p = currentCohort%pft
g = patch%gridcell(p)
```

The problem is that fire can eliminate currentCohort%pft entirely, so the model crashes when I ask for it.
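To make the failure mode concrete, here is an illustrative sketch in Python (not the actual FATES Fortran; all structure names are hypothetical stand-ins). The gridcell index g is reached through a cohort's pft index, so once fire eliminates every cohort there is nothing left to dereference:

```python
# Hypothetical sketch: the gridcell index is only reachable through a cohort,
# so a patch whose cohorts have all burned cannot supply it.
def gridcell_index(patch):
    if not patch["cohorts"]:
        # Equivalent of dereferencing a null currentCohort pointer
        raise RuntimeError("no cohort left to supply a pft index")
    p = patch["cohorts"][0]["pft"]   # p = currentCohort%pft
    return patch["gridcell"][p]      # g = patch%gridcell(p)

patch = {"cohorts": [{"pft": 1}], "gridcell": {1: 7}}
g = gridcell_index(patch)       # works while a cohort exists

patch["cohorts"].clear()        # fire eliminates the cohorts entirely
# gridcell_index(patch) would now raise instead of returning g
```

This is why locating g through something that survives a stand-replacing fire (e.g. the patch or site itself, rather than a cohort) avoids the crash.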

I have a temporary fix. In subroutine fire_intensity I have replaced

```fortran
currentPatch%fire = 1 ! Fire... :D
```

with

```fortran
if (currentPatch%total_tree_area + currentPatch%total_grass_area > 0._r8) then
   currentPatch%fire = 1 ! Fire... :D
else
   currentPatch%fire = 0
end if
```

@jkshuman does not like this because this way a bare ground grid cell full of dead litter cannot burn.

My question @rgknox @ckoven @pollybuotte @lmkueppers @ekluzek is this: Can anyone recommend an alternate way of locating the index g? I think this would eliminate the problem altogether.


jkshuman commented 5 years ago

Thanks for opening this @slevisconsulting. Tagging @rosiealice for her take as well. My concern is that this temporary fix, requiring tree or grass crown area in the patch for fire, has implications for regeneration and regrowth. If the ground fuel can support a fire, I think we should burn it. This connects to the larger issue that fire should in reality be impacting the soil bed as it relates to regeneration. This happens at the patch level on a given day, so maybe things work out at the site scale? It is not clear to me how this may alter fire behavior overall. I want to discuss this broadly in terms of long-term behavior and expectations for the model, specifically how this might impact regeneration (short-term) and migration (long-term development).

This build-up of fuel is also closely related to fire duration (we have a cap), and that is in need of development. (On my list of things to do.)

slevis-lmwg commented 5 years ago

The 2-D simulation with 2-D lightning data fails, showing me that what I thought worked in the 1-D simulation worked only because, in 1-D, g always equaled 1.

@jkshuman in our call you mentioned some met variables (e.g. precip) used by FATES. I will look at how they are used and will try to mimic that.

slevis-lmwg commented 5 years ago

Seems to work now and, as Jackie and I hoped, without requiring total_tree_fraction or total_grass_fraction > 0.

rgknox commented 5 years ago

@slevisconsulting , how are things going here? Feel free to link the branch that you are using and I can look over the code and see if things make sense to me too.

One thing I'm noticing is that, by bringing in a spatial lightning dataset, there are CLM data structures that need to be accessed in the fire code. This is something that we could add to the coupling interface (and that I could help with as well) if need be.

jkshuman commented 5 years ago

@slevisconsulting glad you checked functionality before we got into a big discussion on the regeneration impacts. and even better news that you fixed it. @rgknox putting this into the coupler makes sense. Reading a lightning dataset will become more standard for running the fire model as we move forward. (Though I want to retain the ability to just use a static value for testing.)

slevis-lmwg commented 5 years ago

@lmkueppers @jkshuman at Friday's meeting we talked about making a list of fire-related model variables that we might compare to obs if obs were available. As a starting point, here are the fire-related variables that I have found in the CLM-FATES history output:

- FIRE_AREA (fraction of grid cell)
- FIRE_INTENSITY (kJ/m/s)
- FIRE_NESTEROV_INDEX (unitless)
- FIRE_ROS, i.e. rate of spread (m/min)
- FIRE_TFC_ROS, i.e. total fuel consumed (unitless)
- Fire_Closs (gC/m2/s), carbon loss due to fire
- SCORCH_HEIGHT (m)
- SUM_FUEL (gC/m2), related to ROS, so omits 1000-hr fuels
- fire_fuel_bulkd (?), fuel bulk density
- fire_fuel_sav (?), fuel surface-to-volume ratio
- fire_fuel_mef (units?), fuel moisture
- FIRE_FUEL_EFF_MOIST (units?), different from the previous variable?
- FUEL_MOISTURE_NFSC (?), size-resolved fuel moisture
- M5_SCLS (#/ha/yr), fire mortality by size

jkshuman commented 5 years ago

@slevisconsulting let's open a separate issue for tracking these variables, but sounds good. I can add details as well and clarify a few variables.

jkshuman commented 5 years ago

@slevisconsulting @lmkueppers @rosiealice the read lightning data works with the general CLM code across the tropics! Run is still going, but definitely different in space and time in the first two years. Thanks @slevisconsulting

rosiealice commented 5 years ago

Cool. Nice job everyone!

jkshuman commented 5 years ago

@slevisconsulting @lmkueppers @pollybuotte @rgknox there is a problem with the lightning branch. I am not sure if it is on the ctsm side where I am using https://github.com/rgknox/ctsm/tree/api_8_works_w_read_lightning or on the fates side where I am using https://github.com/jkshuman/fates/tree/ignition-fixes-tag130_api8. The runs I did with these branches initially have burned area, but then by year 5 there is no fuel and burned area goes to zero. For development of the #572 and #573 I suggest using the master branch and then these respective crown fire branches, until the lightning piece is sorted out.

Master tag 130 and api8 were tested with PRs #524 and #561 (both in master already), and both behave "normally" out to year 10 with respect to fire behavior across the tropics.

slevis-lmwg commented 5 years ago

@jkshuman here are some additional clues:

1) The lightning code worked before I updated to newer versions of ctsm/fates. These were the ctsm/fates versions when it worked: api_7.3/read_lightning_works_w_api_7.3. I have shown figures from 1-D (CZ2) simulations with established stand initial conditions and 2-D (transect) simulations with bare ground initial conditions that ran for 30+ years and kept burning throughout the runs.

2) On 9/16 I posted that I was encountering the same problem in my active crown fire Pull Request, which does not include the lightning code. The problem still occurs when I shut off crown fire. I branched my active_crown_fire branch from @jkshuman's passive_crown_fire branch #572 .

Before reading @jkshuman's post I was about to run with my updated lightning branch to determine whether or not the problem was with #572 , but you have now confirmed that the updated lightning branch has the same problem.

My question to @lmkueppers is this: Considering that @pollybuotte is about to start thousands of ensemble simulations using the new lightning code, do you prefer that:

I will wait for @lmkueppers's decision on this before I complete additional work.

lmkueppers commented 5 years ago

Hm. This is annoying... I'm hoping that @rgknox or @glemieux have some insights here.

@pollybuotte is planning to run with CLM lightning across a transect with fire on, so she would need this to be working. I don't know which version of FATES she needs to use (or the diffs between these options). @slevisconsulting, I think it's worth getting this debugged so we're not moving backwards with Polly's runs. Thanks.

jkshuman commented 5 years ago

@slevisconsulting @lmkueppers @rgknox I added a few details to my comment last night. Running with the master branch that has PR #561, the fire behavior is normal through 18 years in the tropics. So it is something unique to those other branches. (This is good news: master is still running normally!)

There was a conflict with @slevisconsulting's lightning branch, so maybe I didn't resolve it correctly. There is also one commit on those branches (read_lightning and passive crown fire), https://github.com/jkshuman/fates/commit/8b44706fe400eb9c3d32ebe7cb00d65daf76f65b, that is not on master. Will revert it and test.

jkshuman commented 5 years ago

@slevisconsulting @lmkueppers @rgknox my initial testing through 10 years shows that reverting that commit seems to fix the problem. I am testing it within the passive crown fire branch and the read lightning branch: https://github.com/jkshuman/fates/tree/passive_crown_fire and https://github.com/jkshuman/fates/tree/read_lightning-ignfix-tag130-api8. Perhaps @rgknox can weigh in on why these changes caused this fail, but hopefully this gets us back on track. Will let you know how these tests look. Simulations running now.

rgknox commented 5 years ago

> The runs I did with these branches initially have burned area, but then by year 5 there is no fuel and burned area goes to zero. For development of the #572 and #573 I suggest using the master branch and then these respective crown fire branches, until the lightning piece is sorted out.

Question: if there is no fuel, this could either be because 1) there is no influx of new fuel, 2) there is excessive burning of existing fuel, or 3) a bug is sending it to the ether (which our checks would catch).

Do you have a sense of which this might be, or did I miss a possibility? I can't imagine case 1 if there are any live plants left, so the whole ecosystem must have collapsed and burned, right?

jkshuman commented 5 years ago

@rgknox I have stepped onto the event horizon and uncovered a wormhole into instability, at least that is how this fail feels. @rgknox it seems to be something with the fuels. The system converts to almost all grass in my tests, but the fuels are not mapped correctly. The live grass is disconnected from the fuel pool structure by year 2. There are periodic fires, but not in the style that I am used to. The whole thing is completely strange. Good news is that reverting that commit restores balance to the universe. https://github.com/jkshuman/fates/commit/8b44706fe400eb9c3d32ebe7cb00d65daf76f65b

A few more simulation years and I will push the changes to my branches. (passive crown fire and read lightning)

jkshuman commented 5 years ago

@slevisconsulting @pollybuotte please test the updates to see if things behave.

jkshuman commented 5 years ago

@slevisconsulting please check the FUEL_MOISTURE_NFSC as another diagnostic of the fire behavior in regards to this issue. So far things have not totally failed for both a lightning test and a crown fire test, but I may be missing something.

jkshuman commented 5 years ago

@slevisconsulting @pollybuotte sadly the read lightning branch also failed in year 5. I am trying to create a new branch as a merge between pr #561 and @slevisconsulting branch https://github.com/slevisconsulting/fates/tree/read_lightning

My test with master for PR #561 is good through year 33.

jkshuman commented 5 years ago

@slevisconsulting @pollybuotte @lmkueppers @rgknox I have a NEW functional branch with read lightning based off of PR #561 from master: https://github.com/jkshuman/fates/tree/read_lightning_pr561. It appears to be reading the lightning file and functioning just fine through year 6 across the tropics (normal fuels extent, fire area, etc.). Please test and let me know. Not sure where the fail is inside those other branches of mine. Will think of a plan for the passive crown fire branch. What a hassle.

slevis-lmwg commented 5 years ago

@jkshuman thank you for trying to figure this out. I'm trying to test your new branch, but I'm coming across a different problem. My ./case.submit gives an error. Are you using @rgknox 's api8 that was modified for read_lightning or something else?

jkshuman commented 5 years ago

@slevisconsulting yes, I am using Ryan's read_lightning api branch. @lmkueppers @pollybuotte @rgknox happy to report this run was successful through year 21. Please confirm that it is also working in CA.

slevis-lmwg commented 5 years ago

Great news @jkshuman ! One more question, with the hope of narrowing down my ./case.submit error: Are you running on cheyenne? Or izumi/hobart? The error that I get is on izumi.

jkshuman commented 5 years ago

hobart.



slevis-lmwg commented 5 years ago

Got rid of the ./case.submit error. (In case this helps others: I had forgotten to place the three user_datm.streams.txt.CLMCRUNCEP.* files needed for our 2D transect runs in my case directory :-))

@jkshuman, results from my test run look correct in the first few years. Again, thank you for removing the bug (which, if I understood correctly, still remains unidentified in the versions that fail).

@pollybuotte if you would like me to double check one of your 2D transect cases before you embark on the multi-thousand ensembles, pls let me know.

pollybuotte commented 5 years ago

@slevisconsulting it would be great if you could run the elevation transect grid from bare ground to check. Domain file is domain.lnd.CA_ssn_czo_4km.nc Thanks!

slevis-lmwg commented 5 years ago

That is the one that I'm testing, and it continues to work correctly this far.


jkshuman commented 5 years ago

@slevisconsulting @pollybuotte @rgknox yep, that bug is unidentified, and it is not worth my time to hunt down the corruption. I will probably rename that branch as BAD on my repo, and then delete it once we get rolling forward with these simulations.

glemieux commented 5 years ago

@jkshuman I'll keep an eye out for the branch rename and see if I can ID what the bug might be.

jkshuman commented 5 years ago

@glemieux careful out there on the event horizon before stepping into the wormhole of fail on this one. https://github.com/jkshuman/fates/tree/ignition-fixes-tag130_api8 https://github.com/jkshuman/fates/tree/read_lightning-ignfix-tag130-api8

The interesting thing about this is that the PR #561 was from MY branch! So that is a big clue...

slevis-lmwg commented 5 years ago

More good news: With this bug now removed, I went back and differenced the read_lightning_pr561 branch (https://github.com/jkshuman/fates/tree/read_lightning_pr561) from my active_crown_fire branch/PR (https://github.com/jkshuman/fates/pull/5) and discovered the cause of the no-fire-after-yr-1 problem. I pushed my corrections to that PR with an explanation of the solution to the problem.

jkshuman commented 5 years ago

@slevisconsulting that's good. I will implement this fix into passive_crown_fire and test. Good news that I don't need to roll things all the way back in that branch. EDCohortdynamics breaking the world....

rgknox commented 5 years ago

If you do believe that there is a bug in master, or something that needs attention (i.e. a vulnerability that could enter master), could either of you (@jkshuman or @slevisconsulting) encapsulate it in a specific issue? I'm not clear on where the problem is. For instance, I'm seeing reference to EDCohortdynamics, but I'm not sure what the reference refers to. Or maybe the bug was not necessarily isolated and identified yet, but was circumvented somehow by using the correct mixture of branches and commits?

jkshuman commented 5 years ago

@rgknox I am chatting with @glemieux and I will update you on my test. There was one line in zero_cohort that I had deleted and forgot to revert yesterday. In my busy, hasty day, I literally didn't see it on the commit until Greg and I looked at it today. Added the zeroing of fraction_crown_burned in EDCohortdynamics and am testing now.

jkshuman commented 5 years ago

@rgknox @glemieux rest assured there is not a hidden bug, but there may be a vulnerability? I mistakenly introduced this behavior, so it is not hidden, but we should certainly be aware of why things went haywire. This test addresses that: the test I ran, which reverts all parts of the jkshuman@8b44706 commit, fixes the buggy behavior. (Lesson to self: just revert the commit rather than doing it by hand. I missed an obvious line deletion in my haste...)

Specifically, in the bad version the fire area quickly becomes odd (small point values across SA rather than broad areas), fuel moisture shows the same bad point pattern where it should be broad coverage even with burning, and TLAI shows coverage of vegetation, suggesting fuel is present and further suggesting this is bad behavior. In that bad commit, three variables were removed from zero_cohort (frac_crown_burned, crownfire_mort, cambial_mort). I mistakenly left out frac_crown_burned when I reverted that commit manually. It may be worth figuring out why this one variable (or set of variables) creates this bad behavior. Adding a reference of this issue to the zero_cohort/nan_cohort issues #575 and #231 so we can revisit my trip to the event horizon. Fun times.

I attach a few screenshots of the simulations (left is correct, right is the buggy version). Simulations on Hobart:
FIX: /scratch/cluster/jkshuman/archive/Fire_zero-frac-crown-test_4x5_9c12e402_0760b91a/lnd/hist
BAD: /scratch/cluster/jkshuman/archive/Fire_lightning_test_junk_4x5_a1a8efe5_bc1d27ed/lnd/hist

[Attached screenshots: FIRE_AREA, FUEL_MOISTURE_NFSC, and TLAI at year 5; run with zeroing in EDCohortDynamics on the left, run with zeroing only in SFMain on the right.]

glemieux commented 5 years ago

@jkshuman and I chatted about this yesterday; my guess is that this behavior is the result of the compiler trying to interpret what to do with frac_crown_burned, since it was an uninitialized variable. I'm working on a fix in CTSM for a similar issue (#548) right now. @rgknox if this is the case, it's probably another reason to try and adopt compiler flag options and other diagnostics during testing, as you suggested in https://github.com/ESMCI/cime/issues/3205.

jkshuman commented 5 years ago

@glemieux @rgknox I didn't think much of making this change at the time. frac_crown_burned is in nan_cohort, and it is set to zero inside SFMain. I was not expecting this sort of fail. (Another lesson to self: test everything...)

https://github.com/NGEET/fates/blob/1ad93c311ed1a5d45df656b9031c8714a7071e64/biogeochem/EDCohortDynamicsMod.F90#L537

https://github.com/NGEET/fates/blob/1ad93c311ed1a5d45df656b9031c8714a7071e64/fire/SFMainMod.F90#L903-L928

jkshuman commented 4 years ago

@slevisconsulting @pollybuotte @lmkueppers I just returned from the FireMIP conference and was alerted to the fact that using the LIS lightning data requires a scaling adjustment to account for the fraction of cloud-to-ground flashes (0.20) and the efficiency/energy required to generate fires (0.04). I confirmed in the CLM code that the Li fire model uses 0.22 as a scaling factor on this data. We will need to update this for these simulations, and make a decision whether to use 0.20, as many fire models do, or both 0.20 and 0.04. With this it will give natural fires only. Anthropogenic fires would be handled with a different set of equations.
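The proposed scaling can be sketched as follows (Python illustration, not model code; the function name is hypothetical, and the factors are the 0.20 and 0.04 values from the discussion above):

```python
# LIS data give total flash rates, so the cloud-to-ground fraction (0.20)
# and, optionally, an ignition efficiency (0.04) reduce total flashes to
# natural ignition candidates. Whether to apply one factor or both is the
# open question above.
CG_FRACTION = 0.20          # cloud-to-ground share of total flashes
IGNITION_EFFICIENCY = 0.04  # share of CG strikes able to start a fire

def scale_lis_flashes(flash_rate, apply_efficiency=False):
    rate = flash_rate * CG_FRACTION
    if apply_efficiency:
        rate *= IGNITION_EFFICIENCY
    return rate
```

For example, 100 flashes/day would become 20 cloud-to-ground strikes/day, or 0.8 potential ignitions/day if the efficiency factor is also applied.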

slevis-lmwg commented 4 years ago

Good catch @jkshuman thank you!

slevis-lmwg commented 4 years ago

And welcome back!

jkshuman commented 4 years ago

@slevisconsulting thanks! I highly recommend visiting South Africa. I am testing this scaling factor across the tropics and SA in a 4x5 run. I am realizing that we have no connection between fire and ignitions outside of area burned. Will open a separate issue on this for better tracking.

slevis-lmwg commented 4 years ago

> [...] using the LIS lightning data requires a scaling adjustment to account for the amount of cloud to ground flashes (.20) and the efficiency/energy required to generate fires (0.04). I confirmed in the CLM code that the Li fire model uses .22 as a scaling factor on this data. We will need to update this for these simulations. and make a decision to use .20 as many fire models do, or both 0.20 and 0.04.

Regarding the question of one or both scaling factors, we need to make sure not to double count. Is it possible that the 0.04 factor corresponds to SPITFIRE's FDI?

```fortran
AB = size_of_fire * NF * currentSite%FDI
```

in subr. area_burnt, where AB is area burned, NF is number of ignitions, and FDI is the ignition potential.
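The double-counting concern can be made concrete with a toy calculation (Python; all numbers illustrative, and the supposition that FDI plays the role of the 0.04 efficiency is exactly the open question, not a fact from the code):

```python
# Toy numbers only: if FDI already expresses ignition potential, applying
# an ignition-efficiency factor to NF as well would discount ignitions twice.
size_of_fire = 1.0     # illustrative area units
NF = 20.0              # ignitions after the 0.20 cloud-to-ground scaling
FDI = 0.04             # supposing FDI stands in for the efficiency factor

AB_fdi_only = size_of_fire * NF * FDI            # efficiency applied once
AB_double   = size_of_fire * (NF * 0.04) * FDI   # applied twice: 25x smaller
```

If that supposition holds, the 0.04 factor should be dropped from the lightning scaling and left to FDI.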

jkshuman commented 4 years ago

@slevisconsulting I am testing a few changes, and found a change you made to FDI with read lightning data that needs to be reverted. The FDI calculation should not be changed. The ignitions dataset only provides strike data, not successful ignitions. Successful ignitions is related to fuel conditions and climate. So this new conditional should be removed, and the original equation retained.

https://github.com/slevisconsulting/fates/blob/dc8f83a24164ea0f28ad8b385cb86e563214deb8/fire/SFMainMod.F90#L1037-L1041

slevis-lmwg commented 4 years ago

> @slevisconsulting I am testing a few changes, and found a change you made to FDI with read lightning data that needs to be reverted. The FDI calculation should not be changed. The ignitions dataset only provides strike data, not successful ignitions. Successful ignitions is related to fuel conditions and climate. So this new conditional should be removed, and the original equation retained.
>
> https://github.com/slevisconsulting/fates/blob/dc8f83a24164ea0f28ad8b385cb86e563214deb8/fire/SFMainMod.F90#L1037-L1041

@jkshuman I disagree, unless I have misunderstood your comment:

The first half of this if-statement corresponds to cases where the input data literally represent successful ignitions rather than lightning strikes, e.g. Bin Chen's data. @lmkueppers' group would like to keep this option, as far as I know. In fact, I now realize that we need a similar if-statement to bypass the cloud-to-ground coefficient when using Bin's data...

jkshuman commented 4 years ago

@slevisconsulting I agree that Bin's successful-strike dataset is a special case: it should use a conditional for FDI and, yes, would need to bypass that cloud-to-ground reduction on lightning. For the FDI conditional on Bin's data, @slevisconsulting @lmkueppers @pollybuotte, it would be worth adding a flag or print statement for situations where there is an ignition and FDI is set to 1, but FDI would have indicated low ignition potential without this data. We could print both the ignition FDI of 1 from Bin's strike data and the calculated FDI, and then look at that alongside the climate and fuel data. Those differences would tell us how the climate forcing data and the acc_NI, or this equation, are potentially missing details. That would be a nice evaluation with Bin's data, and a nice evaluation of this part of the fire model. Let's talk about that. (FDI affects area burned and fire duration, so these differences carry through to other parts of the fire code.)

pollybuotte commented 4 years ago

I'd like to request an additional check that the lightning file has been read. This would prevent a user from running with the wrong CLM branch. As it is now, running with fates_next_api does not cause a fail, but no fire results because there are no ignitions.
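The requested guard could look something like this (Python pseudologic, not CLM code; the names and the all-zero heuristic are hypothetical): fail fast at initialization if lightning data were requested but the stream is empty, rather than silently running with zero ignitions.

```python
# Hypothetical sketch: abort when a lightning stream was requested but never
# populated, instead of silently producing zero ignitions for the whole run.
def check_lightning_stream(use_lightning_data, lnfm24):
    if not use_lightning_data:
        return
    if lnfm24 is None or len(lnfm24) == 0 or all(v == 0.0 for v in lnfm24):
        raise RuntimeError(
            "lightning data requested but no strikes were read; "
            "check that the CLM branch supports the lightning stream")

check_lightning_stream(False, None)        # fine: lightning not in use
check_lightning_stream(True, [0.1, 0.0])   # fine: data present
# check_lightning_stream(True, []) would raise at startup
```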

slevis-lmwg commented 4 years ago

@jkshuman updated me as follows: The lightning work was updated to a more recent FATES branch here: FATES branch: https://github.com/jkshuman/fates/tree/fire-threshold-tag131-api80-lightning Corresponding CTSM branch: https://github.com/rgknox/ctsm/tree/api_8_works_w_read_lightning

Also... @pollybuotte encountered an error when setting use_spitfire = .false. I asked her to try the following:

In src/main/clm_initializeMod.F90, where you see

```fortran
if (use_fates) then
   call sfmain_inst%initAccVars(bounds_proc)
```

change the first line to say

```fortran
if (use_fates .and. use_fates_spitfire) then
```

Do the same in src/main/clm_driver.F90, where you see

```fortran
if (use_fates) then
   call sfmain_inst%UpdateAccVars(bounds_proc)
```

Need to add use_fates_spitfire also to the corresponding use clm_varctl, only: ... lines.

@pollybuotte confirmed that the above works.

jkshuman commented 4 years ago

@slevisconsulting @pollybuotte @ckoven In the interim as this development happens, I merged old lightning into a more recent FATES tag: FATES tag: https://github.com/jkshuman/fates/tree/tag_sci.1.33.1_api.8.1.0-lightning CTSM tag: https://github.com/jkshuman/CTSM/tree/api_8.1.0_works_w_read_lightning

I am testing in tropics and it seems fine so far. @pollybuotte said she would test in CA. Let me know if there is a problem anywhere. (I hate messing with the api...)

slevis-lmwg commented 4 years ago

Inviting @ekluzek to #562.

@ekluzek will open a corresponding issue on the ctsm side with his proposed approach. Thank you for your help with this, Erik.

rgknox commented 4 years ago

A plan we talked about is to change the use_fates_spitfire namelist parameter in CLM from a binary switch to an integer flag, where:

- 0 means it is off
- 1 means SPITFIRE should be active, but without external datasets
- 2, 3, ... mean various dataset combinations are available to FATES SPITFIRE from the HLM and should be expected (lightning strikes, human density, GDP, etc.)
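The proposal could be sketched as follows (Python illustration; the meanings of values 2 and up are placeholders from the discussion, not a settled design):

```python
# Hypothetical mapping for an integer use_fates_spitfire flag.
SPITFIRE_MODES = {
    0: "spitfire off",
    1: "spitfire on, no external datasets",
    2: "spitfire on, lightning dataset expected from the HLM",
    # 3, ...: further dataset combinations (human density, GDP, etc.)
}

def spitfire_uses_external_data(flag):
    # Any value >= 2 implies the HLM must supply at least one dataset.
    return flag >= 2
```

This keeps the static-ignition option (flag 1) that @jkshuman wants for testing, while making dataset expectations explicit.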

@jkshuman @rosiealice @lmkueppers @ckoven