LI-COR-Environmental / eddypro-engine

EddyPro Engine

Undefined storage fluxes on first time step #6

Open mcuntz opened 3 years ago

mcuntz commented 3 years ago

Dear Gerardo,

The storage fluxes of the first time step of a run are always undefined in EddyPro (SH_SINGLE, SLE_SINGLE, SET_SINGLE, SC_SINGLE, SH2O_SINGLE, SCH4_SINGLE). I can well understand where this comes from, but it is a bit inconvenient in some use cases.

For example, we normally run one month of eddy data at a time, which takes about 2.5 hours on our machine. Even then, the first time step of the month has missing storage fluxes. My machine has 16 processors, so I use GNU parallel to run each day individually with eddypro_rp (less than 10 minutes with 11 processors), plus a small bash script to combine the results before running eddypro_fcc (I can send it to you if you are interested). This works like a charm, except that now every midnight has no storage fluxes.
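For illustration, here is a minimal sketch of such a split-and-merge setup. It assumes one pre-generated .eddypro project file per day, that eddypro_rp accepts the project-file path as a command-line argument (check your installation), and placeholder output paths and file names; it is not the script mentioned above.

```bash
#!/usr/bin/env bash
# Sketch only: one project file per day in projects/, each writing its results to out/<day>/.
# Whether eddypro_rp takes the project file as an argument depends on your installation.

# Process all days of the month in parallel, 11 jobs at a time.
parallel -j 11 eddypro_rp {} ::: projects/*.eddypro

# Merge the per-day intermediate results into a single file for eddypro_fcc,
# keeping the header of the first file only (file name patterns are placeholders).
files=(out/*/eddypro_*_essentials_*.csv)
head -n 1 "${files[0]}" > out/merged_essentials.csv
for f in "${files[@]}"; do
    tail -n +2 "$f" >> out/merged_essentials.csv
done
```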

Another use case is calculating the fluxes automatically each day. At the moment, we recalculate the whole month every day to limit the problem of the missing storage fluxes.

Would it be possible to use the file immediately before the starting period for the calculation of the storage fluxes, if it exists in the directory?

Kind regards, Matthias

geryatejina commented 3 years ago

Hi Matthias, thanks for your suggestion. I understand what you're asking, but in all honesty I don't think this is going to happen any time soon (unless, of course, you feel like giving it a try). There are mainly two reasons:

  1. As you know, in many scenarios the 1-point storage calculation is highly speculative and has close to zero representativeness, so "investing" in that variable has never been a strong focus of EddyPro development.

  2. I can see how, in your specific scenario, the file already present in the folder could be identified automatically. But in the general case, the robust approach would be for the user to provide a path to that file via the .eddypro file (and hence via the GUI, too). Doing that only for the purpose of retrieving concentrations and temperatures from the last half-hour seems like overkill, not good design. Perhaps a simpler approach would be to give the user the possibility to enter the few values needed from the previous half-hour directly into the .eddypro file. You could then automate a script to read those values from the previous result file and put them there (a rough sketch of such a helper follows below). But either way, I doubt I will be able to work on this any time soon.
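Purely as an illustration of that second idea, here is a rough sketch of the kind of helper script one could automate. The .eddypro keys written here (prev_period_co2, prev_period_ta) are hypothetical and are not read by EddyPro today, and the result-file name and column numbers are placeholders.

```bash
#!/usr/bin/env bash
# Hypothetical sketch only: the keys appended below (prev_period_co2, prev_period_ta)
# are made up for illustration; EddyPro does not currently read them.
prev_results=out/2021-02-24/eddypro_full_output.csv   # placeholder name of the previous result file
project=projects/2021-02-25.eddypro                   # project file for the next run

last_row=$(tail -n 1 "$prev_results")
# Column numbers are placeholders; pick the concentration and temperature columns you need.
co2=$(echo "$last_row" | cut -d, -f20)
ta=$(echo "$last_row"  | cut -d, -f35)

printf 'prev_period_co2=%s\nprev_period_ta=%s\n' "$co2" "$ta" >> "$project"
```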

Hope that makes sense. Thanks! Gerardo

mcuntz commented 3 years ago

Hey Gerardo,

I understand that this is not a priority. I would have thought that more people automate the calculation of the fluxes and that this issue had come up before. But then, we are now also measuring the atmospheric profile in our forest and add the storage afterwards, i.e. we ignore the EddyPro storage calculation. I thought that grassland or crop towers might actually use the 1-point estimates.

We have, however, not always measured the atmospheric profile, and we will probably use the 1-point estimates on our old data; better than no estimate. But then we can run longer EddyPro batches, say of one year. I will think about the use case and perhaps give it a go, or perhaps not ;-)

Kind regards, Matthias

ankurdesai commented 3 years ago

I use the Linux version of EddyPro for automated flux processing at several of my sites (US-Syv, US-Los, US-CS*). For me, it's easier to calculate storage separately, whether single point or profile, and I can see how implementing this cleanly in EddyPro for all cases of file inputs would be challenging. My workflow:

  1. Every hour, TOA5 data are transferred from the CSI logger on the tower to the server by cell modem

  2. Script (in IDL, sorry), run each night sometime after 0 UTC (a rough shell outline of this loop follows the list):
     - Find all new data files that haven't been processed; read them, applying some range checks; discard "bad" lines/files (garbled datetime, incorrect # of columns, ...)
     - Write or update daily biomet files (exactly 1440 lines per day, 1-minute data)
     - Write daily "clean" fast files for eddypro (exactly 864000 lines per day for 10 Hz, no weird #s, all NANs set to -9999); upload to Ameriflux FTP
     - For each unprocessed day, copy the two files (fast and biomet) to a folder with the appropriate .ini and .eddypro files
     - Call eddypro on the command line
     - Check whether output ("essentials") was produced; if so, copy it to the archive folder
     - Read the output and add variables and diagnostics to an annual file (17520 lines per non-leap year)
     - Additional QA/QC; calculate storage (replacing the single-point eddypro output); upload to Ameriflux FTP
     - Gap-fill, partition, output the gap-filled file and the Ameriflux-format file, upload, email diagnostics (% missing variables) to me
     - Manually upload once per quarter to Ameriflux to run their QA/QC tests

  3. End of year: conduct manual QA/QC + Ameriflux QA/QC, apply any long-term calibrations and drift corrections, calculate planar-fit stats (if appropriate), re-run the workflow (if no corrections are needed, just re-read all output and call eddypro for any missing days), recalculate storage, apply multi-tower met gap-filling
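For readers who want to reproduce something similar, here is a rough shell outline of the nightly loop in step 2 (the actual script is in IDL); every path, file-name pattern, and the exact eddypro invocation are placeholders to adapt.

```bash
#!/usr/bin/env bash
# Outline only: adapt paths, file name patterns, and the eddypro call to your setup.
while read -r day; do
    work=run/"$day"                         # folder already holding the .eddypro and .ini files
    cp clean/fast_"$day".csv "$work"/raw/
    cp biomet/biomet_"$day".csv "$work"/

    (cd "$work" && eddypro_rp processing.eddypro)   # command-line call; adjust to your installation

    # Keep results only if an "essentials" output file was actually produced.
    if ls "$work"/output/eddypro_*essentials*.csv >/dev/null 2>&1; then
        cp "$work"/output/eddypro_*essentials*.csv archive/
    else
        echo "eddypro produced no output for $day" >&2
    fi
done < unprocessed_days.txt
```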

-ankur


kebasaa commented 2 years ago

These changes are nice, but I'm not managing to compile on Linux. Firstly, your changes to the makefile set the compiler to gfortran-8, but on Linux it should simply be gfortran. Secondly, see the other issue in this repo about the compile errors. I hope someone can help me.
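A hedged, untested suggestion for such a system: either patch the makefile back to the system compiler, or give the system gfortran the name the makefile expects. The makefile path below is a guess and should be adjusted to the repository layout.

```bash
# Point the makefile back at the system compiler (makefile path is a guess) ...
sed -i 's/gfortran-8/gfortran/g' Makefile
# ... or leave the makefile alone and give the system gfortran the expected name:
mkdir -p ~/bin && ln -s "$(which gfortran)" ~/bin/gfortran-8
export PATH=~/bin:"$PATH"
```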