OzFlux / PyFluxPro

PyFluxPro V3.4 is a significant upgrade from previous versions. It has several new features, improved stability and is introduced ahead of the 2021 OzFlux Data Workshop.

Creating an aws nc file #88

Herr-Chang opened this issue 1 year ago

Herr-Chang commented 1 year ago

Hi Peter, It was an amazing experience testing the examples through the L1-L6 processes of PyFluxPro, thank you very much for sharing the program and the thorough descriptions in the wiki. I am trying to analyze my site flux data (in Taiwan) using PyFluxPro but got stuck at L4, where I would like to fill gaps in the meteorological data using nearby weather stations. I have the data in CSV files and the respective metadata. However, I have no idea how to transform the data into an nc file. I am wondering if you could kindly give me instructions for making an AWS nc file? Best regards, Shih-Chieh Chang

pisaac-ozflux commented 1 year ago

Hi Shih-Chieh Chang,

Many thanks for trying PyFluxPro, I hope it can be of use in your research. You can generate your own AWS file in the same way you generate your own netCDF file from data in an Excel file at L1. The steps would be:

  1. import your CSV data into Excel
  2. save as a .xls or .xlsx file (a sketch of steps 1 and 2 with pandas is given after this list)
  3. create your own L1 control file using one of the templates in PyFluxPro/controlfiles/templates or one of the example L1 files. This is a little tedious but you only have to do it once.
  4. run PFP at L1 using your L1 control file; this will read your AWS data and create a netCDF file that can be used at L4
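
As a minimal sketch of steps 1 and 2 (an illustration only; it assumes pandas and openpyxl are installed, and "my_aws_data.csv", the "AWS" sheet name and the "DateTime" column are placeholders for your own file and headers):

    # read the AWS data, parsing the timestamp column as datetimes
    import pandas as pd
    aws = pd.read_csv("my_aws_data.csv", parse_dates=["DateTime"])

    # average the AWS data onto a 30 minute time step so the two data sets
    # line up at L4; assumes the data columns are all numeric and that the
    # AWS data are at 30 minutes or finer (coarser data would need
    # interpolating instead)
    aws = aws.set_index("DateTime").resample("30min").mean().reset_index()

    # write a single-sheet .xlsx file that the L1 control file can point at
    aws.to_excel("my_aws_data.xlsx", sheet_name="AWS", index=False)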

Of course, the AWS data needs to be on the same time step as your tower data. Also, PFP will read CSV files directly, so you don't have to import into Excel; see the example control file in the L1 templates. If you have any problems, let me know and we can have a one-on-one Zoom session. You might also want to think about using ERA5 data for gap filling at L5. Cheers, Peter

Herr-Chang commented 1 year ago

Hi Peter,

I did not expect that I could receive your reply on SUNDAY!!!

Thank you very much for this prompt reply. I will try making the AWS file and hopefully can go through the whole process.

I previously used REddyProc for post-processing my EddyPro outputs. However, I could not fully understand the procedures due to a lack of clear descriptions. Perhaps your PyFluxPro could be implemented in TaiwanFlux, which consists of 5 long-term sites but so far has no common data management protocol. I will definitely come to you again for your help.

Best,

Shih-Chieh


Dr. Shih-Chieh Chang

Professor Department of Natural Resources and Environmental Studies, National Dong Hwa University 974 Hualien, Taiwan

Tel: +886-3-8903275 Fax: +886-3-8903260

Lab homepage: https://www.lter-ndhu.org



pisaac-ozflux commented 1 year ago

Hi Shih-Chieh, I'm afraid I have some harsh deadlines looming so have been putting in some extra hours. I understand your concerns regarding REddyProc, having used it myself. There is not much documentation. That was one of the reasons I started the PyFluxPro wiki, see https://github.com/OzFlux/PyFluxPro/wiki. I would be very happy for PyFluxPro to be used in Taiwan and would be happy to run some demonstration and training sessions via Zoom if that would help you or others. There are quite a few features that have not yet been documented in the wiki. Happy to meet, let me know if you would like that. Cheers, Peter

pisaac-ozflux commented 1 year ago

And feel free to contact me by email at pisaac.ozflux@gmail.com

Herr-Chang commented 1 year ago

Hi Peter, An online workshop would be of great help for the local flux community. Let us organize one in the near future! Cheers, Shih-Chieh

pisaac-ozflux commented 1 year ago

Hi Shih-Chieh,

I tried to create an AWS netCDF file using the PyFluxPro L1 processing, as I suggested yesterday, and found that it did not work due to some recent changes to the code that checks metadata at L1. Apologies for the mistake.

I have fixed the problem and it is now possible to use PFP to create a netCDF file from a CSV file of AWS data. You will need to pull the most recent version from GitHub to get the changes.

I have also produced some example files so that you can see how this process works. The example files are available from the link below: https://cloudstor.aarnet.edu.au/sender/?s=download&token=14f4614a-2e3d-48a7-b519-ba5c957dedfd

The files are:

  1. Loxton_AWS.csv - a csv file of AWS data for the Loxton example site.
  2. Loxton_AWS.xlsx - an Excel file created by importing the CSV data into Excel.
  3. L1_aws_excel.txt - a PyFluxPro control file for use at L1 to create a netCDF file of AWS data from the Excel file.

Let me know if you have any problems.
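
If you want a quick check of the netCDF file that L1 produces, here is a minimal sketch (an illustration, not part of PyFluxPro) that lists the variables and the time span using the netCDF4 Python library; it assumes the library is installed and that the file has a CF-style "time" variable with a units attribute, and "Loxton_AWS.nc" is the example output name:

    # open the L1 output and list its variables
    import netCDF4
    ds = netCDF4.Dataset("Loxton_AWS.nc")
    print("Variables:", sorted(ds.variables.keys()))

    # convert the first and last time values to datetimes to check the
    # period covered by the file
    time = ds.variables["time"]
    start = netCDF4.num2date(time[0], time.units)
    end = netCDF4.num2date(time[-1], time.units)
    print("Start:", start, "End:", end)

    ds.close()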

Cheers, Peter

pisaac-ozflux commented 1 year ago

Just realised the file names in the L1_aws_excel.txt are wrong, you'll need to edit them for your case.
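
For reference, the file names live in the [Files] section of the L1 control file; the sketch below shows the relevant entries (the path and names are placeholders, and you should check the template for the exact layout and any additional entries):

    [Files]
        file_path = /path/to/your/AWS/data/
        in_filename = your_AWS_data.xlsx
        out_filename = your_AWS_data.nc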

P.

Herr-Chang commented 1 year ago

Hi Peter, I am still working on the CSV file of an alternative weather station. Thanks for informing me of the changes before I run L1. Cheers, Shih-Chieh

Herr-Chang commented 1 year ago

Hi Peter, I have a question about air pressure. In the input xls file for L1 we have air pressure from the LI7500 and from the site's air pressure sensor. Both are station air pressure, not the calculated surface air pressure. However, in the AWS example file the air pressure is surface air pressure. I am wondering, when the site air pressure needs gap filling from the AWS air pressure, whether the surface air pressure is first converted to station air pressure by correcting for the elevation difference? The AWS stations for my study site provide only station air pressure, so I need to know whether I should transform them to surface air pressure before I make the AWS nc file. Cheers, Shih-Chieh

Herr-Chang commented 1 year ago

And regarding the units of air pressure, I see both mbar and kPa in the example files: mbar in Loxton_L1.xls and kPa in Loxton_AWS.xlsx. Does this mean I can use either, as long as I specify the units clearly? Shih-Chieh

pisaac-ozflux commented 1 year ago

Hi Shih-Chieh,

We should probably move this conversation to email at some point; it is a little easier to handle than the GitHub Issues process.

Answering your points in order:

  1. The L4 gap filling process uses least-squares to derive a linear relationship between the AWS and tower data for each window (typically 3 months wide). The linear relationship is then used to remove bias between the AWS and tower data, and this step accounts for elevation differences between the AWS and tower sites. See https://github.com/OzFlux/PyFluxPro/wiki/Level-4#overview for details, and the sketch after this list for the idea.
  2. The default units for pressure are kPa. You can read pressure in hPa (i.e. mbar) and use the hPa to kPa function available at L1 to do the conversion; see "Adding a Function to a variable" in https://github.com/OzFlux/PyFluxPro/wiki/Level-1#overview.
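
To make point 1 concrete, here is a minimal sketch of the idea (an illustration only, not the actual PyFluxPro code): fit a linear relationship between the AWS and tower series where both exist, then use it to predict the tower variable where it is missing. PyFluxPro repeats this fit for each window rather than once over the whole record. (For point 2, the conversion itself is just kPa = hPa/10.)

    import numpy as np

    def fill_gaps_linear(tower, aws):
        """tower, aws: 1-D arrays on the same time step, with NaN for gaps."""
        filled = tower.copy()
        both = ~np.isnan(tower) & ~np.isnan(aws)                # both series present
        slope, offset = np.polyfit(aws[both], tower[both], 1)   # least-squares fit
        gaps = np.isnan(tower) & ~np.isnan(aws)                 # tower missing, AWS present
        filled[gaps] = slope * aws[gaps] + offset               # bias-corrected AWS values
        return filled

    # example: station pressure (kPa) offset by a constant amount, as you
    # would expect from an elevation difference between the two sites
    tower_ps = np.array([101.3, np.nan, 101.1, 101.2, np.nan])
    aws_ps = np.array([100.1, 100.0, 99.9, 100.0, 100.2])
    print(fill_gaps_linear(tower_ps, aws_ps))
    # -> approximately [101.3 101.2 101.1 101.2 101.4]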

Cheers, Peter