CFMIP / COSPv2.0

COSP - The CFMIP (Cloud Feedbacks Model Intercomparison Project) Observation Simulator Package

Bug fixes in warm rain diagnostics (MODIS/CloudSat joint products) #47

Closed takmichibata closed 4 years ago

takmichibata commented 4 years ago

I have fixed some bugs in the warm rain diagnostics (MODIS/CloudSat joint products), as reported in issues #39 and #41 by @dustinswales and @roj222. Two files, src/cosp.F90 and src/cosp_stats.F90, were edited for the bug fix. I also attach the output data from a test run made after fixing the bugs, as raised in issue #42. The test was conducted with the Intel Fortran compiler and completed without any errors. For reference, I put the output data in driver/data/outputs/UKMO/cosp2_output_um.ifort.bugfix.nc.

Many thanks, Tak

RobertPincus commented 4 years ago

@takmichibata Thanks for working on this. Before looking at the code in detail, can you make sure the automatic tests are passing? For example, you can see large differences in https://github.com/CFMIP/COSPv2.0/pull/47/checks?check_run_id=712654828. @alejandrobodas can help with the automated tests.

takmichibata commented 4 years ago

@RobertPincus Thank you for your check. The automatic tests fail at "Compare test output against known good output (KGO)". The detailed error messages suggest that the differences from the KGO in some variables are larger than the tolerances. The failing variables are all from the warm rain diagnostics (i.e., npdf and ncfodd), because the bug fixes in this revision affect those output diagnostics. The other variables pass the automatic tests against the KGO without errors. If the modified code is appropriate, would it be possible to replace (update) the reference data, or to check the results manually?
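
For reference, here is a minimal sketch of how such a comparison against the KGO could be scripted locally, assuming Python with numpy and netCDF4 is available. This is not the repository's CI comparison script; the file names are the ones mentioned in this thread, and the 1.0e-5 relative tolerance is an illustrative assumption only.

```python
import numpy as np
from netCDF4 import Dataset

# File names taken from this thread; the tolerance below is illustrative only,
# not the value used by the automated tests.
KGO_FILE = "driver/data/outputs/UKMO/cosp2_output_um.gfortran.kgo.nc"
TEST_FILE = "driver/data/outputs/UKMO/cosp2_output_um.ifort.bugfix.nc"
RTOL = 1.0e-5


def to_float_array(var):
    """Read a netCDF variable as a float array, with masked values as NaN."""
    data = var[:]
    if np.ma.isMaskedArray(data):
        return data.astype(float).filled(np.nan)
    return np.asarray(data, dtype=float)


with Dataset(KGO_FILE) as kgo, Dataset(TEST_FILE) as test:
    for name, kvar in kgo.variables.items():
        if kvar.dtype.kind not in ("f", "i"):
            continue  # skip non-numeric variables
        if name not in test.variables:
            print(f"{name}: missing from test output")
            continue
        ref = to_float_array(kvar)
        new = to_float_array(test.variables[name])
        if ref.shape != new.shape:
            print(f"{name}: shape mismatch {ref.shape} vs {new.shape}")
            continue
        if not np.allclose(new, ref, rtol=RTOL, equal_nan=True):
            # Report the largest relative difference for variables outside tolerance.
            denom = np.where(np.abs(ref) > 0.0, np.abs(ref), 1.0)
            max_rel = np.nanmax(np.abs(new - ref) / denom)
            print(f"{name}: max relative difference {max_rel:.3e}")
```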

RobertPincus commented 4 years ago

@takmichibata Yes, we will want to replace the reference file as part of the pull request if the answers have changed because bugs have been fixed. Can you attach to the pull request some plots or similar showing that the results are now more correct? @alejandrobodas feel free to weigh in on the process here.

takmichibata commented 4 years ago

@RobertPincus Here I attach plots showing the CloudSat/CALIPSO total cloud fraction (black solid line, left axis) and the detected single-layered warm clouds (SLWC; coloured lines, right axis) for (a) the initial version and (b) the bug-fixed version. In the initial version (a), the figure suggests that the sampling of SLWCs does not work properly, due to the inconsistency in vertical levels between _fracout and dbze94 when users specify USE_VGRID=.true.. In contrast, the revised version (b) detects more SLWCs, which is consistent with the CloudSat/CALIPSO cloud fraction and, in particular, with (c) the MODIS low cloud cover. At the rightmost point (loc=153 on the X-axis), precipitating clouds are detected in the bug-fixed version, as shown by the blue line; this is also consistent with "ptcloudsatflag3", which indicates "rain certain" (not shown). So I believe the revised code is now working properly.

[Figure: Pull-request_n47-1, CloudSat/CALIPSO cloud fraction and SLWC detection for the initial and bug-fixed versions]
RobertPincus commented 4 years ago

@takmichibata Thanks for providing this figure. Since you are confident the answers are now correct, I suggest that you replace the "known-good-output" file in the repository so the automated tests pass.

The COSP PMC will review the figures (we are even meeting in about 12 hours) and either ask for more evidence or simply accept the pull request.

Thanks for being patient as we figure out new ways of working.

alejandrobodas commented 4 years ago

@takmichibata thanks for making these changes. As @RobertPincus said, if you are confident about the new results, you can replace the file driver/data/outputs/UKMO/cosp2_output_um.gfortran.kgo.nc with the new outputs. You can retrieve the new outputs from the 'Artifacts' icon in the top-right area of: https://github.com/CFMIP/COSPv2.0/pull/47/checks?check_run_id=712654828

Once you push the changes to the repository, the tests should pass.

takmichibata commented 4 years ago

@alejandrobodas Thank you for explaining the next step. Just to check: my output file in this revision is currently named "cosp2_output_um.ifort.bugfix.nc". Is it fine to upload it as is, or should I rename the new output to "cosp2_output_um.gfortran.kgo.nc" before uploading?

alejandrobodas commented 4 years ago

Hi @takmichibata yes, you will need to replace the current cosp2_output_um.gfortran.kgo.nc with the output of your test and push the changes.

alejandrobodas commented 4 years ago

Sorry @takmichibata, I didn't explain the process very clearly. The file that you need to use is the output generated by the tests that are run on the GitHub servers. You can download this file by clicking on 'Artifacts' in: https://github.com/CFMIP/COSPv2.0/pull/47/checks?check_run_id=720558490

[Screenshot: location of the 'Artifacts' link on the checks page]

Then, you can replace cosp2_output_um.gfortran.kgo.nc with the file you've downloaded, commit and push the changes.

RobertPincus commented 4 years ago

To expand on why this is: the automated tests/continuous integration on GitHub use quite strict tolerances when comparing results from the GNU compiler, which is considered the reference, and more relaxed tolerances when comparing to the Intel compiler.

takmichibata commented 4 years ago

@alejandrobodas @RobertPincus Thank you very much for your kind support. I have replaced the existing KGO file with the new output from my test. The automatic tests now pass.

takmichibata commented 4 years ago

@RobertPincus Thanks for your code review of this revision. Please see my replies to your individual comments.

alejandrobodas commented 4 years ago

@takmichibata I've looked at the code and it looks good to me. I have one question regarding speed. Would you be able to quantify how much time is added to the calculations when these diagnostics are included? You can compare the outputs of "Time to run COSP" when you run the test on your local system with and without the warm rain diagnostics. To reduce the impact of variability, it is worth repeating the test a few times and reporting the average and fastest times. Thanks!
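
A minimal sketch of one way to collect and average those timings is below, assuming the offline driver is invoked as ./cosp2_test with the usual input/output namelists; the exact executable name, arguments, and paths depend on the local setup and are assumptions here. The warm rain flags in cosp2_output_nl.txt would be toggled between the two sets of runs.

```python
import re
import statistics
import subprocess

# Assumed invocation of the offline test driver; adjust to the local setup.
N_RUNS = 10
CMD = ["./cosp2_test", "cosp2_input_nl.txt", "cosp2_output_nl.txt"]

run_times, total_times = [], []
for _ in range(N_RUNS):
    out = subprocess.run(CMD, capture_output=True, text=True, check=True).stdout
    # Parse the timers printed by the driver, as quoted later in this thread.
    run_times.append(float(re.search(r"Time to run COSP:\s*([\d.Ee+-]+)", out).group(1)))
    total_times.append(float(re.search(r"Total time:\s*([\d.Ee+-]+)", out).group(1)))

print(f"Time to run COSP: mean={statistics.mean(run_times):.6f}, fastest={min(run_times):.6f}")
print(f"Total time:       mean={statistics.mean(total_times):.6f}, fastest={min(total_times):.6f}")
```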

takmichibata commented 4 years ago

@alejandrobodas Thanks for your code review and suggestion. Following this revision, I evaluated the computational cost of the warm rain diagnostics using two configurations: in one, all flags in cosp2_output_nl.txt are set to .true.; in the other, the flags for the warm rain diagnostics are set to .false.. Both configurations were run ten times, and their mean values are below:

(a) with warm rain diagnostics
Time to read in data: 8.9991003E-03
Time to initialize: 3.7991995E-03
Time to construct types: 1.3897900E-02
Time to compute optics: 7.8888200E-02
Time to run COSP: 6.2590301E-02
Total time: 0.1681747
Time to write to output: 1.6197799E-02

(b) without warm rain diagnostics
Time to read in data: 8.7989002E-03
Time to initialize: 3.8992996E-03
Time to construct types: 1.3598200E-02
Time to compute optics: 7.9188101E-02
Time to run COSP: 4.4093104E-02
Total time: 0.1495776
Time to write to output: 1.3298001E-02

The fastest values (Total time) are 0.167974 and 0.148978, respectively, suggesting that the warm rain diagnostics increase the computational cost by approximately 12% under the ifort compiler.
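
As a quick arithmetic check of the quoted ~12% figure, using the "Total time" values reported above:

```python
# "Total time" values copied from the timings quoted in this comment.
mean_with,    mean_without    = 0.1681747, 0.1495776  # mean of ten runs
fastest_with, fastest_without = 0.167974,  0.148978   # fastest of ten runs

print(f"Overhead from means:   {100.0 * (mean_with / mean_without - 1.0):.1f}%")        # ~12.4%
print(f"Overhead from fastest: {100.0 * (fastest_with / fastest_without - 1.0):.1f}%")  # ~12.8%
```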