Closed DanielAdriaansen closed 3 months ago
@DanielAdriaansen, could you fill out the pull request template? It would be useful to know what testing is expected of the reviewers and if the differences in the use case output are expected.
Done, sorry about that. I had this in draft and converted it as ready but forgot to go back and add the details.
Documentation changes from this PR
Looking closer at the difference tests from CI, I note that there were unexpected differences in the FCST data. However, a while back I fixed a bug here: https://github.com/dtcenter/METplus/issues/2388#issuecomment-1787789180. The differences appear only in MAM and JJA, while DJF and SON show zero differences, which is reassuring. Thus, the formulation of TCI remains the same and is being called correctly for the FCST.
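For reference, here is a minimal sketch of a coupling-index calculation in the style commonly used for TCI (regression slope of flux on soil moisture, scaled by soil-moisture variability, i.e. cov(SM, flux) / std(SM)). The function name and exact definition are illustrative assumptions, not the METplus embedding script itself:

```python
import numpy as np

def terrestrial_coupling_index(soil_moisture, flux):
    """Illustrative TCI: covariance of soil moisture and surface flux
    divided by the standard deviation of soil moisture (equivalently,
    the regression slope of flux on soil moisture times std(SM))."""
    sm = np.asarray(soil_moisture, dtype=float)
    fx = np.asarray(flux, dtype=float)
    cov = np.cov(sm, fx, ddof=1)[0, 1]       # sample covariance
    return cov / np.std(sm, ddof=1)          # scale by SM variability
```

Because the FCST inputs did not change, an identical formulation applied to identical FCST data explains why the FCST values match exactly.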
I spot checked a few OBS sites. From the point_stat MPR files for DJF, I see:
Truth:
Borgo Cioffi 40.52375 14.95744 10 0 4.62984 0.42177
Tonzi Ranch 38.4316 -120.96598 10 0 -2.23139 1.7888
Grignon 48.84422 1.95191 10 0 -1.09105 1.2232
Output:
IT-BCi 40.5237 14.9574 10 0 4.62984 0.95305
US-Ton 38.4309 -120.966 10 0 -2.23139 2.91969
FR-Gri 48.8442 1.9519 10 0 -1.09105 0.717
What we can see is that:
1) the FCST values match,
2) the OBS values do not match, but the sign is the same and the new values are higher for these three sites, and
3) the station metadata (lat/lon) matches.
The OBS values being different here is almost certainly driven by differences in decisions about what quality of raw FLUXNET data to include in the TCI calculation, compared to whatever data the previous TCI provider chose to use. I'm not sure there's a way to quantify this; instead, we should be confident in our data filtering and take the matching FCST values as an indicator that the actual math for computing TCI remains unchanged.
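To make the filtering point concrete, here is a hedged sketch of the kind of quality-control screen that could change which FLUXNET records feed the OBS side of TCI. The column name and threshold are illustrative assumptions, not the actual filter used in this PR:

```python
import pandas as pd

def filter_by_qc(df, qc_col="LE_F_MDS_QC", min_qc=0.5):
    """Keep only rows whose quality-control fraction meets a minimum
    threshold. Column name and threshold are hypothetical examples of
    a FLUXNET-style QC screen, not the PR's actual settings."""
    return df[df[qc_col] >= min_qc].copy()
```

Two providers applying different thresholds (or different QC columns) to the same raw FLUXNET archive would produce different OBS sample populations, and therefore different OBS values, with no change to the TCI math itself.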
@DanielAdriaansen Thanks for the testing and explanation of the results. I agree it seems like we're ok with the TCI computation. I also approve the PR.
Closes #2388.
Includes:
[ ] Describe testing already performed for these changes: I ran the use case locally and it succeeds, and it also succeeds in GHA.
[ ] Recommend testing for the reviewer(s) to perform, including the location of input datasets, and any additional instructions:
[ ] Do these changes include sufficient documentation updates, ensuring that no errors or warnings exist in the build of the documentation? [Yes or No] Yes
[ ] Do these changes include sufficient testing updates? [Yes or No] Yes
[ ] Will this PR result in changes to the test suite? [Yes or No] If yes, describe the new output and/or changes to the existing output: Yes, the OBS input data are different and therefore the OBS values being used by MET are also different, which will result in differences in the test data. After the PR is merged, the test data will need to be updated.
[ ] Do these changes introduce new SonarQube findings? [Yes or No] If yes, please describe: Yes, but I resolved 3 code smells in the Python embedding script before SonarQube licensing issues cropped up, so ultimately there should be no new SonarQube findings.
[ ] Please complete this pull request review by [Fill in date]: At your earliest convenience.
Pull Request Checklist
See the METplus Workflow for details.