Current setup for `pmodel` submodules with their unit tests:

- `test_c3c4competition`
- `test_functions`
- `test_calc_carbon_isotopes`
- `test_memory_effect`
- `test_fast_slow_pmodel`
- `FastSlowPModel_JAMES` (UT not available)
- `test_fast_slow_scaler`
We can add tests for the submodules for which UTs are not available.
@surbhigoel77 I'm not quite sure what to review here? I only see the one line.
All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 95.23%. Comparing base (7bf55ac) to head (1ff0456). Report is 44 commits behind head on develop.
@davidorme could you review the structure of `test_optimal_chi` and also highlight the bounds applicable to the inputs or outputs?
LGTM - very clean. There does seem to be duplication between the two files - is one going to be retired?
@davidorme thanks for pointing that out. The one named `test_calc_optimal_chi` is the outdated one and will be removed.
@davidorme Could you share the soft-bound values for the inputs to `OptimalChi`? I will include a unit test case for the module based on these bounds.
Hi @surbhigoel77 - so the `rpmodel` regression tests generate a bunch of input values to `PModelEnvironment` that cover a wide range of plausible input conditions, given in the ranges here:

If I run those through `PModelEnvironment` then we get the following ranges for the "photosynthetic environment" variables (`ca`, `kmm`, `gammastar`, `ns_star`):
```python
import json

import numpy as np

from pyrealm.pmodel import PModelEnvironment

# Load the rpmodel regression test inputs and build the
# photosynthetic environment from the forcing variables.
data = json.load(open('../rpmodel/test_inputs.json'))

env = PModelEnvironment(
    tc=np.array(data['tc_ar']),
    co2=np.array(data['co2_ar']),
    patm=np.array(data['patm_ar']),
    vpd=np.array(data['vpd_ar'])
)
```
And hence:

```python
In [10]: env.kmm.min(), env.kmm.max()
Out[10]: (0.7300629340136395, 588.8274005285591)

In [11]: env.ca.min(), env.ca.max()
Out[11]: (15.363050478271433, 52.860853517068094)

In [12]: env.gammastar.min(), env.gammastar.max()
Out[12]: (0.133689580651389, 14.45454199349665)

In [13]: env.ns_star.min(), env.ns_star.max()
Out[13]: (0.6162019533770273, 6.347157204305028)
```
So broadly, `kmm` is $(0, 1000)$, `ca` is $(0, 100)$, `gammastar` is $(0, 30)$ and `ns_star` is $(0, 10)$. Does that help?
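Translated into a test, those ranges give a parametrised bounds check. A minimal sketch, assuming the same shared input file as above and treating the bounds as strict (both are assumptions):

```python
import json

import numpy as np
import pytest

from pyrealm.pmodel import PModelEnvironment


@pytest.fixture
def env():
    # Build the environment from the shared rpmodel test inputs
    # (the file path is assumed to match the snippet above).
    data = json.load(open('../rpmodel/test_inputs.json'))
    return PModelEnvironment(
        tc=np.array(data['tc_ar']),
        co2=np.array(data['co2_ar']),
        patm=np.array(data['patm_ar']),
        vpd=np.array(data['vpd_ar']),
    )


@pytest.mark.parametrize(
    "attr,lower,upper",
    [
        ("kmm", 0, 1000),
        ("ca", 0, 100),
        ("gammastar", 0, 30),
        ("ns_star", 0, 10),
    ],
)
def test_env_within_broad_bounds(env, attr, lower, upper):
    # Each photosynthetic environment variable should stay within
    # the broad range quoted above.
    values = getattr(env, attr)
    assert np.all((values > lower) & (values < upper))
```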
Might be worth using that actual test dataset - would be a convenient standard if you need one and that way if we need to update ranges, it gets cascaded across multiple tests.
Thanks for sharing this @davidorme. Yes, using the actual test set is a decent idea. But what is the ideal response to crossing the soft bounds? Is there a different warning message that should pop up?
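If crossing a soft bound should emit a warning rather than an error, a test could pin that down with `pytest.warns`. A minimal sketch; the triggering value and the broad `Warning` class are assumptions until the intended behaviour is decided:

```python
import numpy as np
import pytest

from pyrealm.pmodel import PModelEnvironment


def test_soft_bound_warning():
    # An implausibly high air temperature should cross the soft bound;
    # the warning class pyrealm would emit is assumed to subclass Warning.
    with pytest.warns(Warning):
        PModelEnvironment(
            tc=np.array([150.0]),
            co2=np.array([400.0]),
            patm=np.array([101325.0]),
            vpd=np.array([1000.0]),
        )
```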
@davidorme could you review this PR?
There are 3 files to review, as mentioned in the PR description. A couple of special notes:

- `test_OptimalChi` has a test case for the NaN value check, but it is commented out because it raises an assertion error and would not let the GitHub checks pass.
- `test_pmodelenvironment` has the test cases checking the photosynthesis variables for implausible out-of-bound values. I have used the available test dataset (tc, patm, co2) to verify this, and none of the variables (kmm, ns_star, gammastar, ca) have any out-of-bound values. (These test cases are linked to #191.)

@davidorme Could you approve the changes that you requested? I will only be able to merge once you approve them. I will open a new PR for putting imports within test cases.
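One option for the commented-out NaN check is to keep it in the suite under pytest's `xfail` marker, so CI passes while the behaviour is still undecided. A minimal sketch, assuming NaN inputs are expected to propagate through `PModelEnvironment` (the assertion target is an assumption):

```python
import numpy as np
import pytest

from pyrealm.pmodel import PModelEnvironment


@pytest.mark.xfail(reason="NaN handling behaviour not yet agreed")
def test_nan_handling():
    # Assumed behaviour: NaN in the inputs propagates to the derived
    # photosynthetic environment variables rather than raising.
    env = PModelEnvironment(
        tc=np.array([np.nan]),
        co2=np.array([400.0]),
        patm=np.array([101325.0]),
        vpd=np.array([1000.0]),
    )
    assert np.all(np.isnan(env.gammastar))
```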
Description
Fixes Issue #23. Fixes Issue #191.
In this PR, we are adding new unit tests for the sub-modules in `pmodel` that do not have dedicated unit tests. These are the submodules for which we are writing tests:
Current work in progress is for `optimal_chi`, which has the following test cases:

- `set_beta` setup
- `chi` estimation
- subclass functioning for all methods
- `NaN` handling

Current work in progress is for `jmax_limitation`, which has the following test cases:

- `jmax_limitation` setup
- value error for an invalid method (see the sketch after this list)

Current work in progress is for `pmodel_environment`, which has the following test cases:

- `pmodel_environment` setup
- `pmodel_environment` not generating any abrupt values for `kmm`
- `pmodel_environment` not generating any abrupt values for `ns_star`
- `pmodel_environment` not generating any abrupt values for `ca`
- `pmodel_environment` not generating any abrupt values for `gammastar`
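For the invalid-method case, the test only needs to assert that a `ValueError` is raised. The validator below is a self-contained stand-in for whatever check `jmax_limitation` performs internally; the method names and error message are illustrative assumptions, not pyrealm internals:

```python
import pytest

# Stand-in for the internal method validation in jmax_limitation;
# the set of known method names here is an assumption.
KNOWN_METHODS = {"wang17", "smith19", "simple"}


def resolve_jmax_method(method: str) -> str:
    if method not in KNOWN_METHODS:
        raise ValueError(f"Unknown Jmax limitation method: {method}")
    return method


@pytest.mark.parametrize("method", ["not_a_method", "", "WANG17"])
def test_invalid_jmax_method_raises(method):
    # Any unrecognised method string should be rejected with ValueError.
    with pytest.raises(ValueError, match="Unknown Jmax limitation method"):
        resolve_jmax_method(method)
```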
Type of change

Key checklist
- `pre-commit` checks: `$ pre-commit run -a`
- `$ poetry run pytest`
Further checks