Closed: MarionBWeinzierl closed this issue 8 months ago.
I think there could be two solutions:

Solution 1: create a `profiling` directory in the root directory and move `tests/pmodel/test_profiling.py` to `profiling/test_pmodel_profiling.py`. We would also create profiling scripts for the T Model and the Hygro model, as in the following picture. Consequently, we need to update `.github/workflows/pyrealm_ci.yaml` to run the profiling separately (although this should be addressed later in issue #142), somewhat similar to this:
```yaml
test:
  ...
  steps:
    - ...
    - name: Profiling
      if: github.ref == 'refs/heads/main'
      run: poetry run pytest profiling/ --profile
```
Solution 2: create a `conftest.py` in the `tests/` folder with the following content:
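A minimal sketch of what that `conftest.py` could look like. The original snippet is not preserved here, so the hook bodies below are an assumption, following the standard pytest pattern for skipping marked tests unless a command-line option is given:

```python
# conftest.py -- sketch: skip tests marked `profiling` unless --profile is given.
# The option name (--profile) and marker name (profiling) are assumptions.
import pytest


def pytest_addoption(parser):
    """Register the --profile command-line option."""
    parser.addoption(
        "--profile",
        action="store_true",
        default=False,
        help="also run tests marked with @pytest.mark.profiling",
    )


def pytest_configure(config):
    """Register the marker so pytest does not warn about it."""
    config.addinivalue_line("markers", "profiling: mark test as a profiling run")


def pytest_collection_modifyitems(config, items):
    """Attach a skip marker to profiling tests when --profile is absent."""
    if config.getoption("--profile"):
        return  # --profile given: run everything, including profiling tests
    skip_profiling = pytest.mark.skip(reason="need --profile option to run")
    for item in items:
        if "profiling" in item.keywords:
            item.add_marker(skip_profiling)
```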
This will make all classes or functions decorated with `pytest.mark.profiling` be skipped during testing unless the `--profile` argument is provided. An example of marking the tests in `tests/pmodel/test_profiling.py` would be:
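A sketch of such a marking; the class and method names are illustrative, not the actual contents of `tests/pmodel/test_profiling.py`:

```python
# Sketch: mark a whole test class so it only runs under --profile.
# The names below are hypothetical placeholders.
import pytest


@pytest.mark.profiling
class TestPModelProfiling:
    """Tests in this class are skipped unless --profile is passed."""

    def test_pmodel_profiling(self):
        # ... exercise the PModel workload to be profiled here ...
        assert True
```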
Consequently, we also need to update `.github/workflows/pyrealm_ci.yaml` later, somewhat similar to this:
```yaml
test:
  ...
  steps:
    - ...
    - name: Run tests with profiling
      if: github.ref == 'refs/heads/main'
      run: poetry run pytest --cov-report xml --profile
    - name: Run tests without profiling
      if: ${{ github.ref != 'refs/heads/main' }}
      run: poetry run pytest --cov-report xml
```
This creates two different commands, run conditionally on whether the branch is `main`.

The second solution seems more flexible, and we don't have to write the same code twice if we want to both test and profile it. However, it profiles all tests whenever `--profile` is provided, and I don't know if this is the intended behaviour. If we only want to profile certain tests contained in files like `test_profiling.py`, the first solution may be better.
As described in #81, the profiling should be pulled out of the general testing (in particular, into its own `profiling` directory) and given its own module. It should then be extended to cover more than just the pmodel. We should also consider how we might automate things like creating the call graph.
Tasks: