Closed: trexfeathers closed this pull request 3 years ago.
Merging #124 (5340ce6) into main (10820f2) will not change coverage. The diff coverage is n/a.
@@           Coverage Diff           @@
##             main     #124   +/-  ##
=======================================
  Coverage   98.85%   98.85%
=======================================
  Files          14       14
  Lines         699      699
=======================================
  Hits          691      691
  Misses          8        8
Powered by Codecov. Last update 10820f2...5340ce6.
I've so far made minimal changes to the benchmarks themselves, but given that this PR forces saving of sample data (rather than just creating a Python object), I'd be interested in @stephenworsley's thoughts on better optimisation via ASV's setup_cache.
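For context on the setup_cache idea: in ASV, a benchmark class's setup_cache runs once per environment rather than once per repeat, and its return value is passed as the first argument to setup and the benchmark methods. A minimal sketch (class and attribute names are hypothetical, and the "expensive" step is a placeholder rather than real sample-data generation):

```python
class TimeRegrid:
    """Hypothetical ASV benchmark demonstrating setup_cache."""

    def setup_cache(self):
        # Expensive one-off work, e.g. generating and saving sample data.
        # ASV caches the return value and reuses it for every repeat.
        sample = {"path": "/tmp/sample.nc"}  # placeholder, not a real file
        return sample

    def setup(self, cache):
        # Cheap per-repeat work using the cached result.
        self.sample_path = cache["path"]

    def time_load(self, cache):
        # The timed benchmark body.
        _ = self.sample_path
```

This way the costly data generation happens once, while each timed repeat only does the cheap per-run setup.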
Take this as a suggestion, not a review. You could refactor into either a closure or a class that wraps the Python executable, rather than hard-coding to a global constant.
e.g.
class PythonRunner:
    def __init__(self, python: Path):
        self.python = python

    def __call__(self, code, *args, **kwargs):
        ...  # body of run_elsewhere, using self.python

# in the specific data gen code
run_elsewhere = PythonRunner(GEN_DATA_PYTHON)
or
def make_python_runner(python: Path):
    def run_elsewhere(code, *args, **kwargs):
        ...  # body using `python`
    return run_elsewhere

run_elsewhere = make_python_runner(GEN_DATA_PYTHON)
You get the picture.
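To make the suggestion concrete, here is one runnable sketch of the class variant; the subprocess-based body is an assumption about what run_elsewhere does (the PR's summary says it runs code in an alternative environment's interpreter), not the actual implementation:

```python
import subprocess
import sys
from pathlib import Path


class PythonRunner:
    """Runs a code string via a configurable Python executable,
    instead of hard-coding a global constant."""

    def __init__(self, python: Path):
        self.python = Path(python)

    def __call__(self, code: str, *args: str) -> str:
        # Execute the code string in the wrapped interpreter and
        # return whatever it prints.
        result = subprocess.run(
            [str(self.python), "-c", code, *args],
            capture_output=True,
            text=True,
            check=True,
        )
        return result.stdout


# Using the current interpreter here for demonstration; the real code
# would pass the data-generation environment's executable instead.
run_elsewhere = PythonRunner(Path(sys.executable))
print(run_elsewhere("print(6 * 7)"))  # → 42
```

The closure variant would wrap the same subprocess call, just capturing `python` from the enclosing scope rather than an instance attribute.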
> You could refactor into either a closure or a class that wraps the Python executable rather than hard-coding to a global constant.
If this needed to be more generic, sure. But that's engineering for something that I don't expect to happen: https://github.com/SciTools-incubator/iris-esmf-regrid/pull/124#discussion_r742927858
Have a read of benchmarks/benchmarks/generate_data.py - the docstrings provide detail on my thoughts for this. Summary: run the generation scripts in an alternative environment that will therefore remain unchanged throughout the benchmark run.