Run benchmarks that produce quantifiable data for every release; in this case, establish and record benchmarks for the v0.14.0 "jack-o'-lantern" release (https://github.com/lily-seabreeze/sappho/milestone/1). The desired effect is having a baseline to compare optimizations, or any refactoring at all, against.
Open to discussion; I'm hoping to hear approaches. A few I can think of:
- Use something like pycallgraph to profile a single frame, or many frames averaged out (I think it may do this automagically)
- Use demo.py tests to determine benchmarks
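Until pycallgraph is wired up, a minimal stdlib sketch of the same idea, profiling many frames and inspecting the averaged call stats with cProfile, could look like this (`update_frame` is a hypothetical stand-in for one frame of sappho's game loop, not actual sappho API):

```python
import cProfile
import io
import pstats


def update_frame():
    # Hypothetical stand-in for the per-frame work we want to profile.
    total = 0
    for i in range(1000):
        total += i * i
    return total


profiler = cProfile.Profile()
profiler.enable()
for _ in range(60):  # profile many frames so per-call times average out
    update_frame()
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(5)  # top 5 hotspots
print(stream.getvalue())
```

pycallgraph would additionally render the call graph as an image, but the cProfile numbers are the same kind of per-frame data and need no extra dependency.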
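For recording numbers we can actually compare release-to-release, a sketch of a timing harness could look like this (`render_scene` and the JSON layout are illustrative assumptions, not sappho's API; the real harness would time whatever demo.py exercises):

```python
import json
import statistics
import time


def render_scene():
    # Hypothetical stand-in for the work measured each frame.
    return sum(i * i for i in range(5000))


def benchmark(func, frames=200):
    """Time func over many frames; return summary stats in seconds."""
    samples = []
    for _ in range(frames):
        start = time.perf_counter()
        func()
        samples.append(time.perf_counter() - start)
    return {
        "mean_s": statistics.mean(samples),
        "stdev_s": statistics.stdev(samples),
        "frames": frames,
    }


# Record the baseline under the release tag so later runs can diff against it.
result = {"release": "v0.14.0", "render_scene": benchmark(render_scene)}
print(json.dumps(result, indent=2))
```

Committing the JSON output per release would give the baseline the issue asks for: any optimization or refactor can then be checked against the recorded means.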