Closed recursion-ninja closed 3 years ago
Some specific things to test:
This library might be very useful. Worth exploring:
Currently AutoBench is not on hackage and is quite hard to build with our project (e.g. it uses an earlier version of megaparsec). I think it is still worth exploring in the future if a more robust version is developed.
Which version of megaparsec is it using to build?
It has the bounds `megaparsec >= 6.2 && < 6.6`.
*sigh*, I had updated to `megaparsec >= 7.0 && < 8.0` for some reason, but I can't remember why.
If it looks promising, maybe I can make a PR to upgrade AutoBench to use `megaparsec >= 7.0`. The process was quite painless for me with our code base.
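For reference, the 6.x to 7.0 jump is mostly mechanical renames (`ParseError` becomes `ParseErrorBundle`, `parseErrorPretty` becomes `errorBundlePretty`, `anyChar` becomes `anySingle`). A minimal sketch of what an upgraded parser looks like under megaparsec >= 7 — the `versionBound` parser here is a made-up example, not AutoBench code:

```haskell
module Main where

import Data.Void (Void)
import Text.Megaparsec
import Text.Megaparsec.Char

type Parser = Parsec Void String

-- Parse a "major.minor" version, e.g. "7.0".
versionBound :: Parser (Int, Int)
versionBound = do
  major <- read <$> some digitChar
  _     <- char '.'
  minor <- read <$> some digitChar
  pure (major, minor)

main :: IO ()
main =
  case parse versionBound "<bounds>" "7.0" of
    -- In 6.x this branch would have used parseErrorPretty on a ParseError;
    -- in 7.x `parse` returns a ParseErrorBundle instead.
    Left err -> putStrLn (errorBundlePretty err)
    Right v  -> print v
```

Most of the upgrade in our code base was exactly this kind of rename, so AutoBench may not need much more.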
We should move the random TCM generation and the tree/sequence data generation from `utils/generate-tcm.hs` and `utils/generate-data-set.hs`, respectively, into testing modules. They should be useful in conjunction with AutoBench.
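Until then, a rough sketch of the shape such a testing-module API might take. The names and the toy linear-congruential generator below are placeholders (the real generators live in the two utils files); this only illustrates exposing a seeded, reproducible generator from a module:

```haskell
module Main where

-- Tiny linear-congruential step so the sketch needs only base;
-- the real modules would reuse the generators from the utils scripts.
lcg :: Int -> Int
lcg s = (6364136223846793005 * s + 1442695040888963407) `mod` (2 ^ (62 :: Int))

-- Hypothetical API: a seeded random DNA sequence of length n,
-- reproducible across benchmark runs because the seed is explicit.
randomSequence :: Int -> Int -> String
randomSequence seed n =
  map (("ACGT" !!) . (`mod` 4)) (take n (tail (iterate lcg seed)))

main :: IO ()
main = putStrLn (randomSequence 42 20)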
We have since added several benchmarking suites, though not the ones outlined above. Closing, as this issue's completion is "good enough" for now. We can open a new, more specific issue in the future for additional benchmark targets.
Add a benchmarking suite for given functions. We can use criterion and weigh to measure run time and memory usage, respectively. Try to test time and space complexities using these libraries. If we can write our own abstraction for this, that would be great. We should also investigate how well parallelism is being utilized, specifically strictness with data parallelism.