Closed dan-blank closed 1 year ago
You can combine tasty-bench with any other tasty test provider within the same test/bench suite.
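A minimal sketch of what this can look like, assuming the `Test.Tasty.Bench` and `Test.Tasty.HUnit` modules from tasty-bench and tasty-hunit (the `fib` function here is just a stand-in workload):

```haskell
import Test.Tasty.Bench (bench, bgroup, defaultMain, nf)
import Test.Tasty.HUnit (testCase, (@?=))

-- A naive Fibonacci, used only as a benchmark workload.
fib :: Int -> Integer
fib n = if n < 2 then toInteger n else fib (n - 1) + fib (n - 2)

main :: IO ()
main = defaultMain
  [ bgroup "fib"
      [ bench "fib 20" $ nf fib 20   -- measured by tasty-bench
      , testCase "fib 10 is 55" $    -- an ordinary tasty-hunit test in the same tree
          fib 10 @?= 55
      ]
  ]
```

`defaultMain` from `Test.Tasty.Bench` accepts a list of `TestTree`s, so benchmarks and plain tests can live side by side in one suite.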
Thank you for your answer! I am still new to the tasty-verse. Can you give me one more hint? How can I write a tasty test provider that just returns a result (and stores it in a CSV, beside the performance measurements like CPU time spent)? Once I can do that, then together with the awk incantation given here, I should have enough for my needs.
Hmm, it seems like I could use `testCaseInfo :: TestName -> IO String -> TestTree` from tasty-hunit in the bench suite and then use the `--csv` flag to store it all within one CSV file. I will try that!
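A sketch of that idea, assuming tasty-bench and tasty-hunit; `solve`, `quality`, and `input` are hypothetical names standing in for the optimization code. The `String` returned by `testCaseInfo` is shown in the test output; whether it reaches the CSV depends on the reporter used:

```haskell
import Test.Tasty.Bench (bench, bgroup, defaultMain, nf)
import Test.Tasty.HUnit (testCaseInfo)

main :: IO ()
main = defaultMain
  [ bgroup "heuristic A"
      [ bench "solve" $ nf solve input          -- timing, as usual
      , testCaseInfo "solution quality" $       -- reports the returned String
          pure (show (quality (solve input)))
      ]
  ]
```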
Yes, got it to work! In retrospect, this issue should be titled "Let csvReporter also return the result of the benchmark", but together with the awk in the docs, I now have everything I need - thank you! :)
For future readers:
- Using `testCaseInfo :: TestName -> IO String -> TestTree` from tasty-hunit, I return the result of my function.
- Using tasty-stats, I print a CSV file that also contains the result (invoking tasty-bench with `--stats filename.csv`):
```haskell
main :: IO ()
main =
  defaultMainWithIngredients
    (Test.Tasty.Stats.consoleStatsReporter : benchIngredients)
    $ bgroup ...
```
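Filling in the elided parts, a self-contained sketch of this wiring might look as follows, assuming the tasty, tasty-bench, tasty-hunit, and tasty-stats packages; `solveGreedy`, `score`, and `problem` are hypothetical names for the optimization code:

```haskell
import Test.Tasty (defaultMainWithIngredients)
import Test.Tasty.Bench (bench, benchIngredients, bgroup, nf)
import Test.Tasty.HUnit (testCaseInfo)
import qualified Test.Tasty.Stats

main :: IO ()
main =
  defaultMainWithIngredients
    -- prepend the tasty-stats reporter to tasty-bench's stock ingredients
    (Test.Tasty.Stats.consoleStatsReporter : benchIngredients)
    $ bgroup "heuristics"
        [ bench "greedy" $ nf solveGreedy problem   -- timing
        , testCaseInfo "greedy quality" $           -- custom metric as a String
            pure (show (score (solveGreedy problem)))
        ]
```

Running the resulting executable with `--stats filename.csv` would then ask the tasty-stats reporter to write both kinds of results into one file.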
What: I would like to use tasty-bench not only to measure performance metrics, but also custom-made metrics. My use case is a hobby project where I work on an optimization problem. I would like to see the effect of different heuristics reflected in the benchmarks: "when using this heuristic, the solution produced is this good". It would be quite nice to be able to use the tasty-bench machinery (i.e. comparing between benchmark runs)!
Question: Is this something that makes sense for tasty-bench design-wise?
Here is a draft of how I imagine using that feature: