Bodigrim / tasty-bench

Featherlight benchmark framework, drop-in replacement for criterion and gauge.
https://hackage.haskell.org/package/tasty-bench
MIT License

Custom, ad-hoc metrics? #42

Closed by dan-blank 1 year ago

dan-blank commented 1 year ago

What: I would like to use tasty-bench to measure not only performance metrics but also custom metrics. My use case: in a hobby project I am working on an optimization problem, and I would like to see the effect of different heuristics reflected in the benchmarks: "when using this heuristic, the produced solution is this good". It would be quite nice to be able to use the tasty-bench machinery for this (i.e. comparing values between benchmark runs)!

Question: Is this something that makes sense for tasty-bench design-wise?

Here is a draft of how I imagine using that feature:

{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE ExistentialQuantification #-}

module Demo where

import Control.DeepSeq (NFData)
import GHC.Generics (Generic)
import Test.Tasty.Bench (Benchmark, bench, defaultMain, nf)

data Problem = Problem
data Solution = Solution deriving (Generic, NFData)
problem1 :: Problem
problem1 = undefined

solveProblemInstance :: Problem -> Solution
solveProblemInstance = undefined

steps :: Solution -> Int
steps = undefined

customBench :: String -> [AdditionalDataKV] -> Benchmark
customBench = undefined

-- Ord, because the values should be comparable between benchmarks.
data AdditionalDataKV = forall a. Ord a => AdditionalDataKV String a

main :: IO ()
main =
    defaultMain
        [ bench "solveProblemInstance problem1" $ nf solveProblemInstance problem1
        , customBench
            "steps required in solution of 'solveProblemInstance problem1'"
            [AdditionalDataKV "numberOfSteps" (steps $ solveProblemInstance problem1)]
        ]
Bodigrim commented 1 year ago

You can combine tasty-bench with any other tasty test provider within the same test/bench suite.
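
For instance, here is a minimal sketch of such a mixed suite (hypothetical names; it works because tasty-bench's Benchmark is a synonym for tasty's TestTree):

import Test.Tasty.Bench (bench, defaultMain, nf)
import Test.Tasty.HUnit (testCase, (@?=))

main :: IO ()
main = defaultMain
    [ -- an ordinary benchmark, timed by tasty-bench
      bench "sum [1..1000]" $ nf sum [1 .. 1000 :: Int]
      -- an ordinary tasty-hunit test case, living in the same tree
    , testCase "sum is correct" $ sum [1 .. 10 :: Int] @?= 55
    ]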

dan-blank commented 1 year ago

Thank you for your answer! I am still new to the tasty-verse - can you give me one more hint? How can I write a tasty test provider that just returns a result (and stores it in a CSV, alongside the performance measurements like CPU time spent)?

Once I can do that, then together with the awk incantation given in the docs, I should have everything I need.

dan-blank commented 1 year ago

Hmm, it seems like I could use testCaseInfo :: TestName -> IO String -> TestTree from tasty-hunit in the bench suite and then use the --csv flag to store it all in one CSV file. I will try that!

dan-blank commented 1 year ago

Yes, I got it to work! In retrospect, this issue should be titled "Let csvReporter also return the result of the benchmark", but together with the awk in the docs, I now have everything I need - thank you! :)


For future readers:
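
A self-contained sketch of the approach from this thread (toy Problem/Solution types stand in for the real ones; as confirmed above, the string returned by testCaseInfo ends up in the same CSV as the timings when the suite is run with --csv):

{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE DeriveGeneric #-}

module Main where

import Control.DeepSeq (NFData)
import GHC.Generics (Generic)
import Test.Tasty.Bench (bench, defaultMain, nf)
import Test.Tasty.HUnit (testCaseInfo)

-- Toy stand-in for the real optimization problem.
newtype Solution = Solution [Int] deriving (Generic, NFData)

solveProblemInstance :: Int -> Solution
solveProblemInstance n = Solution [1 .. n]

-- The custom metric: how many steps the solution takes.
steps :: Solution -> Int
steps (Solution xs) = length xs

main :: IO ()
main = defaultMain
    [ -- performance metric, measured by tasty-bench as usual
      bench "solveProblemInstance 1000" $ nf solveProblemInstance 1000
      -- custom metric, reported as the info string of an HUnit test case
    , testCaseInfo "numberOfSteps of solveProblemInstance 1000" $
        pure (show (steps (solveProblemInstance 1000)))
    ]

Run the executable with --csv results.csv and both the timings and the numberOfSteps value land in results.csv, ready to be compared between runs.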