Open CLOVIS-AI opened 5 months ago
@CLOVIS-AI, could you please elaborate a bit on your use case?
As far as I know, there were no plans to make benchmark generation API public. @qurbonzoda can correct me if I'm wrong.
I'm writing a test framework in which tests are declared through a proper Kotlin DSL. Because there is no annotation magic, the resulting tests are much easier to reason about.
suite("My test suite") {
    test("This is a normal test") {
        // As you can see, you don't have to think about all the usual hacks,
        // like runTest needing to be the returned expression for Kotlin/JS,
        // or the backticks function-name syntax not working on all platforms.
    }

    suite("A nested test suite") {
        // No need to declare inner classes with a specific annotation…
        // No need to remember which annotation is the correct one
        // for parameterized tests (it doesn't work on Native anyway)
        repeat(100) {
            test("Test #$it/100") {
                // …
            }
        }

        // Actually, we can use Kotlin's features to go even further…
        parameterized {
            val str by parameterOf("1", "foo", "", "は")
            val x by parameter(0.0, 1.0, 5.0, Math.PI, -1.0, Double.POSITIVE_INFINITY)

            test("Test $str & $x") { … }
        }
    }
}
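For illustration, the `parameterOf` delegate used above could be backed by a small property-delegate sketch. All names here are hypothetical, not the API of any existing framework; a real framework would re-run the test body once per index.

```kotlin
import kotlin.reflect.KProperty

// Hypothetical sketch: a parameter source the framework can iterate over.
class Parameter<T>(private val values: List<T>) {
    var index = 0 // the framework would advance this between test runs
    operator fun getValue(thisRef: Any?, property: KProperty<*>): T = values[index]
}

// Mirrors the parameterOf(…) used in the DSL example above.
fun <T> parameterOf(vararg values: T): Parameter<T> = Parameter(values.toList())

fun main() {
    val str by parameterOf("1", "foo", "")
    println(str) // reads the currently selected value: "1"
}
```

Because `by` re-invokes `getValue` on every read, the framework only has to bump `index` before re-executing the test body; the test code itself stays untouched.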
I would like to integrate kotlinx-benchmark into this framework.
For example, something like
@State(Scope.Benchmark)
class MyBenchmark {
    private val size = 10
    private val list = ArrayList<Int>()

    @Setup
    fun prepare() {
        for (i in 0..<size) {
            list.add(i)
        }
    }

    @TearDown
    fun cleanup() {
        list.clear()
    }

    @Benchmark
    fun benchmarkMethod(): Int {
        return list.sum()
    }
}
may become something like
benchmark("My benchmark title") {
    val size = 10
    val list = ArrayList<Int>()

    setup {
        for (i in 0..<size)
            list.add(i)
    }

    tearDown {
        list.clear()
    }

    execute {
        list.sum()
    }
}
In my opinion, this is much closer to what I expect from a modern Kotlin library.
To be clear, I'm not asking for kotlinx-benchmark to create this DSL! Each annotation must ultimately trigger something in the engine to declare the various pieces. I'd like that low-level API to be accessible, so that other people can write libraries on top of it. Currently, the only officially supported way to integrate kotlinx-benchmark into a test framework like Prepared or Kotest would be to have it generate Kotlin code with the annotations, and then run that…
The API exposed by kotlinx-benchmark could look something like:
val engine = BenchmarkEngine()
engine.registerSetup { … }      // equivalent of @Setup
engine.registerTeardown { … }   // equivalent of @TearDown
engine.registerBenchmark { … }  // equivalent of @Benchmark
engine.execute()                // blocks or suspends until all registered callbacks have run — the equivalent of what the generated JUnit declaration actually does
(Since this would be a low-level API that most users are not expected to use directly, it doesn't really matter how "beautiful" it is, as long as it allows doing declaratively everything the annotations can, without needing to generate, compile, and run fake Kotlin code.)
This way, the library would expose enough flexibility to be used in other contexts and for other use cases.
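To make the connection concrete, the `benchmark { … }` DSL sketched earlier could desugar into calls on such an engine. Both the engine and the builder below are hypothetical stand-ins, not existing kotlinx-benchmark API; a real engine would do warm-up, iteration, and timing instead of a single pass.

```kotlin
// Hypothetical stand-in for the proposed low-level engine.
class BenchmarkEngine {
    private val setups = mutableListOf<() -> Unit>()
    private val teardowns = mutableListOf<() -> Unit>()
    private val benchmarks = mutableListOf<() -> Any?>()

    fun registerSetup(block: () -> Unit) { setups += block }
    fun registerTeardown(block: () -> Unit) { teardowns += block }
    fun registerBenchmark(block: () -> Any?) { benchmarks += block }

    // A real engine would measure timings; this sketch just runs everything once.
    fun execute() {
        for (bench in benchmarks) {
            setups.forEach { it() }
            bench()
            teardowns.forEach { it() }
        }
    }
}

// A DSL like the one proposed above, layered on top of the engine.
class BenchmarkBuilder(private val engine: BenchmarkEngine) {
    fun setup(block: () -> Unit) = engine.registerSetup(block)
    fun tearDown(block: () -> Unit) = engine.registerTeardown(block)
    fun execute(block: () -> Any?) = engine.registerBenchmark(block)
}

// The title would be recorded by a real engine; it is unused in this sketch.
fun benchmark(title: String, body: BenchmarkBuilder.() -> Unit): BenchmarkEngine {
    val engine = BenchmarkEngine()
    BenchmarkBuilder(engine).body()
    return engine
}

fun main() {
    val engine = benchmark("My benchmark title") {
        val list = ArrayList<Int>()
        setup { for (i in 0 until 10) list.add(i) }
        tearDown { list.clear() }
        execute { list.sum() }
    }
    engine.execute()
}
```

The point is that the DSL layer is trivial once registration is a plain function call: all the hard parts (measurement, reporting, platform support) stay inside the engine.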
Hello! I would like to use this library as part of a DSL that I own, to be able to declare benchmarks dynamically.
However, the project currently requires declaring benchmarks with annotations.
Would it be possible to expose runtime functions equivalent to each annotation, so that the library's measurement machinery could be used from regular code, without needing compiled annotations?