Closed — jinia91 closed this 6 months ago
How about adding a new module for benchmarking?
I'm considering two approaches.
From the perspective of ~scalability~ flexibility, I'd choose the second approach.
However, I'm not entirely confident, given the downside of increased dependency complexity.
fixture-monkey

Benchmark | Mode | Threads | Samples | Score | Score Error (99.9%) | Unit |
---|---|---|---|---|---|---|
ManipulationBenchmark.fixed | avgt | 1 | 10 | 576.165762 | 2.457355 | ms/op |
ManipulationBenchmark.thenApply | avgt | 1 | 10 | 1301.278260 | 5.921343 | ms/op |
ObjectGenerationBenchmark.beanGenerateOrderSheetWithFixtureMonkey | avgt | 1 | 10 | 596.718850 | 5.575784 | ms/op |
ObjectGenerationBenchmark.builderGenerateOrderSheetWithFixtureMonkey | avgt | 1 | 10 | 505.365829 | 3.092013 | ms/op |
ObjectGenerationBenchmark.fieldReflectionGenerateOrderSheetWithFixtureMonkey | avgt | 1 | 10 | 572.670919 | 6.588839 | ms/op |
ObjectGenerationBenchmark.jacksonGenerateOrderSheetWithFixtureMonkey | avgt | 1 | 10 | 607.734411 | 4.826412 | ms/op |
fixture-monkey-kotlin Module

Benchmark | Mode | Threads | Samples | Score | Score Error (99.9%) | Unit |
---|---|---|---|---|---|---|
KotlinObjectGenerationBenchMark.beanGenerateJavaOrderSheetWithFixtureMonkey | avgt | 1 | 10 | 1029.619206 | 21.428195 | ms/op |
KotlinObjectGenerationBenchMark.beanGenerateKotlinOrderSheetWithFixtureMonkey | avgt | 1 | 10 | 980.861308 | 14.237564 | ms/op |
I'm sure you've given it a lot of thought. Can you tell me about a specific situation where flexibility is critical? I can't think of one so far, and I'm not sure it's worth the added dependency complexity.
Currently, the benchmark check runs too slowly, which hurts productivity.
With the current PR, adding just two Kotlin benchmarks increased the run time by over 4 minutes.
As fixture-monkey continues to expand and more tests are added, I expect it to take even longer in the future.
So, in the near future, various pipelines such as:
are expected to be considered.
In such cases, one big benchmark module might lead to breaking changes.
Of course, this is just my speculation, and the problems of increased dependency complexity and additional management points cannot be ignored.
Oh, I see, thank you for your detailed explanation.
Yes, I agree that a large module hurts benchmark performance, which in turn hurts productivity.
Then how about a new module that contains sub-benchmark modules, like fixture-monkey-tests?
Oh, that sounds good.
There seems to be no need to place the benchmark testing modules under the basic module structure.
I will change direction and manage each sub-benchmark module under fixture-monkey-benchmark.
Thank you for the great idea.
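For context, aggregating per-module benchmarks under a single parent module could look roughly like this in a Gradle settings file. This is only a sketch: the module names below mirror the discussion but are illustrative, not the final layout.

```kotlin
// settings.gradle.kts — illustrative sketch, not the actual project layout.
rootProject.name = "fixture-monkey"

// Hypothetical parent module grouping the sub-benchmark modules,
// similar to how fixture-monkey-tests groups test modules.
include("fixture-monkey-benchmarks")
include("fixture-monkey-benchmarks:fixture-monkey-benchmark")
include("fixture-monkey-benchmarks:fixture-monkey-kotlin-benchmark")
```

With this shape, each sub-benchmark module can be built and run independently, so adding a new benchmark module does not slow down the others.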
Summary

I suggest some modifications to the current structure to facilitate benchmark testing of the fixture-monkey-kotlin module using JMH:

- To ensure consistent results, use the same class, and use the java-test-fixtures plugin for code reuse.
- Make a new module, fixture-monkey-benchmarks, for aggregating sub-benchmark modules.
- Due to some local compile issues when defining Java classes within the Kotlin module, move ExpressionGeneratorJavaTestSpecs.java to a Java package in fixture-monkey-kotlin.
- Add default benchmark samples to the fixture-monkey-kotlin module.
- Modify the GitHub Actions workflow for benchmark reporting.
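The java-test-fixtures point can be sketched in Gradle Kotlin DSL. This is an assumption about how the wiring might look, not the PR's actual build file; the project path is illustrative.

```kotlin
// build.gradle.kts of a benchmark module — sketch only.
plugins {
    // Gradle's built-in plugin that adds a testFixtures source set
    // whose classes can be shared with other modules.
    `java-test-fixtures`
}

dependencies {
    // Consume shared test specs (e.g. the order-sheet classes used by
    // the benchmarks) from another module's testFixtures, so the Java
    // and Kotlin benchmarks generate the same objects.
    testImplementation(testFixtures(project(":fixture-monkey")))
}
```

Sharing one set of fixture classes this way is what keeps the Java and Kotlin benchmark scores comparable.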
How Has This Been Tested?
Local build and CI pipeline.
Is the Document updated?
No need.