naver / fixture-monkey

Let Fixture Monkey generate test instances including edge cases automatically
https://naver.github.io/fixture-monkey
Apache License 2.0

Module rearrange and add Kotlin benchmark #950

Closed jinia91 closed 6 months ago

jinia91 commented 6 months ago

Summary

I suggest some modifications to the current structure to facilitate benchmark testing of the fixture-monkey-kotlin module using JMH:

  1. To ensure consistent results, use the same classes and apply the java-test-fixtures plugin for code reuse.

  2. Make a new module, fixture-monkey-benchmarks, to aggregate the sub-benchmark modules.

  3. Due to some local compile issues when defining Java classes within the Kotlin module, move ExpressionGeneratorJavaTestSpecs.java to a Java package in fixture-monkey-kotlin.

  4. Add default benchmark samples to the fixture-monkey-kotlin module (a sketch follows this list).

    For a proper comparison, the sample test cases are planned as follows:

    1. Benchmark for initializing Java classes in Kotlin
    2. Benchmark for initializing Kotlin classes in Kotlin
    3. Benchmark for initializing composite classes with Kotlin-Java internal references (todo)
  5. Modify the GitHub Actions workflow for benchmark reporting.
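
For item 4, a minimal sketch of what such a Kotlin JMH benchmark could look like. The class and method names follow the results reported later in this thread; the OrderSheet stand-in, the FixtureMonkey builder calls, and the measurement settings are assumptions, not the exact code in this PR:

```kotlin
import com.navercorp.fixturemonkey.FixtureMonkey
import com.navercorp.fixturemonkey.kotlin.KotlinPlugin
import com.navercorp.fixturemonkey.kotlin.giveMeOne
import org.openjdk.jmh.annotations.Benchmark
import org.openjdk.jmh.annotations.BenchmarkMode
import org.openjdk.jmh.annotations.Measurement
import org.openjdk.jmh.annotations.Mode
import org.openjdk.jmh.annotations.OutputTimeUnit
import org.openjdk.jmh.annotations.Scope
import org.openjdk.jmh.annotations.State
import org.openjdk.jmh.infra.Blackhole
import java.util.concurrent.TimeUnit

// Stand-in for the shared test-fixture class; the real OrderSheet
// would come from the java-test-fixtures source set mentioned above.
data class OrderSheet(val id: String, val quantity: Int)

// JMH generates subclasses of benchmark classes, so in Kotlin the
// class must be declared open.
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Measurement(iterations = 10) // matches the 10 samples reported below
open class KotlinObjectGenerationBenchMark {

    @Benchmark
    fun beanGenerateKotlinOrderSheetWithFixtureMonkey(blackhole: Blackhole) {
        // Build a Kotlin-aware FixtureMonkey and generate a Kotlin class;
        // consuming the result via the Blackhole prevents dead-code
        // elimination from skewing the measurement.
        val fixtureMonkey = FixtureMonkey.builder()
            .plugin(KotlinPlugin())
            .build()
        blackhole.consume(fixtureMonkey.giveMeOne<OrderSheet>())
    }
}
```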

How Has This Been Tested?

Local build and CI pipeline

Is the Document updated?

No need.

seongahjo commented 6 months ago

How about adding a new module to benchmark?

jinia91 commented 6 months ago

> How about adding a new module to benchmark?

I'm considering two approaches.

From the perspective of ~~scalability~~ flexibility, I would choose the second approach.

However, I'm not entirely confident, due to the downside of increased dependency complexity.

jinia91 commented 6 months ago
fixture-monkey

| Benchmark | Mode | Threads | Samples | Score | Score Error (99.9%) | Unit |
|---|---|---|---|---|---|---|
| ManipulationBenchmark.fixed | avgt | 1 | 10 | 576.165762 | 2.457355 | ms/op |
| ManipulationBenchmark.thenApply | avgt | 1 | 10 | 1301.278260 | 5.921343 | ms/op |
| ObjectGenerationBenchmark.beanGenerateOrderSheetWithFixtureMonkey | avgt | 1 | 10 | 596.718850 | 5.575784 | ms/op |
| ObjectGenerationBenchmark.builderGenerateOrderSheetWithFixtureMonkey | avgt | 1 | 10 | 505.365829 | 3.092013 | ms/op |
| ObjectGenerationBenchmark.fieldReflectionGenerateOrderSheetWithFixtureMonkey | avgt | 1 | 10 | 572.670919 | 6.588839 | ms/op |
| ObjectGenerationBenchmark.jacksonGenerateOrderSheetWithFixtureMonkey | avgt | 1 | 10 | 607.734411 | 4.826412 | ms/op |

fixture-monkey-kotlin Module

| Benchmark | Mode | Threads | Samples | Score | Score Error (99.9%) | Unit |
|---|---|---|---|---|---|---|
| KotlinObjectGenerationBenchMark.beanGenerateJavaOrderSheetWithFixtureMonkey | avgt | 1 | 10 | 1029.619206 | 21.428195 | ms/op |
| KotlinObjectGenerationBenchMark.beanGenerateKotlinOrderSheetWithFixtureMonkey | avgt | 1 | 10 | 980.861308 | 14.237564 | ms/op |
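
For reference, a run configuration matching these columns (mode avgt, 1 thread, 10 samples, scores in ms/op) could be expressed with the third-party me.champeau.jmh Gradle plugin. This is an assumed sketch, not necessarily the project's actual setup:

```kotlin
// build.gradle.kts (sketch, assuming the me.champeau.jmh plugin)
plugins {
    id("me.champeau.jmh") version "0.7.2"
}

jmh {
    benchmarkMode.set(listOf("avgt")) // average time per operation
    threads.set(1)                    // Threads column above
    iterations.set(10)                // Samples column above
    timeUnit.set("ms")                // scores reported as ms/op
    fork.set(1)
}
```

With this plugin, `./gradlew jmh` runs the benchmarks and prints result tables like the ones above.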
seongahjo commented 6 months ago

I'm sure you've given it a lot of thought. Can you tell me about a specific situation where flexibility is critical? I have no idea so far, and I'm not sure it's worth the added dependency complexity.

jinia91 commented 6 months ago

Currently, the execution speed of the performance benchmark check feels too slow, which hurts productivity.

With the current PR, adding just two Kotlin benchmarks has increased the run time by over 4 minutes.

As fixture-monkey continues to expand and more tests are added, I expect it to take even longer in the future.

So, in the near future, various pipeline setups are expected to be considered.

In such cases, one big benchmark module might lead to breaking changes.

Of course, this is just speculation on my part, and the problems of increased dependency complexity and additional maintenance points cannot be ignored.

seongahjo commented 6 months ago

Oh, I see, thank you for your detailed explanation. Yes, I agree that a large module is bad for performance, which in turn is bad for productivity. Then how about a new module that contains sub-benchmark modules, like fixture-monkey-tests?

jinia91 commented 6 months ago

Oh, that sounds good.

There seems to be no need to place the benchmark testing modules under the basic module structure. I will change direction and manage each sub-benchmark module under fixture-monkey-benchmarks.
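
A rough sketch of what this could look like in the build setup; the sub-module names here are assumptions, not the final names:

```kotlin
// settings.gradle.kts (sketch; sub-module names are assumptions)
include("fixture-monkey-benchmarks:fixture-monkey-benchmark")
include("fixture-monkey-benchmarks:fixture-monkey-kotlin-benchmark")
```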

Thank you for the great idea.