project-oak / oak

Meaningful control of data in distributed systems.
Apache License 2.0

Improve oak functions benchmarks #4706

Closed: tiziano88 closed this issue 1 month ago

tiziano88 commented 7 months ago

cc @waywardgeek

tiziano88 commented 7 months ago

cc @andrisaar

tiziano88 commented 7 months ago

@pmcgrath17

tiziano88 commented 7 months ago

I am modifying the existing bench.

The results are somewhat interesting.

For an echo Wasm module:

test wasm::tests::bench_invoke_echo   ... bench:      88,085 ns/iter (+/- 6,912)

For a k/v lookup Wasm module that does a single lookup:

test wasm::tests::bench_invoke_lookup ... bench:     104,695 ns/iter (+/- 9,098)

If I increase the number of lookups to 100:

test wasm::tests::bench_invoke_lookup ... bench:   1,685,143 ns/iter (+/- 556,036)

And to 1,000:

test wasm::tests::bench_invoke_lookup ... bench:  15,497,293 ns/iter (+/- 584,814)

And to 10,000:

test wasm::tests::bench_invoke_lookup ... bench: 155,914,223 ns/iter (+/- 14,439,435)

So it seems to scale roughly linearly with the number of lookups, as we would naively expect.
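As a quick sanity check on the linearity, we can subtract a fixed invocation overhead (using the ~88µs echo result as the baseline) and compute a per-lookup cost for each run. The counts and timings are the numbers reported above; treating the echo time as the fixed overhead is my assumption:

```rust
/// Estimated per-lookup cost after subtracting a fixed invocation
/// overhead (taken here to be the echo benchmark result).
fn per_lookup_ns(lookups: u64, total_ns: u64, baseline_ns: u64) -> f64 {
    total_ns.saturating_sub(baseline_ns) as f64 / lookups as f64
}

fn main() {
    let baseline = 88_085; // echo bench: invocation cost with no lookups
    // (lookups per invocation, measured ns/iter) from the runs above.
    for (n, total) in [
        (1u64, 104_695u64),
        (100, 1_685_143),
        (1_000, 15_497_293),
        (10_000, 155_914_223),
    ] {
        println!("{n:>6} lookups: {:.0} ns/lookup", per_lookup_ns(n, total, baseline));
    }
}
```

This works out to roughly 15–17µs per lookup across all four runs, which is consistent with linear scaling dominated by per-lookup cost rather than per-invocation overhead.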

Now I would like to see whether @waywardgeek's suggestion of introducing batching, applied at the Wasm lookup logic level, makes things any better.
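The idea behind batching is to amortize the per-call overhead (host call, instrumentation, copying) across many keys by crossing the lookup boundary once per batch instead of once per key. A minimal sketch of the shape of such an API, where `lookup_batch` is a hypothetical name and the real Oak Functions lookup interface may differ:

```rust
use std::collections::HashMap;

/// Hypothetical batched lookup: one call into the lookup logic for a
/// whole slice of keys, rather than one call per key. Sketch only; the
/// actual Oak Functions API is not this.
fn lookup_batch<'a>(
    store: &'a HashMap<Vec<u8>, Vec<u8>>,
    keys: &[Vec<u8>],
) -> Vec<Option<&'a Vec<u8>>> {
    keys.iter().map(|k| store.get(k)).collect()
}

fn main() {
    let mut store = HashMap::new();
    store.insert(b"key".to_vec(), b"value".to_vec());
    let results = lookup_batch(&store, &[b"key".to_vec(), b"missing".to_vec()]);
    assert_eq!(results[0], Some(&b"value".to_vec()));
    assert_eq!(results[1], None);
}
```

If the ~15–16µs per lookup measured above is mostly boundary-crossing cost rather than map-lookup cost, batching like this should shrink it substantially; if it is mostly the lookup itself, batching will help little.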