Closed codefromthecrypt closed 11 months ago
Sorry about the earlier bad commit message. It seems you cannot cite someone like @sanposhiho
in the body. I know now!
I will rebase things and add copyright headers when the other PRs are in!
@kerthcet @sanposhiho I would like to rebase this and have folks run the benchmarks before and after API-affecting changes. Can we agree on this process? When changing the design, we should measure performance before and after rather than assuming changes are free.
ok ready. I added a no-op benchmark, which shows raw speed when nothing happens. Certain lifecycle hooks that never read data can be very fast, but the main point here is to have something that shows we don't accidentally decode protos when they aren't used.
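To illustrate the idea, here is a minimal sketch of what such a no-op vs. data-reading benchmark pair might look like. The `nodeInfo` and `plugin` types below are hypothetical stand-ins, not the actual repository's API:

```go
package main

import (
	"fmt"
	"testing"
)

// nodeInfo is an illustrative stand-in for guest-visible node data.
type nodeInfo struct{ name string }

func (n nodeInfo) Name() string { return n.name }

// plugin mimics a scheduler plugin hook that may or may not read node data.
type plugin func(nodeInfo) bool

// noop never touches node data, so it shows raw call overhead.
var noop plugin = func(nodeInfo) bool { return true }

// reader reads node data on every call.
var reader plugin = func(n nodeInfo) bool { return n.Name() != "" }

// bench runs a plugin under the standard library's benchmark harness.
func bench(p plugin) testing.BenchmarkResult {
	node := nodeInfo{name: "node-a"}
	return testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			p(node)
		}
	})
}

func main() {
	fmt.Println("noop:  ", bench(noop))
	fmt.Println("reader:", bench(reader))
}
```

In the real suite these would be ordinary `Benchmark*` functions run with `go test -bench`; the point is only that the no-op case bounds the fixed overhead, so any gap between the two cases is attributable to data access.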
Updated with real node data contributed by @sanposhiho!
So, on my laptop, you can see the overhead of plugin execution (which reads this data) is about a quarter of a millisecond, but almost nothing if the pod and node data aren't asked for. This is an important infrastructural step, as we can run these tests to make sure we don't accidentally eagerly fetch data.
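The "almost nothing if the data isn't asked for" behavior relies on decoding lazily. Here is a small sketch of that pattern, assuming a hypothetical `lazyNode` wrapper (the real code would unmarshal a protobuf instead of copying bytes):

```go
package main

import (
	"fmt"
	"sync"
)

// decodeCount tracks how many times the expensive decode actually ran.
var decodeCount int

// lazyNode wraps raw wire bytes and decodes them only on first access.
// The type and field names are illustrative, not the repository's API.
type lazyNode struct {
	raw  []byte
	once sync.Once
	name string
}

// Name decodes the raw bytes at most once, on first use.
func (n *lazyNode) Name() string {
	n.once.Do(func() {
		decodeCount++
		// Stand-in for proto.Unmarshal(n.raw, ...).
		n.name = string(n.raw)
	})
	return n.name
}

func main() {
	nodes := []*lazyNode{{raw: []byte("node-a")}, {raw: []byte("node-b")}}

	// A pass that never reads node data triggers no decoding.
	fmt.Println("decodes after no-op pass:", decodeCount) // 0

	// Reading one node decodes exactly one proto, even on repeated reads.
	_ = nodes[0].Name()
	_ = nodes[0].Name()
	fmt.Println("decodes after reading one node:", decodeCount) // 1
}
```

A benchmark that asserts the no-op path stays near zero is exactly what catches a regression where some code path starts calling `Name()` (and therefore decoding) eagerly.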
I'll take a look later today or early tomorrow morning, sorry for the limited bandwidth.
@codefromthecrypt

> I would like to rebase this and have folks run the benchmarks before and after API-affecting changes. Can we agree on this process? When changing the design, we should measure performance before and after rather than assuming changes are free.
Agree with that. Since performance is the critical part of this project, let's require people to run the bench on every major change. Can you add a mention in the PR template?
ok @sanposhiho I think I got everything. Let me know if this needs to be squashed.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: codefromthecrypt, sanposhiho
The full list of commands accepted by this bot can be found here.
The pull request process is described here
@codefromthecrypt Yep, please squash.
It's a follow-up topic after https://github.com/kubernetes-sigs/kube-scheduler-wasm-extension/issues/8. It'd be nice if this perf test were run automatically in CI so that people don't need to run it themselves.
so @kerthcet once this is merged, I'll help on two things:
I have work on the latter staged locally.
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
Our benchmarks still aren't realistic even in the simple case, because the inputs are not realistic.
This adds internal/e2e to allow us to benchmark with realistic data. This starts with real node and pod data contributed by @sanposhiho.
Which issue(s) this PR fixes:
NONE
Special notes for your reviewer:
Example run from my laptop.
Does this PR introduce a user-facing change?
NONE