cncf-tags / green-reviews-tooling

Project repository for the WG Green Reviews, part of the CNCF TAG Environmental Sustainability
https://github.com/cncf/tag-env-sustainability/tree/main/working-groups/green-reviews

[ACTION] Proposal 4: Benchmarking investigation #103

Open · AntonioDiTuri opened 4 months ago

AntonioDiTuri commented 4 months ago

Task Description

Parent Issue https://github.com/cncf-tags/green-reviews-tooling/issues/83

This issue structures the proposal for an investigation into the possible benchmarking strategies we could choose.

Current state

For our first benchmark run with Falco, we let the end user of the review (in this case the Falco project) choose its own benchmarking.

You can check the implementation details here: there is a GitRepository ref pointing to the repo that Falco set up (sketched below).
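For context on the mechanism, here is a minimal sketch of what such a Flux GitRepository ref looks like; the name, namespace, and URL are illustrative assumptions, not the actual Falco setup:

```yaml
# Minimal illustrative sketch of a Flux GitRepository ref.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: falco-benchmark    # hypothetical name
  namespace: benchmark     # hypothetical namespace
spec:
  interval: 5m
  # Hypothetical URL standing in for the repo Falco set up for the review.
  url: https://github.com/falcosecurity/example-benchmark-repo
  ref:
    branch: main
```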

In short, all the benchmarking is in the hands of the project that requests the review, and this might not be ideal for the future.

Please also note that the current setup mixes benchmarking techniques, such as the stress-ng framework and a synthetic data generator (this is due to the nature of Kepler's requirements on the simulation environment).
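To make the mix concrete, a stress-ng load of the kind mentioned above could be expressed as a simple Kubernetes Job; this manifest is a hypothetical sketch with illustrative values, not the configuration Falco actually uses:

```yaml
# Hypothetical sketch: a CPU stress workload run with stress-ng as a Job.
apiVersion: batch/v1
kind: Job
metadata:
  name: stress-ng-cpu      # hypothetical name
  namespace: benchmark     # hypothetical namespace
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: stress-ng
          image: ghcr.io/colinianking/stress-ng  # assumed image; any stress-ng image works
          # 4 CPU workers for 10 minutes, with a brief metrics summary at the end.
          # Values are illustrative only.
          args: ["--cpu", "4", "--timeout", "10m", "--metrics-brief"]
```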

Desired state

There are a couple of arguments for structuring this a bit differently.

Why?

Some open questions:

Some other considerations:

While it is good to have a standard approach to benchmarking, some projects (like Falco) might have specific benchmarking needs (e.g. Falco needed a given kernel event rate to show production-like behavior).

The no-brainer answer might be to accommodate each project's specific benchmarking needs.

But then we might end up without the same tests for all projects. So what should we do? This investigation proposal should produce a set of more fine-grained investigation issues that give us a direction.
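To illustrate the trade-off, one shape the answer could take is a shared baseline suite with an optional project-specific layer. The schema below is purely hypothetical and only meant to show where cross-project comparability breaks down:

```yaml
# Purely illustrative, hypothetical schema (not an existing format):
# every project runs the same baseline, but project-specific cases
# are exactly the part that is no longer comparable across projects.
project: falco
baseline:                  # identical for every reviewed project
  - cpu-stress
  - memory-stress
project_specific:          # e.g. Falco's kernel event rate requirement
  - name: kernel-event-rate
    eventsPerSecond: 1000  # illustrative value
```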

Goals to achieve

Nice to have