AntonioDiTuri opened 7 months ago
Perfect, thank you @AntonioDiTuri! 🎉
PS: There might be relevant information that we can draw from the design doc (which has now been archived), especially from section 2 (Sustainability Assessment Pipeline) onward. Some of the info in the design doc has been refactored and moved to our docs, but that was mostly for the infrastructure sections, not the pipeline sections.
:+1: It is good that we are talking about how we can distribute the tasks among the contributors and how we can enable them to contribute. This is the current structure:
A suggestion from my side would be to look at this:
Hi Leo, thanks for the input!
From what I got from the last meeting, I thought that those points:
Are more in scope for Q3 - at least that's what I remember from our discussion. I will update the roadmap accordingly.
About the naming: I don't like naming the first package Pipeline/Automation because to me the whole deliverable of Q2 is a pipeline. In the context of the pipeline we have the 3 steps: deploy - run (or benchmark, fine for me) - report.
About reporting: it is true that the package might look smaller, but I see no problem there. Let's see the action proposal and move the discussion there if needed.
Are more in scope for Q3
Yes, agree. The other points are more important now (and we also need to have the automation in place to write documentation about it). If we have time in Q2, that would be a good task; otherwise it's Q3 material 👍
About the naming
+1
Switched me and niki on proposals 2 and 3. Added proposal 4. Clarified proposal 2.
Note for the Report proposal - the SRE metrics requested by the Falco team are listed here:
https://github.com/falcosecurity/cncf-green-review-testing/discussions/14#discussioncomment-8610132
Intro
Hi everybody! We are back from KubeCon, and after the Retro it is time to plan the next Quarter.
During the meeting on the 10th of April, we outlined the priorities for the next Quarter.
Outcome
We will work on the Pipeline Automation of the Green Review for Falco. The pipeline will have 3 steps in scope:
In addition to the pipeline automation, we will also work on a fourth proposal:
This is one of the questions that the investigation should answer; more details to follow in the tracking issue.
We need a PR drafted for each of these proposals.
Todo
For each proposal we will need to:
Proposal 1 - Deploy
Leads: @rossf7 @dipankardas011
Proposal 2 - Run
Leads: @nikimanoledaki @locomundo
After deploying the Falco project, we need to run the benchmark tests. This proposal assumes that the benchmarks are the ones already defined here. See proposal 4 to better understand why this is relevant. The objective of this proposal is to document how to trigger the benchmarks from the GitHub Action.
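As a starting point for the discussion, here is a minimal sketch of one way the benchmark run could be triggered programmatically, assuming a workflow with `workflow_dispatch` enabled. The org, repo, and workflow file name below are placeholders, not our actual setup:

```python
# Hypothetical sketch: start a benchmark run by sending a workflow_dispatch
# event through the GitHub REST API. OWNER/REPO/WORKFLOW are placeholders.
import os

import requests

GITHUB_API = "https://api.github.com"
OWNER = "cncf-tags"              # placeholder org
REPO = "green-reviews-tooling"   # placeholder repo
WORKFLOW = "benchmark.yaml"      # hypothetical workflow file name


def trigger_benchmark(ref: str = "main") -> None:
    """Fire a workflow_dispatch event so the benchmark job runs on `ref`."""
    url = f"{GITHUB_API}/repos/{OWNER}/{REPO}/actions/workflows/{WORKFLOW}/dispatches"
    resp = requests.post(
        url,
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        },
        json={"ref": ref},
    )
    resp.raise_for_status()  # GitHub returns 204 No Content on success


if __name__ == "__main__":
    trigger_benchmark()
```

The same dispatch could of course be done with `gh workflow run` or a `repository_dispatch` event instead; which mechanism fits best is part of what this proposal should document.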
Proposal 3 - Report
Leads: @AntonioDiTuri Chris Chinchilla
After deploying the project and running the benchmarking tests, we need to collect the metrics. At the moment we are just reading the Kepler metrics through Prometheus. We need a long-term solution, and we also need to discuss whether we are interested in saving lower-level metrics (CPU, memory, etc.).
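For context on the current approach, here is a minimal sketch of reading Kepler energy metrics over the Prometheus HTTP API. The Prometheus address, namespace, and the metric/label names (`kepler_container_joules_total`, `container_namespace`, `container_name`) are assumptions that should be checked against the actual cluster and Kepler version:

```python
# Minimal sketch: query per-container energy consumption from Prometheus.
# PROM_URL and the Kepler metric/label names are assumptions to verify.
import requests

PROM_URL = "http://prometheus.monitoring.svc:9090"  # assumed in-cluster address


def query_energy(namespace: str = "falco") -> list:
    """Return per-container cumulative energy (joules) for one namespace."""
    promql = (
        "sum by (container_name) "
        f'(kepler_container_joules_total{{container_namespace="{namespace}"}})'
    )
    resp = requests.get(
        f"{PROM_URL}/api/v1/query", params={"query": promql}, timeout=30
    )
    resp.raise_for_status()
    return resp.json()["data"]["result"]


if __name__ == "__main__":
    for series in query_energy():
        # Each instant-vector sample is a (timestamp, value-as-string) pair.
        print(series["metric"].get("container_name"), series["value"][1])
```

A long-term solution would likely push these results somewhere durable (object storage, a TSDB with longer retention) instead of reading them ad hoc; that is part of the discussion for this proposal.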
Proposal 4 - Benchmarking investigation
Leads: TBD
While we work on automating the current solution for Falco, we might also want to start the discussion about a standardized set of benchmarking tests we would like to run. It would be good to involve a benchmarking expert, because we want to be sure we reproduce a meaningful scenario that produces comparable metrics across the different projects we will review.
If you want to get involved, please drop a comment below!