elastic / kibana

Your window into the Elastic Stack
https://www.elastic.co/products/kibana

Regularly benchmarking and stress-testing the alerting framework and rule types #119845

Open mikecote opened 2 years ago

mikecote commented 2 years ago

The alerting system must be regularly benchmarked and stress-tested before every production release, preferably mirroring known complex customer environments. By benchmarking and comparing key health metrics across releases, we ensure we do not introduce regressions.
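The release-gating idea above, benchmark a build and compare its key health metrics against a known-good baseline, can be sketched as follows. The metric names, baseline values, and 10% tolerance are illustrative assumptions, not values taken from kbn-alert-load or Task Manager:

```python
# Minimal sketch of a regression check: compare the key health metrics from
# a benchmark run against a known-good baseline. Metric names, baseline
# values, and the tolerance are hypothetical, for illustration only.

BASELINE = {
    "rule_execution_p95_ms": 450.0,      # hypothetical baseline numbers
    "task_manager_drift_p95_ms": 800.0,
    "failed_rule_executions": 0.0,
}

def detect_regressions(current: dict, baseline: dict, tolerance: float = 0.10) -> list:
    """Return a list of metrics that regressed beyond the allowed tolerance."""
    regressions = []
    for metric, base in baseline.items():
        value = current.get(metric)
        if value is None:
            regressions.append(f"{metric}: missing from current run")
        elif value > base * (1 + tolerance):
            regressions.append(f"{metric}: {value} vs baseline {base}")
    return regressions

# Example: a run where Task Manager drift got noticeably worse.
run = {
    "rule_execution_p95_ms": 460.0,
    "task_manager_drift_p95_ms": 1100.0,
    "failed_rule_executions": 0.0,
}
print(detect_regressions(run, BASELINE))
# → ['task_manager_drift_p95_ms: 1100.0 vs baseline 800.0']
```

A real gate would load the baseline from a prior release's run and fail the build (or page the team) when the list is non-empty.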

There are various ongoing performance-testing and framework/tool creation efforts related to Kibana. Some research has been done to weigh the pros, cons, and applicability of each so we can invest where we see the best value proposition balanced with the quickest ROI. As research continues, it seems clear we'll extend one or more existing tools or frameworks into a solution. So, while we may start with one tool as an incremental first step, we're developing this against a set of requirements, foremost.

Front-runner for the starting-point tool/library: the Kibana Alerting / ResponseOps kbn-alert-load alert/rule testing tool.


Here are some of the WIP requirements we are evaluating and building out:

Stretch / next goals:


FYI: Frameworks/Tools that have been researched and ruled out for immediate purposes:

  1. The Kibana QA team created an API load-testing tool, kibana-load-testing. Patrick M researched it in 2020, and the Alerting/Rules team did not end up collaborating on it; because it drives the Kibana HTTP API, it isn't well suited to assessing the (background process) Task Manager at the moment.

  2. The Kibana Performance Working Group's upcoming tool (including folks like Spencer A / Tyler S / Daniel M / Liza K). They are discussing and working on a performance-testing tool and CI integration for Kibana's needs.

    • Eric is bringing requirements/context and generally participating in the Kibana Performance Working Group (v2) to benefit both groups.
    • Their timeline for Kibana Task Manager-centric automation support is cited as TBD; the UI is where they are investing first (as of Feb 2022). This is partly because the kbn-alert-load tool exists and is sufficient for teams (based on its usage).
elasticmachine commented 2 years ago

Pinging @elastic/kibana-alerting-services (Team:Alerting Services)

alexfrancoeur commented 2 years ago

Dropping this in here, but if we aren't already talking to the rally team, we may be able to use the dataset from these upcoming tracks: https://github.com/elastic/rally-tracks/pull/222, https://github.com/elastic/apm-server/pull/6731

mikecote commented 2 years ago

I will remove this issue (and assignees) from our iteration plan for now, as we would like @EricDavisX to pick this up in the coming weeks, building on the research done so far.

EricDavisX commented 2 years ago

I'm researching this and hoping to finish evaluating how the ResponseOps and Security-side teams have used the tool in the next few days. With that done, I'll be able to draw up a list of requirements and a modest plan for what to do next here.

EricDavisX commented 2 years ago

Still researching the kbn-alert-load tool - thanks, all, for the help. I'm also finishing a first draft of a requirements document that QA (along with Engineering) will assess; then we'll form a plan and adjust the bullet points above.

EricDavisX commented 2 years ago

The MLR-QA team is wrapping up a prototype Jenkins job to run the kbn-alert-load tool (while the Security team has a prototype done in Buildkite, FYI!). I'll post details in Slack for the ResponseOps team.

EricDavisX commented 2 years ago

An update on where we are: we did a proof of concept in Jenkins and have decided to continue iterating on it from the machine-learning-qa-infra Jenkins server.

We've enhanced the Jenkins run to always delete the ecctl deployments. We'll continue updating this issue periodically with progress.
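The "always delete the ecctl deploys" cleanup step could look roughly like the sketch below: select benchmark deployments that are past their lifetime, then shut each one down (the real job shells out to `ecctl` for the actual deletion). The record shape, the `kbn-alert-load-` name prefix, and the 4-hour lifetime are assumptions for illustration:

```python
from datetime import datetime, timedelta, timezone

# Sketch of the post-run cleanup selection: find benchmark deployments that
# have outlived their lifetime so the job can shut them down. The deployment
# record shape and the name prefix are hypothetical.

def stale_deployments(deployments, prefix="kbn-alert-load-", max_age_hours=4, now=None):
    """Return the IDs of benchmark deployments older than max_age_hours."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=max_age_hours)
    return [
        d["id"]
        for d in deployments
        if d["name"].startswith(prefix) and d["created"] < cutoff
    ]

now = datetime(2022, 3, 1, 12, 0, tzinfo=timezone.utc)
deployments = [
    {"id": "a1", "name": "kbn-alert-load-0001", "created": now - timedelta(hours=6)},
    {"id": "b2", "name": "kbn-alert-load-0002", "created": now - timedelta(hours=1)},
    {"id": "c3", "name": "prod-cluster", "created": now - timedelta(hours=48)},
]
print(stale_deployments(deployments, now=now))
# → ['a1']  (only the expired benchmark deployment; prod is never touched)
```

Filtering by a dedicated name prefix is the safety mechanism here: it keeps the cleanup from ever selecting a non-benchmark deployment.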

EricDavisX commented 2 years ago

We have achieved an MVP that includes the checked metrics above. It runs nightly against several versions via cloud (CFT region) and reports pass/fail into our Slack channel. I'm going to focus on other work, though I may help drive QA in implementing a few small remaining low-hanging-fruit items.
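The per-version pass/fail report the nightly job posts to Slack could be formatted along these lines. The version numbers and results are hypothetical, and the actual Slack webhook call is omitted:

```python
# Sketch of the nightly summary posted to Slack: one PASS/FAIL line per
# tested stack version. Versions and results here are made up; posting the
# message (e.g. via an incoming webhook) is left out.

def summarize(results: dict) -> str:
    """Render {version: passed} results as a readable multi-line summary."""
    lines = ["kbn-alert-load nightly results:"]
    for version, passed in sorted(results.items()):
        status = "PASS" if passed else "FAIL"
        lines.append(f"  {version}: {status}")
    return "\n".join(lines)

print(summarize({"8.1.0": True, "7.17.1": True, "8.2.0-SNAPSHOT": False}))
```

Keeping the summary to one line per version makes failures easy to spot at a glance in a busy channel.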