QuentinBisson opened 4 months ago
@T-Kukawka coming from https://github.com/giantswarm/giantswarm/issues/29551, here is a list of issues I already found while reviewing. Feel free to either create individual tickets or handle them here:
- **aws-load-balancer-controller**: vintage AWS only, or will it also be deployed/deployable on CAPI? Because if it is, it's behind the `provider=aws` flag: https://github.com/giantswarm/prometheus-rules/blob/4da1fd343f5e2c77f9ba6b91b57233634103320b/helm/prometheus-rules/templates/alerting-rules/aws-load-balancer-controller.rules.yml#L1
- **EBS CSI**: ? https://github.com/giantswarm/prometheus-rules/blob/4da1fd343f5e2c77f9ba6b91b57233634103320b/helm/prometheus-rules/templates/alerting-rules/aws.workload-cluster.rules.yml#L79
- **cluster-autoscaler**: `NATGatewaysPerVPCApproachingLimit` and `ServiceUsageApproachingLimit` rely on aws-operator metrics.

There might be others, but this should help you a bit.
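For reference, the provider gating mentioned above looks roughly like this in the prometheus-rules chart (a minimal sketch; the exact values path and condition are assumptions, check the chart itself):

```yaml
# Hypothetical sketch: a rules file wrapped in a provider check like
# this only renders on AWS installations, so CAPI providers never
# receive the contained alerts at all.
{{- if eq .Values.managementCluster.provider.kind "aws" }}
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: aws-load-balancer-controller.rules
spec:
  groups: []
{{- end }}
```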
Slightly adapted the scope of this issue to be more focused on what's really important at this stage. :) Sorry for the closing and reopening.
Towards https://github.com/giantswarm/roadmap/issues/3312
Atlas is planning to migrate our monitoring setup to Mimir, targeting CAPI only. This will result in all data being in a single database, instead of the current one-prometheus-per-cluster setup. Current alerts have to be updated, as queries will see all data for all clusters, MC and WC alike, instead of data for one specific cluster at a time.
We already did a lot of work towards this on the current alerts (removed a lot of deprecated alerts and providers, fixed alerts that clearly were not working, and so on).
By doing so, we discovered a few things about Mimir itself, but also that a chunk of our alerts currently do not work on CAPI (e.g. they are based on vintage-only components, deprecated or missing metrics, and so on).
To ensure proper monitoring in CAPI and with Mimir, Atlas needs your help!
We would kindly ask all teams to help us out for the following use-cases, ordered in terms of priorities if they can't be performed all at once.
0. Create kickoff meetings for each team
1. Test and fix your team's alerts and dashboards on CAPI clusters.
A lot of the alerts we have do not work on CAPI (e.g. `cluster-autoscaler`, `ebs-csi` and `external-dns`), simply because they are flagged behind the "aws" provider only, or because they rely on metrics of vintage components (`cluster_created|upgraded` inhibitions). The specific alert issues that were identified will be added to the team issues.

2. Test and fix your team's alerts and dashboards on Mimir.
We currently have Mimir deployed on Golem for testing of alerts accessible as a datasource in grafana.
Current knowns/unknowns with Mimir are being written here by @giantswarm/team-atlas, but feel free to add what you found.
We request a second round of testing for Mimir because Mimir is inherently different from our vintage monitoring setup.

First, all metrics will be stored in one central place (we are not enabling multi-tenancy yet). This means that:

- aggregations must keep the cluster labels in their `by` clause;
- joins must keep the cluster labels in their `on` clause;
- the `absent` function should be used carefully, because it only returns a result when the vector is empty, and having it empty for all clusters in a MC at once seems relatively impossible. If you target one cluster in particular (e.g. `cluster_type="management_cluster"`), this could work, but we think it's best to rely on other mechanisms.

Second, for Grafana Cloud, we rely a lot on external labels (labels added by Prometheus when metrics leave the cluster, like installation, provider and so on), but data sent from Mimir to Grafana Cloud will not have those external labels anymore, so recording rule aggregations and joins must contain all external labels in the `on` and `by` clauses (that was mostly done by Atlas, but please review).
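As a sketch of the per-cluster scoping point above (metric and label names such as `cluster_id` are assumptions, check the labels on your installation):

```yaml
# Minimal sketch of a rule adapted for Mimir, where one query now
# sees every cluster's data at once.
- alert: KubeApiserverDownExample  # hypothetical alert name
  # Aggregate by the cluster labels so the alert fires once per
  # affected cluster, not once across all clusters combined:
  expr: count by (cluster_id, installation) (up{job="kube-apiserver"} == 0) > 0
  for: 5m

# absent() is risky here: it only fires when the series is missing
# for *every* cluster. If you must use it, scope it to one cluster:
# expr: absent(up{cluster_type="management_cluster", job="kube-apiserver"})
```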
Third, we know that the alerting link (the Prometheus query) in Opsgenie and Slack will not work directly, because Mimir does not have a UI per se (hint: it's Grafana). The only way to get this source link back is to migrate to Mimir's Alertmanager, but that's a whole other beast that we cannot tackle right now, so we advise you, for each alert, to try to find a dashboard that can be linked to the alert to help with oncall.
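One way to link a dashboard to an alert is through an annotation, roughly like this (a sketch only; the annotation key and dashboard URL/UID are hypothetical, use whatever convention your team's alerting setup expects):

```yaml
# Hypothetical sketch: carry a dashboard link on the alert so oncall
# has somewhere to click now that the Prometheus query link is gone.
- alert: SomeAlertExample
  expr: vector(1)
  annotations:
    description: "Something is wrong; see the linked dashboard."
    dashboard: "https://grafana.example.io/d/abc123/cluster-overview"
```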
3. Test Grafana Cloud dashboards with golem data
As Mimir data will be sent to Grafana Cloud by a single Prometheus with no external labels, we would like you to ensure that the Grafana Cloud dashboards your team owns work on golem.
This is currently blocked by https://github.com/giantswarm/roadmap/issues/3159
Further info:
To help you, you can always add alert tests in prometheus-rules; those are great :)
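Such tests follow the promtool unit-test format; a minimal sketch (the rule file name, alert name and labels below are assumptions matching the earlier example, adapt them to your own rules):

```yaml
# Sketch of a promtool unit test for an alerting rule. Run with:
#   promtool test rules this_file.yml
rule_files:
  - example.rules.yml  # hypothetical rule file
evaluation_interval: 1m
tests:
  - interval: 1m
    input_series:
      # Simulate an apiserver that is down in one workload cluster.
      - series: 'up{job="kube-apiserver", cluster_id="wc1", installation="golem"}'
        values: '0x10'
    alert_rule_test:
      - eval_time: 10m
        alertname: KubeApiserverDownExample  # hypothetical alert name
        exp_alerts:
          - exp_labels:
              cluster_id: wc1
              installation: golem
```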