
Prometheus Alertmanager
https://prometheus.io
Apache License 2.0

Alertmanager: matchers should be able to handle conditions when a label is not present #3136


btwseeu78 commented 1 year ago

Alertmanager config:

global:
  resolve_timeout: 5m

receivers:
- name: "null"
- name: "blackhole"
- name: "msteams-platform_channel"
  webhook_configs:
    - send_resolved: true
      url: http://msteams.default.svc.cluster.local:2000/platform_channel
- name: "msteams-project_channel"
  webhook_configs:
    - send_resolved: true
      url: http://msteams.default.svc.cluster.local:2000/project_channel
- name: 'slack'
  slack_configs:
  - send_resolved: true
    username: "{{ template \"slack.default.username\" . }}"
    color: "{{ if eq .Status \"firing\" }}danger{{ else }}good{{ end }}"
    title: "[{{ .CommonLabels.namespace }}] {{ .CommonLabels.alertname }}"
    title_link: "{{ template \"slack.default.titlelink\" . }}"
    pretext: "{{ template \"slack.default.pretext\" . }}"
    text: "{{ range .Alerts }}{{ .Annotations.message }}\n          See {{ .GeneratorURL }}\n{{ end }}\n          See {{ .ExternalURL }}"
    footer: "{{ template \"slack.default.footer\" . }}"
    fallback: "{{ template \"slack.default.fallback\" . }}"
    callback_id: "{{ template \"slack.default.callbackid\" . }}"
    icon_emoji: "{{ if eq .Status \"firing\" }}:exclamation:{{ else }}:white_check_mark:{{ end }}"
    icon_url: "{{ template \"slack.default.iconurl\" . }}"
    channel: "#digital_cloud_platform_dev"

# Inhibition rules allow muting a set of alerts when another alert is
# firing.
# We use this to mute any warning-level notifications if the same alert is
# already critical.
inhibit_rules:
- source_match:
    severity: 'critical'
  target_match:
    severity: 'warning'
  # Apply inhibition if the alertname is the same.
  equal: ['alertname']

route:
  group_by:
  - alertname
  - namespace
  - job
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 12h
  receiver: "msteams-platform_channel"
  routes:
    - receiver: "null"
      match:
        alertname: "Watchdog"
    - receiver: "null"
      match:
        # Ignore alerts on kube-system because cluster is managed with gke
        namespace: "kube-system"
    - receiver: blackhole
      match:
        severity: "none"
    - receiver: "msteams-projecte_channel"
      matchers:
      - namespace =~ "^[a-z0-9]{3}-[0-9]{2,5}.*"
      continue: false
    - receiver: "msteams-prj-cert_channel"
      matchers:
      - host !~ "^[A-Za-z0-9]+.ope-test-fr$"

For the receiver msteams-prj-cert_channel: if host is 'test..ope-test-fr', it should not match and the alert should go to the default channel, which works. But if an alert does not have the host label at all, it still matches this route and is sent to msteams-prj-cert_channel. Logically that is correct, but we need a way to prevent it, because many alerts will be missing some of the labels a route matches on. We cannot rely on !~ alone, since it treats a missing label the same as a blank one.
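
To make the behaviour concrete, here is a rough illustration, assuming two hypothetical alerts (the label values are invented for this example). Alertmanager evaluates a matcher against a missing label as if it were the empty string, so the negative regexp matcher also matches an alert that has no host label at all:

    # Hypothetical alerts, for illustration only:
    #   alert A: {host="node01.ope-test-fr"}   -> host matches the regexp, !~ is false, route is skipped
    #   alert B: {namespace="monitoring"}      -> no host label at all
    # For alert B the missing host label is evaluated as the empty string "",
    # which does not match the regexp, so !~ is true and alert B is also
    # routed to msteams-prj-cert_channel.
    - receiver: "msteams-prj-cert_channel"
      matchers:
      - host !~ "^[A-Za-z0-9]+.ope-test-fr$"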

Alertmanager app version: v0.24.0. Routes were tested in https://prometheus.io/webtools/alerting/routing-tree-editor.
Test case {host="^[A-Za-z0-9]+.ope-test-fr"} works correctly; test case {namespace="monitoring"} routes the alert to msteams-prj-cert_channel.
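
The same routing decisions can also be reproduced locally with amtool's route tester, assuming the config above is saved as alertmanager.yml (the file name and label values here are only examples):

# An alert without a host label is routed to msteams-prj-cert_channel:
amtool config routes test --config.file=alertmanager.yml namespace=monitoring

# An alert whose host matches the regexp falls through to the default receiver:
amtool config routes test --config.file=alertmanager.yml host=node01.ope-test-fr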

simonpasquier commented 1 year ago

If I understand correctly, you want this, which ensures that the alert has a host label even though it doesn't match the regexp.

...
    - receiver: "msteams-prj-cert_channel"
      matchers:
      - host !~ "^[A-Za-z0-9]+.ope-test-fr$"
      - host != ""