Open m3adow opened 1 month ago
@m3adow Hello there! I'm here to help you with any bugs, questions, or contributions. Let's solve this together!
I found a similar unsolved discussion and several closed issues that might help you with your problem.
The closed issues suggest setting `alertmanagerConfigMatcherStrategy.type` to `None` to avoid the automatic namespace matchers; in one of them, setting `alertmanagerConfigMatcherStrategy.type` to `None` resolved the confusion. Another discussed option is a global `AlertmanagerConfig` CRD referenced via the `spec.alertmanagerConfiguration` field, with namespace enforcement disabled using `spec.alertmanagerConfigMatcherStrategy.type: None`.

To achieve your goal of matching either the `namespace` or the `exported_namespace` label, you can follow these steps:

Set `alertmanagerConfigMatcherStrategy.type` to `None`:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: alertmanager
spec:
  alertmanagerConfigMatcherStrategy:
    type: None
```
Configure the `AlertmanagerConfig` CRD:
```yaml
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: my-test
  namespace: my-test
spec:
  receivers:
    - name: my-test
    - name: "null"
  route:
    receiver: "null"
    matchers:
      - name: namespace
        matchType: "!="
        value: ""
    routes:
      - matchers:
          - name: exported_namespace
            matchType: "="
            value: my-test
        receiver: my-test
      - matchers:
          - name: namespace
            matchType: "="
            value: my-test
        receiver: my-test
```
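To illustrate how such a route tree behaves, here is a small simulation of Alertmanager's first-match routing. This is a simplified editorial model, not Alertmanager's actual code, and the label values are illustrative:

```python
def matches(matchers, labels):
    """Check a list of (name, op, value) matchers against an alert's labels."""
    for name, op, value in matchers:
        actual = labels.get(name, "")
        if op == "=" and actual != value:
            return False
        if op == "!=" and actual == value:
            return False
    return True

def route(labels):
    """Simplified model of the AlertmanagerConfig route tree above."""
    root = [("namespace", "!=", "")]          # root matcher
    if not matches(root, labels):
        return None                           # alert never enters this subtree
    children = [
        ([("exported_namespace", "=", "my-test")], "my-test"),
        ([("namespace", "=", "my-test")], "my-test"),
    ]
    for matchers, receiver in children:
        if matches(matchers, labels):
            return receiver                   # first matching child wins
    return "null"                             # falls back to the root receiver

# GMP metric: routed via exported_namespace
print(route({"namespace": "gmp-system", "exported_namespace": "my-test"}))  # -> my-test
# Regular metric: routed via namespace
print(route({"namespace": "my-test"}))  # -> my-test
# Watchdog has no namespace label, so it never enters this subtree
print(route({"alertname": "Watchdog"}))  # -> None
```

Note how any alert that carries some other `namespace` value still enters the subtree and lands on the `"null"` root receiver, which is the clutter discussed below.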
This configuration should help you match alerts based on either `namespace` or `exported_namespace` without cluttering the Alertmanager UI with unnecessary receivers.
To continue talking to Dosu, mention @dosu.
My bad, I made a copy & paste error in my manifest. Corrected the `spec.route.receiver` to use `my-test`. Apart from that, @dosu's manifest is identical to mine.
But that way generates one entry in the Alertmanager UI per alert and AlertmanagerConfig: seven AlertmanagerConfigs and five alerts generate 35 entries. That's a lot of clutter.
Could you provide more details on the specific side effects or issues you encountered when trying other approaches? This might help in suggesting a more refined solution.
To continue talking to Dosu, mention @dosu.
GMP uses the `exported_namespace` label for the identification of the correct namespace to alert for. At the same time, we have other metrics which still use the `namespace` label for that.
Can't you fix GMP to use the `namespace` label instead?
> Can't you fix GMP to use the `namespace` label instead?
No, that's fixed. As I still couldn't find a sensible solution, I guess I will be creating two `AlertmanagerConfig` CRDs per namespace, one for `namespace` and one for `exported_namespace`. That, in addition to an `alertmanagerConfigMatcherStrategy.type: None` setting and thorough use of Kustomize and YAML anchors for as much DRYness as possible, is probably the least annoying solution.
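That two-CRD-per-namespace approach could look roughly like the following sketch. The resource names and the `my-test` receiver are illustrative assumptions; in practice Kustomize would stamp out the namespace-specific values:

```yaml
# Sketch: two AlertmanagerConfigs per namespace (illustrative names),
# assuming alertmanagerConfigMatcherStrategy.type: None on the Alertmanager.
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: match-namespace
  namespace: my-test
spec:
  receivers:
    - name: my-test
  route:
    receiver: my-test
    matchers:
      - name: namespace
        matchType: "="
        value: my-test
---
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: match-exported-namespace
  namespace: my-test
spec:
  receivers:
    - name: my-test
  route:
    receiver: my-test
    matchers:
      - name: exported_namespace
        matchType: "="
        value: my-test
```

Note that YAML anchors cannot be shared across the `---` document separator, so the DRYness would have to come from Kustomize rather than from anchors between the two CRDs.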
So it means that, for example, metrics from kube-state-metrics always have `namespace="<kube-state-metrics namespace>"` and `exported_namespace="<namespace of the resource>"`?
Correct. And that's the exact problem. While kube-state-metrics runs in one fixed namespace my platform team controls, an alert for a long-running CronJob should go to the application team responsible for the namespace.
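Concretely, a kube-state-metrics sample scraped through GMP would then carry both labels, along these lines (the metric name and label values here are illustrative):

```
kube_cronjob_status_active{namespace="platform-system", exported_namespace="team-a", cronjob="nightly-backup"} 1
```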
What happened?
Description
I'm currently working on bringing our Google Managed Prometheus (GMP) alerts into our Alertmanager instance, which is deployed with Prometheus Operator. GMP uses the `exported_namespace` label for the identification of the correct namespace to alert for. At the same time, we have other metrics which still use the `namespace` label for that. Therefore, I need to find a way to match either of those labels in each `AlertmanagerConfig` our teams deploy in their namespaces, preferably with only one `AlertmanagerConfig` CRD.

If I understood correctly, it's not possible to OR the `spec.route.matchers`. Additionally, a `namespace` matcher is automatically added to each `AlertmanagerConfig` as long as the Alertmanager instance is configured with `alertmanagerConfigMatcherStrategy.type: OnNamespace`, which is the default as well. All my efforts until now either didn't work or had some very annoying side effects. My best approach right now is to:

- set `alertmanagerConfigMatcherStrategy.type` to `None`
- configure the `AlertmanagerConfig` CRD to use `namespace != ""` as a `spec.route.matchers` item to prevent Watchdog alerts from matching
- add sub-routes matching `namespace = "mynamespace"` and `exported_namespace = "mynamespace"`

The problem is that the `.spec.route.receiver` is triggered for every matching alert, which is a lot, of course. Although the CRD description makes it sound like `.spec.route.receiver` could be omitted, it's not possible. Therefore, I have to configure a "null" receiver for each `AlertmanagerConfig` CRD, which is still shown in the Alertmanager UI, cluttering the overview with useless information.

Is there any good way to achieve what I want?
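For reference, the `namespace != ""` guard plus mandatory "null" receiver described above corresponds to a route skeleton roughly like this (receiver names are illustrative):

```yaml
# Sketch of the workaround's route skeleton
spec:
  receivers:
    - name: "null"   # required because spec.route.receiver cannot be omitted
  route:
    receiver: "null" # catches everything the sub-routes don't; clutters the UI
    matchers:
      - name: namespace
        matchType: "!="
        value: ""    # keeps namespace-less alerts like Watchdog out
```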
Prometheus Operator Version
Kubernetes Version
Kubernetes Cluster Type
GKE
How did you deploy Prometheus-Operator?
helm chart: prometheus-community/kube-prometheus-stack
Manifests