Hi, for some reason I couldn't use the default port 162 on the sample SNMP server you are reusing.
Can you try the following:
Actually, I managed to reproduce it using your exact command line. I hadn't noticed it before, but you should drop the leading 0x
from the engine IDs:
- --snmp.context-engine-id=8000000001020304
- --snmp.security-engine-id=8000000001020304
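For illustration, the difference looks like this on the command line (a sketch; the full working command appears later in this thread):

# Fails: the leading 0x must not be included (per this thread)
./snmp_notifier --snmp.version=V3 \
  --snmp.context-engine-id=0x8000000001020304 \
  --snmp.security-engine-id=0x8000000001020304

# Works: plain hexadecimal string, no 0x prefix
./snmp_notifier --snmp.version=V3 \
  --snmp.context-engine-id=8000000001020304 \
  --snmp.security-engine-id=8000000001020304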
Thanks @maxwo, it's working now: I can see the traps in the SNMP server. But two things:
1. I had to add a "dummy" severity (explained below):
- --alert.severities="dummy,critical,high,medium,low,warning,info"
2. In debug mode, the log only shows the access line:
192.168.216.59 - - [21/Feb/2023:11:15:14 +0000] "POST /alerts HTTP/1.1" 200 0
Logs should include the full header and description in debug mode.
About your points: you may have "dummy" alert severities somewhere in your Prometheus alert configurations. Can you describe precisely what is not working?
I have only the severities below:
critical,high,medium,low,warning,info
But the first value I am passing in
- --alert.severities="critical,high,medium,low,warning,info"
is not picked up by the notifier.
I am getting the error below in the logs:
ts=2023-02-28T06:56:00.111Z caller=http_server.go:132 level=error status=400 statustext="Bad Request" err="incorrect severity: critical" data="unsupported value type"
So I am adding "dummy" in the first position as a workaround:
- --alert.severities="dummy,critical,high,medium,low,warning,info"
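For quick iteration while debugging, a single-alert POST like this reproduces the 400 (a sketch, assuming the notifier listens on the default :9464 and using the Alertmanager webhook payload shape shown below):

curl -XPOST http://localhost:9464/alerts \
  -H 'Content-Type: application/json' \
  --data '{"receiver":"snmp-notifier","status":"firing","alerts":[{"status":"firing","labels":{"severity":"critical","alertname":"TestAlert"},"annotations":{"summary":"test","description":"test"}}]}'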
Can you try sending these alerts:
{
  "receiver": "snmp-notifier",
  "status": "firing",
  "groupLabels": {
    "environment": "production",
    "label": "test"
  },
  "alerts": [
    {
      "status": "firing",
      "labels": {
        "severity": "warning",
        "alertname": "TestAlert",
        "oid": "1.3.6.1.4.1.666.0.10.1.1.1.2.1"
      },
      "annotations": {
        "summary": "this is the random summary",
        "description": "this is the description of alert 1"
      }
    },
    {
      "status": "resolved",
      "labels": {
        "severity": "warning",
        "alertname": "TestAlert",
        "oid": "1.3.6.1.4.1.666.0.10.1.1.1.1.1"
      },
      "annotations": {
        "summary": "this is the random summary",
        "description": "this is the description of ActiveMQ alert"
      }
    },
    {
      "status": "firing",
      "labels": {
        "severity": "critical",
        "alertname": "TestAlert",
        "oid": "1.3.6.1.4.1.666.0.10.1.1.1.2.1"
      },
      "annotations": {
        "summary": "this is the summary",
        "description": "this is the description on job1"
      }
    },
    {
      "status": "resolved",
      "labels": {
        "severity": "critical",
        "alertname": "TestAlert",
        "oid": "1.3.6.1.4.1.666.0.10.1.1.1.2.1"
      },
      "annotations": {
        "summary": "this is the summary",
        "description": "this is the description on TestAlertWithoutOID"
      }
    }
  ]
}
with a command such as curl -XPOST http://localhost:9464/alerts -H 'Content-Type: application/json' --data '@alerts.json'?
I successfully received them with no error and this command line:
./snmp_notifier --log.level=debug \
--alert.severities="critical,high,medium,low,warning,info" \
--alert.default-severity="high" \
--snmp.version=V3 \
--snmp.authentication-enabled \
--snmp.authentication-protocol=SHA \
--snmp.authentication-username=snmp_user_v3 \
--snmp.authentication-password=auth_password_v3 \
--snmp.private-enabled \
--snmp.private-protocol=AES \
--snmp.private-password=encrypt_password_v3 \
--snmp.security-engine-id=8000000001020304 --snmp.context-name=''
Which should look a lot like yours.
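As a side note: when the notifier accepts the alerts but nothing reaches the receiver, a packet capture can show whether traps leave the host at all (a sketch, assuming tcpdump is available and the default trap port 162):

# Watch for outgoing SNMP traps on the default trap port
tcpdump -n -i any udp port 162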
Hi!
I am seeing the same behavior as @sanupanji regarding the severities. Since it's deployed on OpenShift 4.10, and OCP implements the watchdog/dead man's switch with severity "none", I need to extend the severities.
This configuration:
command:
  - /bin/snmp_notifier
  - >-
    --snmp.trap-description-template=/etc/snmp_notifier/description-template.tpl
  - '--snmp.destination=192.168.1.1:162'
  - '--web.listen-address=:8080'
  - '--alert.severities="critical,warning,info,none"' # <
  - '--log.level=debug'
throws errors as soon as a critical alert is raised:
ts=2023-03-21T09:43:22.386Z caller=http_server.go:132 level=error status=400 statustext="Bad Request" err="incorrect severity: critical" data="unsupported value type"
Adding the dummy resolved the issue:
- '--alert.severities="dummy,critical,warning,info,none"'
Best regards, Philip
I think the problem comes from the command-line itself:
- '--alert.severities="emergency,high,medium,low,critical,warning,info"'
leads to the following severity list in the notifier (note the presence of the " characters):
["emergency high medium low critical warning info"]
--alert.severities="emergency,high,medium,low,critical,warning,info", without the leading quotes, seems to correctly parse the value, as does '--alert.severities=emergency,high,medium,low,critical,warning,info':
[emergency high medium low critical warning info]
Can you confirm that? I will update the documentation to avoid such confusion.
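To make the quoting behavior concrete, here is a sketch (shell syntax, with the YAML case shown as comments; the /proc check assumes a Linux host or pod):

# In a shell, the double quotes are consumed by the shell itself,
# so the notifier sees: critical,warning,info,none
./snmp_notifier --alert.severities="critical,warning,info,none"

# In a Kubernetes command/args list there is no shell, so quotes written
# inside the YAML string are passed through literally:
#   - '--alert.severities="critical,warning,info,none"'
# and the notifier sees: "critical,warning,info,none"

# To verify what a running process actually received as arguments:
tr '\0' '\n' < /proc/$(pgrep -f snmp_notifier)/cmdline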
Closing, as there are no more answers and a solution has been found.
What did you do? I am using the Prometheus, Grafana, and Alertmanager stack to monitor my EDB Postgres installation on a bare-metal k8s cluster.
I am receiving alerts in Alertmanager with the below configuration.
In the Alertmanager URL:
Now I want to receive those alerts on an SNMP trap server, so I have installed snmp_notifier with the latest image. SNMP notifier command line:
I have added some custom severities as per my PrometheusRule setup.
In --snmp.destination I added the IP of the snmp-server service, which I installed in the same namespace using your example https://github.com/maxwo/snmp_notifier/blob/main/scripts/kubernetes/snmp-server.yaml without any change to the given YAML. In the snmp_notifier pod log I am getting 200 in response, though the request header is not shown.
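For reference, the service IP can be looked up like this (a sketch, assuming the Service is named snmp-server as in the linked manifest):

kubectl get svc snmp-server -o jsonpath='{.spec.clusterIP}'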
But nothing is coming into the snmp-server.
kubectl logs of snmp-server:
What did you expect to see? I am expecting the trap details to be shown in the snmp-server logs, and also in the snmp_notifier logs.
Environment: bare-metal K8s cluster, Kubernetes version 1.25
System information:
Linux 4.18.0-372.9.1.el8.x86_64 x86_64
SNMP notifier version:
1.4.0
Alertmanager version:
0.23.0
Alertmanager command line: