Closed: ImranJMughal closed this issue 7 years ago.
Can you post the entire output you see on the shim (the payload in addition to the 405)?
I don't see any payloads, just 405 logs. Here's a sample:
2017-03-16 12:13:01,550 INFO 10.90.17.229 - - [16/Mar/2017 12:13:01] "POST /endpoint/slack/103cbeb7-31cf-4081-8610-ba8f4c156faf HTTP/1.1" 405 -
2017-03-16 12:13:01,559 INFO 10.90.17.229 - - [16/Mar/2017 12:13:01] "POST /endpoint/slack/aca05696-5fd4-4c69-bb39-3a4d4833ecd4 HTTP/1.1" 405 -
2017-03-16 12:13:01,564 INFO 10.90.17.229 - - [16/Mar/2017 12:13:01] "POST /endpoint/slack/6721d4ad-5537-48ff-bb09-538062e16845 HTTP/1.1" 405 -
2017-03-16 12:13:01,564 INFO 10.90.17.229 - - [16/Mar/2017 12:13:01] "POST /endpoint/slack/f71b8a0e-74f1-4ddf-990c-fd55579b1cd1 HTTP/1.1" 405 -
2017-03-16 12:13:01,569 INFO 10.90.17.229 - - [16/Mar/2017 12:13:01] "POST /endpoint/slack/679b86f0-34d9-4f75-bbf6-84d70e75064f HTTP/1.1" 405 -
2017-03-16 12:16:51,611 INFO 10.90.17.229 - - [16/Mar/2017 12:16:51] "POST /endpoint/slack/f1fbc6db-55c1-48ac-8e6d-98ca34d9cba4 HTTP/1.1" 405 -
2017-03-16 12:19:31,637 INFO 10.90.17.229 - - [16/Mar/2017 12:19:31] "POST /endpoint/slack/dcba950a-c116-44e5-bd4e-008f3284e530 HTTP/1.1" 405 -
Can I increase the logging setting?
I do see a payload for the OK POSTs (200s):
2017-03-16 12:21:36,710 INFO Parsed={'status': u'CANCELED', 'criticality': u'ALERT_CRITICALITY_LEVEL_CRITICAL', 'editurl': '', 'Messages': '', 'subType': u'ALERT_SUBTYPE_AVAILABILITY_PROBLEM', 'Health': 4, 'AlertName': u'Notification event', 'resourceName': u'annsxman01', 'moreinfo': u'Notification event\n\n30062', 'hookName': 'vRealize Operations Manager', 'color': 'green', 'icon': 'http://blogs.vmware.com/management/files/2016/09/vrops-256.png', 'info': u'30062', 'Risk': 1, 'url': '', 'fields': [{'content': '4', 'name': 'Health'}, {'content': '1', 'name': 'Risk'}, {'content': '1', 'name': 'Efficiency'}, {'content': u'annsxman01', 'name': 'Resouce Name'}, {'content': u'VMWARE', 'name': 'Adapter Kind'}, {'content': u'ALERT_TYPE_APPLICATION_PROBLEM', 'name': 'Type'}, {'content': u'ALERT_SUBTYPE_AVAILABILITY_PROBLEM', 'name': 'Sub Type'}], 'Efficiency': 1, 'adapterKind': u'VMWARE', 'type': u'ALERT_TYPE_APPLICATION_PROBLEM'}
2017-03-16 12:21:36,712 INFO URL=https://hooks.slack.com/services//xxxx/xxxx/xxxx
2017-03-16 12:21:36,712 INFO Headers={'Content-type': 'application/json'}
2017-03-16 12:21:36,712 INFO Body={"attachments": [{"pretext": "Notification event\n\n30062"}, {"color": "info", "fallback": "Alert details", "fields": [{"short": true, "title": "Health", "value": "4"}, {"short": true, "title": "Risk", "value": "1"}, {"short": true, "title": "Efficiency", "value": "1"}, {"short": true, "title": "Resouce Name", "value": "annsxman01"}, {"short": true, "title": "Adapter Kind", "value": "VMWARE"}, {"short": false, "title": "Type", "value": "ALERT_TYPE_APPLICATION_PROBLEM"}, {"short": false, "title": "Sub Type", "value": "ALERT_SUBTYPE_AVAILABILITY_PROBLEM"}], "text": "Alert details"}, {"color": "danger", "fields": [{}]}], "icon_url": "http://blogs.vmware.com/management/files/2016/09/vrops-256.png", "username": "vRealize Operations Manager"}
2017-03-16 12:21:36,713 INFO Check=True
2017-03-16 12:21:36,714 DEBUG Starting new HTTPS connection (1): hooks.slack.com
2017-03-16 12:21:37,135 DEBUG https://hooks.slack.com:443 "POST /services/xxxx/xxxx/xxxx HTTP/1.1" 200 22
(I've altered the Slack URL to xxx/xxx/xxx.)
Change vROps to send to /endpoint/test and then paste the output here
This is all I get:
2017-03-16 17:17:35,316 INFO 10.90.17.229 - - [16/Mar/2017 17:17:35] "POST /endpoint/test/00b658e9-79b6-4e6f-97f7-ecf602703c22 HTTP/1.1" 405 -
2017-03-16 17:17:35,316 INFO 10.90.17.229 - - [16/Mar/2017 17:17:35] "POST /endpoint/test/00b658e9-79b6-4e6f-97f7-ecf602703c22 HTTP/1.1" 405 -
2017-03-16 17:17:35,318 INFO 10.90.17.229 - - [16/Mar/2017 17:17:35] "POST /endpoint/test/c2464add-e15d-409f-a532-e566468ece2c HTTP/1.1" 405 -
2017-03-16 17:17:35,318 INFO 10.90.17.229 - - [16/Mar/2017 17:17:35] "POST /endpoint/test/c2464add-e15d-409f-a532-e566468ece2c HTTP/1.1" 405 -
2017-03-16 17:17:35,325 INFO 10.90.17.229 - - [16/Mar/2017 17:17:35] "POST /endpoint/test/3afd6aef-7990-4ff9-a984-1e2876ee3d87 HTTP/1.1" 405 -
2017-03-16 17:17:35,325 INFO 10.90.17.229 - - [16/Mar/2017 17:17:35] "POST /endpoint/test/3afd6aef-7990-4ff9-a984-1e2876ee3d87 HTTP/1.1" 405 -
Sorry about that; edit loginsightwebhookshim/__init__.py and change PUT to POST on line 229, then try again. I will address this as well once we figure out what is going on.
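For context, a minimal sketch of the kind of change being described, assuming a generic Flask test endpoint (the handler name and body below are illustrative, not copied from the repo):

```python
from flask import Flask, request

app = Flask(__name__)

# Hypothetical reconstruction of the /endpoint/test route; the real shim's
# handler in __init__.py differs. The fix is changing methods=['PUT'] to
# methods=['POST'] so vROps' POST requests are no longer rejected with a 405
# before the payload can be logged.
@app.route("/endpoint/test", methods=['POST'])
@app.route("/endpoint/test/<ALERTID>", methods=['POST'])
def test(ALERTID=None):
    app.logger.info(request.get_data())  # log the raw incoming payload
    return "OK", 200
```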
Sorry for the delay... this is what I get when I change the __init__.py file:
2017-03-21 08:47:50,107 INFO {"updateDate":1490086066310,"resourceId":"3d39fd26-5be4-44c4-b442-c2c5a40f7c6c","adapterKind":"NSX","Health":1,"impact":"risk","criticality":"ALERT_CRITICALITY_LEVEL_IMMEDIATE","Risk":3,"resourceName":"10.90.17.222","type":"ALERT_TYPE_NETWORK_PROBLEM","resourceKind":"ControllerCluster","alertName":"ET-NSXControllerClusterNodesTooFew","Efficiency":1,"subType":"ALERT_SUBTYPE_AVAILABILITY_PROBLEM","alertId":"94da9e63-caf4-4806-80ce-74a1e943bdfa","startDate":1490086066310,"info":"There are fewer than three controllers communicating with the NSX Manager, which may cause provisioning operations to fail","status":"ACTIVE"}
2017-03-21 08:47:50,107 INFO {"updateDate":1490086066310,"resourceId":"3d39fd26-5be4-44c4-b442-c2c5a40f7c6c","adapterKind":"NSX","Health":1,"impact":"risk","criticality":"ALERT_CRITICALITY_LEVEL_IMMEDIATE","Risk":3,"resourceName":"10.90.17.222","type":"ALERT_TYPE_NETWORK_PROBLEM","resourceKind":"ControllerCluster","alertName":"ET-NSXControllerClusterNodesTooFew","Efficiency":1,"subType":"ALERT_SUBTYPE_AVAILABILITY_PROBLEM","alertId":"94da9e63-caf4-4806-80ce-74a1e943bdfa","startDate":1490086066310,"info":"There are fewer than three controllers communicating with the NSX Manager, which may cause provisioning operations to fail","status":"ACTIVE"}
2017-03-21 08:47:50,109 INFO 10.90.17.229 - - [21/Mar/2017 08:47:50] "POST /endpoint/test/94da9e63-caf4-4806-80ce-74a1e943bdfa HTTP/1.1" 200 -
2017-03-21 08:47:50,109 INFO 10.90.17.229 - - [21/Mar/2017 08:47:50] "POST /endpoint/test/94da9e63-caf4-4806-80ce-74a1e943bdfa HTTP/1.1" 200 -
2017-03-21 08:47:50,110 INFO {"updateDate":1490086066310,"resourceId":"3279c128-9d4c-4ede-a3ea-6bae080faebe","adapterKind":"NSX","Health":1,"impact":"risk","criticality":"ALERT_CRITICALITY_LEVEL_CRITICAL","Risk":4,"resourceName":"10.90.17.222","type":"ALERT_TYPE_NETWORK_PROBLEM","resourceKind":"Manager","alertName":"ET-NSXManagerHardeningViolation","Efficiency":1,"subType":"ALERT_SUBTYPE_COMPLIANCE_PROBLEM","alertId":"0ab6e837-4b92-4ae1-88d0-8349fb33c1b4","startDate":1490086066310,"info":"The NSX Security Hardening Guide provide prescriptive guidance for customers on how to deploy and operate VMware products in a secure manner.","status":"ACTIVE"}
2017-03-21 08:47:50,110 INFO {"updateDate":1490086066310,"resourceId":"3279c128-9d4c-4ede-a3ea-6bae080faebe","adapterKind":"NSX","Health":1,"impact":"risk","criticality":"ALERT_CRITICALITY_LEVEL_CRITICAL","Risk":4,"resourceName":"10.90.17.222","type":"ALERT_TYPE_NETWORK_PROBLEM","resourceKind":"Manager","alertName":"ET-NSXManagerHardeningViolation","Efficiency":1,"subType":"ALERT_SUBTYPE_COMPLIANCE_PROBLEM","alertId":"0ab6e837-4b92-4ae1-88d0-8349fb33c1b4","startDate":1490086066310,"info":"The NSX Security Hardening Guide provide prescriptive guidance for customers on how to deploy and operate VMware products in a secure manner.","status":"ACTIVE"}
2017-03-21 08:47:50,111 INFO 10.90.17.229 - - [21/Mar/2017 08:47:50] "POST /endpoint/test/0ab6e837-4b92-4ae1-88d0-8349fb33c1b4 HTTP/1.1" 200 -
2017-03-21 08:47:50,111 INFO 10.90.17.229 - - [21/Mar/2017 08:47:50] "POST /endpoint/test/0ab6e837-4b92-4ae1-88d0-8349fb33c1b4 HTTP/1.1" 200 -
2017-03-21 08:47:50,139 INFO {"updateDate":1490086066310,"resourceId":"4e05df4c-30d4-474e-9414-a98c561729ca","adapterKind":"NSX","Health":4,"impact":"health","criticality":"ALERT_CRITICALITY_LEVEL_CRITICAL","Risk":1,"resourceName":"NSX-CN002","type":"ALERT_TYPE_NETWORK_PROBLEM","resourceKind":"Controller","alertName":"ET-NSXManagerControllerDown","Efficiency":1,"subType":"ALERT_SUBTYPE_AVAILABILITY_PROBLEM","alertId":"df65f875-593e-4c65-ae91-4037e4e8f42d","startDate":1490086066310,"info":"The NSX Controller is not communicating with the Manager","status":"ACTIVE"}
2017-03-21 08:47:50,139 INFO {"updateDate":1490086066310,"resourceId":"4e05df4c-30d4-474e-9414-a98c561729ca","adapterKind":"NSX","Health":4,"impact":"health","criticality":"ALERT_CRITICALITY_LEVEL_CRITICAL","Risk":1,"resourceName":"NSX-CN002","type":"ALERT_TYPE_NETWORK_PROBLEM","resourceKind":"Controller","alertName":"ET-NSXManagerControllerDown","Efficiency":1,"subType":"ALERT_SUBTYPE_AVAILABILITY_PROBLEM","alertId":"df65f875-593e-4c65-ae91-4037e4e8f42d","startDate":1490086066310,"info":"The NSX Controller is not communicating with the Manager","status":"ACTIVE"}
2017-03-21 08:47:50,140 INFO 10.90.17.229 - - [21/Mar/2017 08:47:50] "POST /endpoint/test/df65f875-593e-4c65-ae91-4037e4e8f42d HTTP/1.1" 200 -
2017-03-21 08:47:50,140 INFO 10.90.17.229 - - [21/Mar/2017 08:47:50] "POST /endpoint/test/df65f875-593e-4c65-ae91-4037e4e8f42d HTTP/1.1" 200 -
This pull request should fix your issue: https://github.com/vmw-loginsight/webhook-shims/pull/29
I still get POST 405 errors when an alert is triggered in vROps. When the alarm clears, I do get the alert into Slack with a 200 OK message.
2017-03-29 10:34:08,013 INFO 10.139.54.54 - - [29/Mar/2017 10:34:08] "PUT /endpoint/slack/4b8171a8-d014-4f03-8e6c-7bb3b04924fd HTTP/1.1" 200 -
2017-03-29 10:34:13,678 INFO 10.139.54.54 - - [29/Mar/2017 10:34:13] "POST /endpoint/slack/d2bc1520-b5f3-428b-bbd5-3a7db6400580 HTTP/1.1" 405 -
The 405 error uses a POST, and the 200 a PUT.
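That mismatch is straightforward to reproduce against a running shim; a quick check with Python requests (the host, port, and alert ID below are placeholders):

```python
import requests

# Placeholder shim address and alert ID -- substitute your own deployment's.
URL = "http://shim.example.com:5001/endpoint/slack/00000000-0000-0000-0000-000000000000"
BODY = {"alertId": "00000000-0000-0000-0000-000000000000", "status": "ACTIVE"}

# vROps POSTs new and updated alerts; a PUT-only route rejects these with 405.
print(requests.post(URL, json=BODY).status_code)  # -> 405

# vROps PUTs cancellations; the same route accepts these with 200.
print(requests.put(URL, json=BODY).status_code)   # -> 200
```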
Pull request #29 was merged today -- did you test with it?
I've tested with the branched files - let me retest by re-creating a new instance on a fresh VM. I'll get back to you asap
Hi, I tried the shim. It's awesome.
Anyway, I tested the latest version, but it is still not working. I use Slack. It works for Log Insight, and for vROps only for canceled alerts.
If I change the outbound setting to /endpoint/test:
2017-04-01 16:33:04,840 INFO {"updateDate":1491064378259,"resourceId":"4b5fb071-55e0-48f9-a734-4fba1ec8cf95","adapterKind":"VMWARE","Health":4,"impact":"health","criticality":"ALERT_CRITICALITY_LEVEL_CRITICAL","Risk":1,"resourceName":"KD-vROPS01","type":"ALERT_TYPE_APPLICATION_PROBLEM","resourceKind":"VirtualMachine","alertName":"[TEST] CPU 사용율 10% 이상","Efficiency":1,"subType":"ALERT_SUBTYPE_PERFORMANCE_PROBLEM","alertId":"5dce8e25-2e28-4bfe-81fe-93771256420d","startDate":1491064378259,"info":"CPU 사용율이 높아요~","status":"ACTIVE"}
2017-04-01 16:33:04,847 INFO 10.10.0.36 - - [01/Apr/2017 16:33:04] "POST /endpoint/test/5dce8e25-2e28-4bfe-81fe-93771256420d HTTP/1.1" 200 -
Then, when I use Slack, I clicked TEST:
2017-04-01 16:36:45,155 INFO 10.10.0.36 - - [01/Apr/2017 16:36:45] "POST /endpoint/slack/test HTTP/1.1" 405 -
2017-04-01 16:36:45,159 INFO Parsed={'status': u'ACTIVE', 'criticality': u'ALERT_CRITICALITY_LEVEL_INFO', 'editurl': '', 'Messages': '', 'subType': u'ALERT_SUBTYPE_SMART_KPI_BREACH', 'Health': 0, 'AlertName': u'', 'resourceName': u'test', 'moreinfo': u'\n\ntest', 'hookName': 'vRealize Operations Manager', 'color': 'gray', 'icon': 'http://blogs.vmware.com/management/files/2016/09/vrops-256.png', 'info': u'test', 'Risk': 0, 'url': '', 'fields': [{'content': u'test', 'name': 'Object Name'}, {'content': u'ACTIVE', 'name': 'Status'}], 'Efficiency': 0, 'adapterKind': u'test', 'type': u'ALERT_TYPE_TIER'}
2017-04-01 16:36:45,166 INFO URL=https://hooks.slack.com/services/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
2017-04-01 16:36:45,168 INFO Headers={'Content-type': 'application/json'}
2017-04-01 16:36:45,169 INFO Body={"attachments": [{"pretext": "\n\ntest"}, {"color": "info", "fallback": "Alert details", "fields": [{"short": true, "title": "Object Name", "value": "test"}, {"short": true, "title": "Status", "value": "ACTIVE"}], "text": ""}], "icon_url": "http://blogs.vmware.com/management/files/2016/09/vrops-256.png", "username": "vRealize Operations Manager"}
2017-04-01 16:36:45,173 INFO Check=True
2017-04-01 16:36:45,177 DEBUG Starting new HTTPS connection (1): hooks.slack.com
2017-04-01 16:36:45,904 DEBUG https://hooks.slack.com:443 "POST /services/T3U9PFQ0Z/B4RRRUCG6/AkInetJCjcMJBKhg02anO2sk HTTP/1.1" 200 22
2017-04-01 16:36:45,911 INFO 10.10.0.36 - - [01/Apr/2017 16:36:45] "PUT /endpoint/slack/test HTTP/1.1" 200 -
And I canceled the alert:
2017-04-01 16:41:05,001 INFO Parsed={'status': u'CANCELED', 'criticality': u'ALERT_CRITICALITY_LEVEL_CRITICAL', 'editurl': '', 'Messages': '', 'subType': u'ALERT_SUBTYPE_PERFORMANCE_PROBLEM', 'Health': 4, 'AlertName': u'[TEST] CPU \uc0ac\uc6a9\uc728 10% \uc774\uc0c1', 'resourceName': u'VDI-dyheo-win7x64', 'moreinfo': u'[TEST] CPU \uc0ac\uc6a9\uc728 10% \uc774\uc0c1\n\nCPU \uc0ac\uc6a9\uc728\uc774 \ub192\uc544\uc694~', 'hookName': 'vRealize Operations Manager', 'color': 'green', 'icon': 'http://blogs.vmware.com/management/files/2016/09/vrops-256.png', 'info': u'CPU \uc0ac\uc6a9\uc728\uc774 \ub192\uc544\uc694~', 'Risk': 4, 'url': '', 'fields': [{'content': u'VDI-dyheo-win7x64', 'name': 'Object Name'}, {'content': u'CANCELED', 'name': 'Status'}], 'Efficiency': 1, 'adapterKind': u'VMWARE', 'type': u'ALERT_TYPE_APPLICATION_PROBLEM'}
2017-04-01 16:41:05,012 INFO URL=https://hooks.slack.com/services/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
2017-04-01 16:41:05,015 INFO Headers={'Content-type': 'application/json'}
2017-04-01 16:41:05,017 INFO Body={"attachments": [{"pretext": "[TEST] CPU \uc0ac\uc6a9\uc728 10% \uc774\uc0c1\n\nCPU \uc0ac\uc6a9\uc728\uc774 \ub192\uc544\uc694~"}, {"color": "good", "fallback": "Alert details", "fields": [{"short": true, "title": "Object Name", "value": "VDI-dyheo-win7x64"}, {"short": true, "title": "Status", "value": "CANCELED"}], "text": ""}], "icon_url": "http://blogs.vmware.com/management/files/2016/09/vrops-256.png", "username": "vRealize Operations Manager"}
2017-04-01 16:41:05,025 INFO Check=True
2017-04-01 16:41:05,029 DEBUG Starting new HTTPS connection (1): hooks.slack.com
2017-04-01 16:41:05,449 DEBUG https://hooks.slack.com:443 "POST /services/T3U9PFQ0Z/B4RRRUCG6/AkInetJCjcMJBKhg02anO2sk HTTP/1.1" 200 22
2017-04-01 16:41:05,454 INFO 10.10.0.36 - - [01/Apr/2017 16:41:05] "PUT /endpoint/slack/4861a461-7413-467f-974d-5e58b5d31ef2 HTTP/1.1" 200 -
Finally, a vROps ACTIVE alert is not working:
2017-04-01 16:43:00,843 INFO 10.10.0.36 - - [01/Apr/2017 16:43:00] "POST /endpoint/slack/57a400fa-5675-4b44-ba21-6d3efc4a4b14 HTTP/1.1" 405 -
I changed PUT to POST in slack.py:

@app.route("/endpoint/slack", methods=['POST'])
@app.route("/endpoint/slack/<ALERTID>", methods=['POST'])
This time the results are different. Log Insight works in both cases.
vROps ACTIVE is working:
2017-04-02 00:53:02,736 INFO Parsed={'status': u'ACTIVE', 'criticality': u'ALERT_CRITICALITY_LEVEL_CRITICAL', 'editurl': '', 'Messages': '', 'subType': u'ALERT_SUBTYPE_PERFORMANCE_PROBLEM', 'Health': 4, 'AlertName': u'[TEST] CPU \uc0ac\uc6a9\uc728 10% \uc774\uc0c1', 'resourceName': u'KD-VDP', 'moreinfo': u'[TEST] CPU \uc0ac\uc6a9\uc728 10% \uc774\uc0c1\n\nCPU \uc0ac\uc6a9\uc728\uc774 \ub192\uc544\uc694~', 'hookName': 'vRealize Operations Manager', 'color': 'red', 'icon': 'http://blogs.vmware.com/management/files/2016/09/vrops-256.png', 'info': u'CPU \uc0ac\uc6a9\uc728\uc774 \ub192\uc544\uc694~', 'Risk': 1, 'url': '', 'fields': [{'content': u'KD-VDP', 'name': 'Object Name'}, {'content': u'ACTIVE', 'name': 'Status'}], 'Efficiency': 1, 'adapterKind': u'VMWARE', 'type': u'ALERT_TYPE_APPLICATION_PROBLEM'}
2017-04-02 00:53:02,747 INFO URL=https://hooks.slack.com/services/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
2017-04-02 00:53:02,749 INFO Headers={'Content-type': 'application/json'}
2017-04-02 00:53:02,750 INFO Body={"attachments": [{"pretext": "[TEST] CPU \uc0ac\uc6a9\uc728 10% \uc774\uc0c1\n\nCPU \uc0ac\uc6a9\uc728\uc774 \ub192\uc544\uc694~"}, {"color": "danger", "fallback": "Alert details", "fields": [{"short": true, "title": "Object Name", "value": "KD-VDP"}, {"short": true, "title": "Status", "value": "ACTIVE"}], "text": ""}], "icon_url": "http://blogs.vmware.com/management/files/2016/09/vrops-256.png", "username": "vRealize Operations Manager"}
2017-04-02 00:53:02,756 INFO Check=True
2017-04-02 00:53:02,759 DEBUG Starting new HTTPS connection (1): hooks.slack.com
2017-04-02 00:53:03,469 DEBUG https://hooks.slack.com:443 "POST /services/T3U9PFQ0Z/B4RRRUCG6/AkInetJCjcMJBKhg02anO2sk HTTP/1.1" 200 22
2017-04-02 00:53:03,474 INFO 10.10.0.36 - - [02/Apr/2017 00:53:03] "POST /endpoint/slack/a9f30ce2-bb58-4d75-9b11-ae2630e825c3 HTTP/1.1" 200 -
vROps CANCELED is not working:
2017-04-02 00:58:02,538 INFO 10.10.0.36 - - [02/Apr/2017 00:58:02] "PUT /endpoint/slack/68eb3d82-3fee-44fa-9d7a-44b86fe269ae HTTP/1.1" 405 -
So I tried a two-way approach. I modified slack.py:
@app.route("/endpoint/slack", methods=['POST'])
@app.route("/endpoint/slacka/<ALERTID
>", methods=['POST'])
@app.route("/endpoint/slackc/<ALERTID
>", methods=['PUT'])
Then the outbound setting is two-way. The first URL is /endpoint/slacka, for active alerts (alert status new or updated). The second URL is /endpoint/slackc, for canceled alerts (alert status cancelled).
It is currently working, but I think it is just a workaround.
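Spelled out as a self-contained sketch, with stand-in handler bodies (the real slack.py delegates to shared parsing code that is omitted here), that workaround looks roughly like this:

```python
from flask import Flask, request

app = Flask(__name__)

def forward_to_slack(alertid):
    # Stand-in for the shim's real parse-and-forward logic.
    app.logger.info("forwarding alert %s: %s", alertid, request.get_data())
    return "OK", 200

# vROps POSTs new and updated alerts to the first notification URL...
@app.route("/endpoint/slacka/<ALERTID>", methods=['POST'])
def slack_active(ALERTID):
    return forward_to_slack(ALERTID)

# ...and PUTs cancellations to the second, so each route needs only one method,
# at the cost of configuring two outbound notification rules in vROps.
@app.route("/endpoint/slackc/<ALERTID>", methods=['PUT'])
def slack_cancel(ALERTID):
    return forward_to_slack(ALERTID)
```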
OK, so the issue is that vROps requires both POST and PUT -- this was unclear from the documentation, but makes sense given the differences between a create, an update, and a cancel. The implementation was harder than expected, so apologies for the delay, but a pull request is up that should address this issue: https://github.com/vmw-loginsight/webhook-shims/pull/31. Note that the change affects the routes, so be sure to review it before updating once it is merged.
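For reference, accepting both verbs on a single Flask route is a small change in itself; a minimal sketch, as an illustration only rather than the literal diff from that pull request:

```python
from flask import Flask, request

app = Flask(__name__)

# One route accepting both verbs covers the full vROps alert lifecycle:
# POST for create/update and PUT for cancel, with no 405 in either case.
@app.route("/endpoint/slack/<ALERTID>", methods=['POST', 'PUT'])
def slack(ALERTID):
    app.logger.info("%s for alert %s: %s", request.method,
                    ALERTID, request.get_data())
    return "OK", 200
```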
Pull request merged -- please test and confirm.
Marking as resolved -- re-open if you are still experiencing issues.
Hi,
We seem to have an issue where vROps alerts don't post into a Slack channel when they trigger, but we do get an alert when the alarm clears, via the same outbound notification. In the webhook-shim logs I see a POST with a 405 HTTP status code for the failed triggers from vROps, and a 200 from the cleared alarms, which do reach Slack.
Thanks