wawastein / zabbix-cloudwatch

Cloudwatch integration for Zabbix 3.x
GNU General Public License v3.0

Support for Application Load Balancers (elbv2) #16

Closed diegosainz closed 5 years ago

diegosainz commented 5 years ago

This PR adds basic support for monitoring Application Load Balancers (ALB or ELBv2) as requested by issue #5

Hi,

First of all thank you for sharing your work with the community. Your Zabbix scripts were very useful today to me to get everything up and running quickly. I needed also the ALB support so I added a new discovery class. I'm not a Python developer: feel free to ask for any style changes or a different approach if necessary.

An application load balancer has one or more "Target groups" and each target group has one or more targets.

AWS docs: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html

Note: The Zabbix template is from Zabbix v3.4 (I don't have a Zabbix 3.0 instance available to add the new items without changing the version)

New Application: ELBv2

Discovery for each target in the application load balancer:

  1. {#TARGET_GROUP_LOAD_BALANCER_NAME}
  2. {#TARGET_GROUP_PORT}
  3. {#TARGET_GROUP_LOAD_BALANCER_ARN}
  4. {#TARGET_GROUP_VPC_ID}
  5. {#TARGET_GROUP_LOAD_BALANCER_DNS_NAME}
  6. {#TARGET_GROUP_ARN}
  7. {#TARGET_GROUP_NAME}
  8. {#TARGET_COUNT}

Item prototypes:

  1. ELBv2: {#TARGET_GROUP_LOAD_BALANCER_NAME} target group {#TARGET_GROUP_NAME} total targets
    • Calculated: instances[{#TARGET_GROUP_LOAD_BALANCER_NAME}][{#TARGET_GROUP_NAME}]
  2. ELBv2: {#TARGET_GROUP_LOAD_BALANCER_NAME} {#TARGET_GROUP_NAME} UnHealthyHostCount (Formula: {#TARGET_COUNT})
    • External check: cloudwatch.metric[120,UnHealthyHostCount,AWS/ApplicationELB,Maximum,{$REGION},"LoadBalancer={#TARGET_GROUP_LOAD_BALANCER_ARN},TargetGroup={#TARGET_GROUP_ARN}",{$ACCOUNT}]

Trigger prototypes:

  1. ELBv2: No data points for {#TARGET_GROUP_LOAD_BALANCER_NAME} target {#TARGET_GROUP_NAME}
  2. ELBv2: {#TARGET_GROUP_LOAD_BALANCER_NAME} target group {#TARGET_GROUP_NAME} has no active instances
  3. ELBv2: {#TARGET_GROUP_LOAD_BALANCER_NAME} target group {#TARGET_GROUP_NAME} unhealthy nodes

Sample discovery output:

./aws_discovery.py --service elbv2 --region us-west-2 --account zabbix
{
  "data": [
    {
      "{#TARGET_GROUP_LOAD_BALANCER_NAME}": "my-alb",
      "{#TARGET_GROUP_PORT}": 80,
      "{#TARGET_GROUP_LOAD_BALANCER_ARN}": "app/my-alb/b44d85146ca026e4",
      "{#TARGET_GROUP_VPC_ID}": "vpc-2f63274a",
      "{#TARGET_GROUP_LOAD_BALANCER_DNS_NAME}": "my-alb-1547203952.us-west-2.elb.amazonaws.com",
      "{#TARGET_GROUP_ARN}": "targetgroup/my_target_group/991198ea77cd3df1",
      "{#TARGET_GROUP_NAME}": "my_target_group",
      "{#TARGET_COUNT}": 2
    }
  ]
}
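Not part of the PR's actual code, but the sample output above can be reproduced by a pure function over boto3-shaped `describe_load_balancers` / `describe_target_groups` responses. A minimal sketch (the function name and the `target_counts` argument are my own; the PR's discoverer would also call `describe_target_health` to count targets):

```python
def build_alb_lld(load_balancers, target_groups, target_counts):
    """Build Zabbix low-level discovery rows for ALB target groups.

    `load_balancers` / `target_groups` follow the shape of boto3's elbv2
    describe_load_balancers / describe_target_groups responses;
    `target_counts` maps a target group ARN to its number of targets.
    """
    lbs_by_arn = {lb["LoadBalancerArn"]: lb for lb in load_balancers}
    data = []
    for tg in target_groups:
        for lb_arn in tg["LoadBalancerArns"]:
            lb = lbs_by_arn[lb_arn]
            # CloudWatch dimensions want the short ARN suffix,
            # e.g. "app/my-alb/b44d85146ca026e4"
            short_lb_arn = lb_arn.split("loadbalancer/", 1)[-1]
            short_tg_arn = tg["TargetGroupArn"].split(":", 5)[-1]
            data.append({
                "{#TARGET_GROUP_LOAD_BALANCER_NAME}": lb["LoadBalancerName"],
                "{#TARGET_GROUP_PORT}": tg["Port"],
                "{#TARGET_GROUP_LOAD_BALANCER_ARN}": short_lb_arn,
                "{#TARGET_GROUP_VPC_ID}": tg["VpcId"],
                "{#TARGET_GROUP_LOAD_BALANCER_DNS_NAME}": lb["DNSName"],
                "{#TARGET_GROUP_ARN}": short_tg_arn,
                "{#TARGET_GROUP_NAME}": tg["TargetGroupName"],
                "{#TARGET_COUNT}": target_counts[tg["TargetGroupArn"]],
            })
    return {"data": data}
```

Fed the sample ALB and target group from this thread, it yields the discovery JSON shown above.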

Example of an item generated by one of the discovery item prototypes:

cloudwatch.metric[120,UnHealthyHostCount,AWS/ApplicationELB,Maximum,{$REGION},"LoadBalancer=app/my-alb/b44d85146ca026e4,TargetGroup=my_target_group/my-alb-cluster01/124f7add7d583f11",{$ACCOUNT}]

Which would result in the following being executed:

./cloudwatch.metric --interval 120 --metric UnHealthyHostCount --namespace AWS/ApplicationELB --statistic Maximum --region us-west-2 --dimension "LoadBalancer=app/my-alb/b44d85146ca026e4,TargetGroup=targetgroup/my_target_group/991198ea77cd3df1" --account zabbix
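The mapping from item key to command line is purely positional. A sketch of that translation (my own helper, not the repo's code; the parameter order is assumed from the examples in this thread):

```python
import csv

def key_to_argv(item_key):
    """Translate a cloudwatch.metric[...] Zabbix item key into the
    argument vector the external check would run.  Positional order
    (interval, metric, namespace, statistic, region, dimension,
    account) is assumed from the examples in this thread."""
    body = item_key[item_key.index("[") + 1:item_key.rindex("]")]
    # The dimension field is quoted because it contains commas;
    # the csv module strips the quotes and keeps the field whole.
    fields = next(csv.reader([body]))
    names = ["interval", "metric", "namespace", "statistic",
             "region", "dimension", "account"]
    argv = ["./cloudwatch.metric"]
    for name, value in zip(names, fields):
        argv += ["--" + name, value]
    return argv
```

Note the quoting: without it, Zabbix would treat the comma inside the dimension as a key-parameter separator.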
wawastein commented 5 years ago

Hi @diegosainz, thank you for your contribution. I'm having a busy week, so I'll review later. Either way, I don't have infrastructure suitable for testing this, so I'll tag the current master as 1.0.0; after the merge I'll tag 1.1.0 and add a warning to the README to fall back to 1.0.0 if something breaks.

diegosainz commented 5 years ago

Thanks, @wawastein - no rush on my side. We are currently using it in production, so I think it should be fine.

nuxwin commented 4 years ago

@diegosainz

With Zabbix 4.4 we get a syntax error when importing the template (for the elbv2 item). Could you check it out?

diegosainz commented 4 years ago

Hi @nuxwin. This PR was for Zabbix 3.4. Sadly I don't have access to a Zabbix 4.4 instance, so I can't be of much help right now. You should open an issue providing as much detail as possible (the specific syntax error) and hope that somebody with Zabbix 4.4 can check it out.

If it's only the template you should be able to create a new template manually. Here are a few screenshots that may be of help:

(screenshots)

sherrerq commented 4 years ago

Hi @nuxwin, did you manage to monitor your ALB correctly? I have some problems with ALB metrics.

I can monitor HealthyHostCount, RequestCount and UnHealthyHostCount with this item prototype:

./cloudwatch.metric --interval 120 --metric UnHealthyHostCount --namespace AWS/ApplicationELB --statistic Maximum --region eu-west-1 --dimension "LoadBalancer=app/xxxx/xxxxxxxx,TargetGroup=targetgroup/xxxxxxx/xxxxxx" --account xxxxxxxxx

But I cannot get ActiveConnectionCount metrics. I get -1

./cloudwatch.metric --interval 120 --metric ActiveConnectionCount --namespace AWS/ApplicationELB --statistic Sum --region eu-west-1 --dimension LoadBalancer=/app/xxxxx/xxxxxxxxxxx --account xxxxxxxxx --debug 1

Any ideas?

Also, I cannot import the template because of the "Total targets" item. Any other idea for monitoring this metric? The calculated item (instances...) does not seem to work here:

Invalid key "instances[{#TARGET_GROUP_LOAD_BALANCER_NAME}][{#TARGET_GROUP_NAME}]" for item prototype "ELBv2: {#TARGET_GROUP_LOAD_BALANCER_NAME} {#TARGET_GROUP_NAME} Total targets" on "Cloudwatch Template": incorrect syntax near "[{#TARGET_GROUP_NAME}]".

(screenshot)

diegosainz commented 4 years ago

@sherrerq maybe there isn't enough data yet for the time interval? Testing on my side it works as expected:

./cloudwatch.metric --interval 600 --metric ActiveConnectionCount --namespace AWS/ApplicationELB --statistic Sum --region us-west-2 --dimension LoadBalancer=app/xxx/yy --account xxx

Result:

{u'Datapoints': [{u'Timestamp': datetime.datetime(2020, 1, 20, 18, 21, tzinfo=tzutc()), u'Sum': 1799.0, u'Unit': 'Count'}], 'ResponseMetadata': {'RetryAttempts': 0, 'HTTPStatusCode': 200, 'RequestId': '6bf25318-1cc6-4fd9-819b-a5bc3940deef', 'HTTPHeaders': {'x-amzn-requestid': '6bf25318-1cc6-4fd9-819b-a5bc3940deef', 'date': 'Mon, 20 Jan 2020 18:31:35 GMT', 'content-length': '497', 'content-type': 'text/xml'}}, u'Label': 'ActiveConnectionCount'}
1799.0

Regarding the template import, you're right. I can see there is a problem (at least importing into Zabbix 4) because the key contains multiple bracket pairs: metric[first][second], which should be a single pair: metric[first-second].

I've adjusted these values for myself (the key name and associated triggers). You can find the adjusted template here: https://pastebin.com/b4ikneYZ. I'm not sure if this is due to additional validation in Zabbix >= 4.
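The bracket fix is mechanical; a hypothetical helper that performs the same rewrite on a prototype key (my own sketch, not part of the template or the repo):

```python
import re

def collapse_key_brackets(key):
    """Rewrite metric[a][b]... into the single-parameter form
    metric[a-b-...], the form that imported cleanly into Zabbix 4
    in this thread."""
    match = re.match(r"^([^\[]+)((?:\[[^\]]*\])+)$", key)
    if not match:
        return key  # not a bracketed key, nothing to collapse
    name, params = match.groups()
    parts = re.findall(r"\[([^\]]*)\]", params)
    if len(parts) <= 1:
        return key  # already a single bracket pair
    return "%s[%s]" % (name, "-".join(parts))
```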

sherrerq commented 4 years ago

Hello @diegosainz ! Thank you very much. I was able to import this new item prototype after your changes.

Regarding ActiveConnectionCount, I am still getting -1. I do not know what I am doing wrong; I will check.

Another problem I have is that I will need to monitor an RDS Aurora cluster. In this case, the dimension should change from DBInstanceIdentifier to DBClusterIdentifier. So I should change rds.py or create a new one, because the current rds.py only works with DBInstanceIdentifier.

I don't know Python. Would you mind helping me? What changes would you make in rds.py? Is it enough to change only rds.py, or do I have to change more files?

This is my rds.py file: [root@salt-master discovery]# cat rds.py

#!/usr/bin/python

from basic_discovery import BasicDiscoverer

class Discoverer(BasicDiscoverer):
    def discovery(self, *args):
        response = self.client.describe_db_instances()
        data = list()
        for instance in response["DBInstances"]:
            storage_bytes = int(instance["AllocatedStorage"]) * pow(1024, 3)
            ldd = {
                "{#STORAGE}": storage_bytes,
                "{#RDS_ID}": instance["DBInstanceIdentifier"]
            }
            data.append(ldd)
        return data

I should add the following to this file, but I have no idea of Python:

response = self.client.describe_db_clusters()
data = list()
for cluster in response["DBClusters"]:
    storage_bytes = int(cluster["AllocatedStorage"]) * pow(1024, 3)
    ldd = {
        "{#STORAGE}": storage_bytes,
        "{#RDS_CLUSTER_ID}": cluster["DBClusterIdentifier"]
    }
    data.append(ldd)
return data

sherrerq commented 4 years ago

@diegosainz and @wawastein Hello again! I am sorry to be a pain, but I am still stuck. In the previous comment I explained my problem monitoring an RDS cluster. Now I need your help to monitor S3 buckets. ./aws_discovery.py --service s3 --region eu-west-1 --account xxxxxxxxx detects my buckets, returning {"data": [{"{#BUCKET_NAME}": "xxxxxxxx"},xxxxx .

The problem appears when I want to get metrics like BucketSizeBytes or NumberOfObjects. I always get the value -1. ./cloudwatch.metric --interval 600 --metric NumberOfObjects --namespace AWS/S3 --statistic Sum --region eu-west-1 --dimension "BucketName=xxxxxxxxxxxxx,StorageType=StandardStorage" --account xxxxxxxxxxxxxx

The strangest thing is that this performs well: aws cloudwatch get-metric-statistics --namespace AWS/S3 --start-time 2019-07-15T10:00:00 --end-time 2020-07-31T01:00:00 --period 86400 --statistics Average --region eu-west-1 --metric-name BucketSizeBytes --dimensions Name=BucketName,Value=xxxxxxxxx Name=StorageType,Value=StandardStorage

I do not know what is the problem.

I would appreciate your help

BR

diegosainz commented 4 years ago

Hi @sherrerq, I think we may be polluting this PR with unrelated comments, but hopefully this will point you in the right direction.

The -1 you are getting is most probably because there is no CloudWatch data for the time period you are specifying. Try a longer time period (it is also useful to graph the metric in the AWS CloudWatch console to verify that there is data).

In the case of S3 buckets I had to wait a couple of days before the data for new buckets came in, and also set the item prototype interval to 86400 seconds (24 hours) and the item period to 345600 (96 hours). For example, for the average bucket size item prototype:

cloudwatch.metric[345600,BucketSizeBytes,AWS/S3,Average,{$REGION},"BucketName={#BUCKET_NAME},StorageType=StandardStorage",{$ACCOUNT}]

This is related to how CloudWatch aggregates the data.
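In other words, the first key parameter is a lookback window, and for daily S3 storage metrics that window must be at least a day wide. A sketch of the window arithmetic, assuming the script queries from `now - interval` to `now` (my own illustration, not the repo's code):

```python
from datetime import datetime, timedelta

def query_window(interval_seconds, now=None):
    """Start/end times for a CloudWatch GetMetricStatistics call with
    a lookback of interval_seconds.  S3 storage metrics are emitted
    roughly once per day, so a window shorter than 86400 s often
    contains zero datapoints, which this integration reports as -1."""
    end = now if now is not None else datetime.utcnow()
    start = end - timedelta(seconds=interval_seconds)
    return start, end

def covers_daily_datapoint(interval_seconds):
    """True when the window is wide enough to always include at least
    one once-per-day datapoint."""
    return interval_seconds >= 86400
```

This is why a 600 s interval returns -1 for BucketSizeBytes while the 345600 s (96 h) window above works reliably.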

To monitor Aurora RDS clusters you are on the right path. You don't really need to know Python to modify the script (just the basic syntax details). The current rds.py script will correctly discover the instances, but if you also want the data at cluster level, the following should work:

#!/usr/bin/python
from basic_discovery import BasicDiscoverer

class Discoverer(BasicDiscoverer):
    def discovery(self, *args):
        response = self.client.describe_db_clusters()
        data = list()

        for cluster in response["DBClusters"]:

            storage_bytes = int(cluster["AllocatedStorage"]) * pow(1024, 3)
            for instance in cluster["DBClusterMembers"]:

                data.append({
                    "{#CLUSTER_ID}": cluster["DBClusterIdentifier"],
                    "{#STORAGE}": storage_bytes,
                    "{#MEMBER_ID}": instance["DBInstanceIdentifier"]
                 })

        return data

This will output the following:

{
  "data": [
    {
      "{#CLUSTER_ID}": "myCluster01",
      "{#MEMBER_ID}": "myInstance01",
      "{#STORAGE}": 1073741824
    },
    {
      "{#CLUSTER_ID}": "myCluster01",
      "{#MEMBER_ID}": "myInstance02",
      "{#STORAGE}": 1073741824
    }
  ]
}
sherrerq commented 4 years ago

Dear @diegosainz, your help is being vital for me. Thank you very much. It is possible that I am polluting this PR. What would be the best channel to continue our conversation? What do you prefer? Regarding your rds.py script: it looks very nice and performs very well, but the problem is that I cannot rename it to rds-bak.py (for example), because aws_discovery.py only recognizes rds as a parameter. What I mean is that I need to monitor both kinds of RDS (instances and clusters). I suppose the solution is to create a single rds.py script where both are taken into account. Do you know what this script would look like?


[root@salt-master zabbix]# ./aws_discovery.py --service rds-bak --region eu-west-1 --account xxxxxx
Traceback (most recent call last):
  File "./aws_discovery.py", line 43, in <module>
    args.service, args.region)
  File "/usr/lib/zabbix/discovery/aws_client.py", line 15, in __init__
    region_name=region)
  File "/usr/lib/python2.7/site-packages/boto3/__init__.py", line 79, in client
    return _get_default_session().client(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/boto3/session.py", line 192, in client
    aws_session_token=aws_session_token, config=config)
  File "/usr/lib/python2.7/site-packages/botocore/session.py", line 835, in create_client
    client_config=config, api_version=api_version)
  File "/usr/lib/python2.7/site-packages/botocore/client.py", line 76, in create_client
    service_model = self._load_service_model(service_name, api_version)
  File "/usr/lib/python2.7/site-packages/botocore/client.py", line 114, in _load_service_model
    api_version=api_version)
  File "/usr/lib/python2.7/site-packages/botocore/loaders.py", line 132, in _wrapper
    data = func(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/botocore/loaders.py", line 378, in load_service_model
    known_service_names=', '.join(sorted(known_services)))
botocore.exceptions.UnknownServiceError: Unknown service: 'rds-bak'.
Valid service names are: accessanalyzer, acm, acm-pca, alexaforbusiness, amplify, apigateway, apigatewaymanagementapi, apigatewayv2, appconfig, application-autoscaling, application-insights, appmesh, appstream, appsync, athena, autoscaling, autoscaling-plans, backup, batch, budgets, ce, chime, cloud9, clouddirectory, cloudformation, cloudfront, cloudhsm, cloudhsmv2, cloudsearch, cloudsearchdomain, cloudtrail, cloudwatch, codebuild, codecommit, codedeploy, codeguru-reviewer, codeguruprofiler, codepipeline, codestar, codestar-connections, codestar-notifications, cognito-identity, cognito-idp, cognito-sync, comprehend, comprehendmedical, compute-optimizer, config, connect, connectparticipant, cur, dataexchange, datapipeline, datasync, dax, detective, devicefarm, directconnect, discovery, dlm, dms, docdb, ds, dynamodb, dynamodbstreams, ebs, ec2, ec2-instance-connect, ecr, ecs, efs, eks, elastic-inference, elasticache, elasticbeanstalk, elastictranscoder, elb, elbv2, emr, es, events, firehose, fms, forecast, forecastquery, frauddetector, fsx, gamelift, glacier, globalaccelerator, glue, greengrass, groundstation, guardduty, health, iam, imagebuilder, importexport, inspector, iot, iot-data, iot-jobs-data, iot1click-devices, iot1click-projects, iotanalytics, iotevents, iotevents-data, iotsecuretunneling, iotthingsgraph, kafka, kendra, kinesis, kinesis-video-archived-media, kinesis-video-media, kinesis-video-signaling, kinesisanalytics, kinesisanalyticsv2, kinesisvideo, kms, lakeformation, lambda, lex-models, lex-runtime, license-manager, lightsail, logs, machinelearning, macie, managedblockchain, marketplace-catalog, marketplace-entitlement, marketplacecommerceanalytics, mediaconnect, mediaconvert, medialive, mediapackage, mediapackage-vod, mediastore, mediastore-data, mediatailor, meteringmarketplace, mgh, migrationhub-config, mobile, mq, mturk, neptune, networkmanager, opsworks, opsworkscm, organizations, outposts, personalize, personalize-events, personalize-runtime, 
pi, pinpoint, pinpoint-email, pinpoint-sms-voice, polly, pricing, qldb, qldb-session, quicksight, ram, rds, rds-data, redshift, rekognition, resource-groups, resourcegroupstaggingapi, robomaker, route53, route53domains, route53resolver, s3, s3control, sagemaker, sagemaker-a2i-runtime, sagemaker-runtime, savingsplans, schemas, sdb, secretsmanager, securityhub, serverlessrepo, service-quotas, servicecatalog, servicediscovery, ses, sesv2, shield, signer, sms, sms-voice, snowball, sns, sqs, ssm, sso, sso-oidc, stepfunctions, storagegateway, sts, support, swf, textract, transcribe, transfer, translate, waf, waf-regional, wafv2, workdocs, worklink, workmail, workmailmessageflow, workspaces, xray

Best Regards

diegosainz commented 4 years ago

@sherrerq open a new issue describing what you need (something along the lines of "Support for RDS Clusters") and ping me there. I think the support, as you say, can be added to the same rds script. I have a couple of busy days ahead with meetings, but I'll try to work on something in the next couple of weeks and send the pull request to @wawastein.

sherrerq commented 4 years ago

Dear @diegosainz, I have followed your instructions and opened a new issue. I could not ping you from there. This is the issue. Thank you very much!

https://github.com/wawastein/zabbix-cloudwatch/issues/25#issue-555066560

sherrerq commented 4 years ago

Dear Diego,

How are things?

I made some progress with my AWS monitoring thanks to Mr Wawastein. For example, I modified rds.py to monitor DB instances and clusters at the same time. The file looks like this:

#!/usr/bin/python

from basic_discovery import BasicDiscoverer

class Discoverer(BasicDiscoverer):
    def discovery(self, *args):
        # Single instances first
        response = self.client.describe_db_instances()
        data = list()
        for instance in response["DBInstances"]:
            storage_bytes = int(instance["AllocatedStorage"]) * pow(1024, 3)
            ldd = {
                "{#STORAGE}": storage_bytes,
                "{#RDS_ID}": instance["DBInstanceIdentifier"]
            }
            data.append(ldd)

        # Clusters
        response = self.client.describe_db_clusters()
        for cluster in response["DBClusters"]:
            storage_bytes = int(cluster["AllocatedStorage"]) * pow(1024, 3)
            for instance in cluster["DBClusterMembers"]:
                data.append({
                    "{#CLUSTER_ID}": cluster["DBClusterIdentifier"],
                    "{#STORAGE}": storage_bytes,
                    "{#MEMBER_ID}": instance["DBInstanceIdentifier"]
                })
        return data

Now, I can detect my Aurora cluster

[root@salt-master zabbix]# ./aws_discovery.py --service rds --region eu-west-1 --account xxxxxxxx
{"data": [{"{#RDS_ID}": "instance-1", "{#STORAGE}": 1073741824}, {"{#CLUSTER_ID}": "cluster", "{#MEMBER_ID}": "instance-1", "{#STORAGE}": 1073741824}]}

Also, I can get data from the AWS metrics:

./cloudwatch.metric --interval 1200 --metric VolumeWriteIOPs --namespace AWS/RDS --statistic Average --region eu-west-1 --dimension DBClusterIdentifier=cluster --account xxxxxxxx
1193.33333333

All seems to work fine, but these items do not appear in my Zabbix dashboard. I mean, I have these metrics configured as item prototypes, but the items are not generated on my host. These are some of my item prototypes:

(screenshot)

An example of an instance item prototype: (screenshot)

An example of a cluster item prototype (cloned from an instance item prototype which performs well):

(screenshot)

The problem is that the cluster items are not created on my host, and when I go to Monitoring / Latest data, only metrics from instances appear; cluster metrics do not.

Any ideas?

Best regards



diegosainz commented 4 years ago

Hi Sergio, sorry for the late reply, things have been busy at work. I'll spare some time tomorrow on this.
