nerdswords / yet-another-cloudwatch-exporter

Prometheus exporter for AWS CloudWatch - Discovers services through AWS tags, gets CloudWatch metrics data and provides them as Prometheus metrics with AWS tags as labels

Getting duplicate RDS metrics #373

Open v1ctorrhs opened 3 years ago

v1ctorrhs commented 3 years ago

Hi guys,

I'm running v0.26.3-alpha inside an EKS cluster.

Authentication is handled using a service account that has an IAM role attached to it.

I'm running the following yace-exporter configuration for a single RDS PostgreSQL instance, and I keep getting the same metric multiple times with different labels.

config: |
  discovery:
    jobs:
    - type: rds
      regions:
      - us-east-1
      searchTags:
        - key: environment
          value: test
      metrics:
        - name: NetworkReceiveThroughput
          statistics:
          - Average
          period: 300
          length: 3600

aws_rds_network_receive_throughput_average{account_id="XXXXX",dimension_DBInstanceIdentifier="",dimension_DatabaseClass="",dimension_EngineName="",name="global",region="us-east-1"} 2217.2519934646384
aws_rds_network_receive_throughput_average{account_id="XXXXX",dimension_DBInstanceIdentifier="",dimension_DatabaseClass="",dimension_EngineName="postgres",name="global",region="us-east-1"} 2217.2519934646384
aws_rds_network_receive_throughput_average{account_id="XXXXX",dimension_DBInstanceIdentifier="",dimension_DatabaseClass="db.t3.medium",dimension_EngineName="",name="global",region="us-east-1"} 2217.2519934646384
aws_rds_network_receive_throughput_average{account_id="XXXXX",dimension_DBInstanceIdentifier="monitoring-grafana-postgres",dimension_DatabaseClass="",dimension_EngineName="",name="arn:aws:rds:us-east-1:XXXXX:db:monitoring-grafana-postgres",region="us-east-1"} 2217.2519934646384

Adding another metric results in the same behavior:

aws_rds_maximum_used_transaction_ids_average{account_id="XXXXX",dimension_DBInstanceIdentifier="",dimension_DatabaseClass="",dimension_EngineName="postgres",name="global",region="us-east-1"} 2545.4
aws_rds_maximum_used_transaction_ids_average{account_id="XXXXX",dimension_DBInstanceIdentifier="",dimension_DatabaseClass="db.t3.medium",dimension_EngineName="",name="global",region="us-east-1"} 2545.4
aws_rds_maximum_used_transaction_ids_average{account_id="XXXXX",dimension_DBInstanceIdentifier="monitoring-grafana-postgres",dimension_DatabaseClass="",dimension_EngineName="",name="arn:aws:rds:us-east-1:XXXXX:db:monitoring-grafana-postgres",region="us-east-1"} 2545.4

sepich commented 3 years ago

It's because all possible dimensions are discovered after https://github.com/ivx/yet-another-cloudwatch-exporter/pull/315. You can revert to v0.25.0-alpha and use:

    - type: rds
      regions:
      - us-east-1
      searchTags:
        - key: environment
          value: test
      awsDimensions: [DBInstanceIdentifier]
      metrics:
        - name: NetworkReceiveThroughput
          statistics:
          - Average
          period: 300
          length: 3600
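
If downgrading is not an option, the extra series can also be filtered out on the query side. This is only a sketch based on the label names in the output above, not an exporter feature: the per-instance series is the one where dimension_DBInstanceIdentifier is non-empty, so a matcher on that label drops the aggregate duplicates:

    aws_rds_network_receive_throughput_average{dimension_DBInstanceIdentifier!=""}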

dmitry-tiger commented 3 years ago

Can we add the old behaviour back as a feature? Searching or graphing across different sets of labels for the same metric becomes much trickier. With the current behaviour you also have to aggregate the metric more carefully to avoid duplicates.
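
For example (a sketch assuming the label set shown in the output above), a naive sum over the metric would count the same value once per dimension combination, so the selector has to be restricted to the per-instance series before aggregating:

    sum by (dimension_DBInstanceIdentifier) (
      aws_rds_network_receive_throughput_average{dimension_DBInstanceIdentifier!=""}
    )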

dfredell commented 2 years ago

Might be the same issue as #404