Closed: ziansong closed this issue 7 months ago.
ADD: I only turn on PROMETHEUS_RDS_EXPORTER_COLLECT_INSTANCE_TYPES: true and turn the other collectors off. If I delete the EC2 privilege from the AWS IAM role, I can get the rds_instance_info metrics.
I have the same problem. Do you have a solution? Thanks!
Thanks for reporting this, and sorry about the issue. It's definitely a bug that we must fix!
I'll propose a fix by the middle of next week.
We just released 0.8.1, which contains the fix.
Can you update to this version and confirm that it works as expected?
Hello, thanks for the follow-up.
I just tested your new version 0.8.1 and I get this error:
│ {"time":"2024-03-25T15:24:36.471717476Z","level":"INFO","msg":"starting the HTTP server component"} │
│ {"time":"2024-03-25T15:24:47.29626138Z","level":"INFO","msg":"get RDS metrics"} │
│ panic: runtime error: invalid memory address or nil pointer dereference │
│ [signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x1c206ff] │
│ │
│ goroutine 111 [running]: │
│ github.com/qonto/prometheus-rds-exporter/internal/app/ec2.(*EC2Fetcher).GetDBInstanceTypeInformation(0xc0006b9f40, {0xc000540a00, 0x9, 0x10}) │
│ /home/runner/work/prometheus-rds-exporter/prometheus-rds-exporter/internal/app/ec2/ec2.go:95 +0x8bf │
│ github.com/qonto/prometheus-rds-exporter/internal/app/exporter.(*rdsCollector).getEC2Metrics(0xc0000f8a08, {0x7fe2caf7d440, 0xc0002fa4e0}, {0xc000540a00, 0x9, 0x10}) │
│ /home/runner/work/prometheus-rds-exporter/prometheus-rds-exporter/internal/app/exporter/exporter.go:436 +0x14f │
│ created by github.com/qonto/prometheus-rds-exporter/internal/app/exporter.(*rdsCollector).fetchMetrics in goroutine 62 │
│ /home/runner/work/prometheus-rds-exporter/prometheus-rds-exporter/internal/app/exporter/exporter.go:376 +0x537
Here is my values.yaml configuration:
#
# Exporter settings
#
# Enable debug mode
debug: true
# Log format (text or json)
# log-format: json
# Path under which to expose metrics
# metrics-path: /metrics
# Address to listen on for web interface
# listen-address: ":9043"
# Path to TLS certificate
# tls-cert-path: ""
# Path to private key for TLS
# tls-key-path: ""
# Enable OpenTelemetry traces
# See https://opentelemetry.io/docs/languages/sdk-configuration/otlp-exporter for configuration parameters
# enable-otel-traces: true
#
# AWS credentials
#
# AWS IAM ARN role to assume to fetch metrics
# aws-assume-role-arn: arn:aws:iam::000000000000:role/prometheus-rds-exporter
# AWS assume role session name
# aws-assume-role-session: prometheus-rds-exporter
#
# Metrics
#
# Collect AWS instances metrics (AWS Cloudwatch API)
collect-instance-metrics: true
# Collect AWS instance tags (AWS RDS API)
collect-instance-tags: true
# Collect AWS instance types information (AWS EC2 API)
collect-instance-types: true
# Collect AWS instances logs size (AWS RDS API)
collect-logs-size: true
# Collect AWS instances maintenances (AWS RDS API)
collect-maintenances: true
# Collect AWS RDS quotas (AWS quotas API)
collect-quotas: true
# Collect AWS RDS usages (AWS Cloudwatch API)
collect-usages: true
I have the same problem. Do you have a solution? I only use collect-instance-types: true.
Sorry about that! I'm reopening this bug and will propose a more complete fix.
The AWS SDK can return a null pointer for RDS instances that do not support EBS optimization. We were not testing this case, and https://github.com/qonto/prometheus-rds-exporter/pull/152 was not enough to handle it.
We adopted a more robust approach in https://github.com/qonto/prometheus-rds-exporter/pull/154.
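For context, here is a minimal sketch of the kind of guard that prevents this class of panic. The field names come from aws-sdk-go-v2's `ec2/types` package; the helper itself is hypothetical and is not the exporter's actual code, though the panic at ec2.go:95 above is consistent with an unguarded dereference of one of these nested pointers.

```go
package main

import (
	"fmt"

	ec2types "github.com/aws/aws-sdk-go-v2/service/ec2/types"
)

// ebsBaselineIops safely extracts the baseline IOPS for an instance type.
// Instance types without EBS optimization support come back from
// DescribeInstanceTypes with a nil EbsOptimizedInfo, so every pointer on
// the path must be checked before dereferencing it.
func ebsBaselineIops(info ec2types.InstanceTypeInfo) int32 {
	if info.EbsInfo == nil ||
		info.EbsInfo.EbsOptimizedInfo == nil ||
		info.EbsInfo.EbsOptimizedInfo.BaselineIops == nil {
		return 0 // value not reported for this instance type
	}
	return *info.EbsInfo.EbsOptimizedInfo.BaselineIops
}

func main() {
	var info ec2types.InstanceTypeInfo // zero value: all nested pointers are nil
	fmt.Println(ebsBaselineIops(info)) // prints 0 instead of panicking
}
```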
We have just released v0.8.2, which should finally solve this problem.
@Antoine-Sevec @ziansong Can you update to this version and confirm that it works as expected?
@vmercierfr
It's fixed for me, thanks for the quick turnaround!
│ {"time":"2024-03-26T12:03:35.208829158Z","level":"INFO","msg":"starting the HTTP server component"} │
│ {"time":"2024-03-26T12:03:47.295649385Z","level":"INFO","msg":"get RDS metrics"} │
│ {"time":"2024-03-26T12:04:17.294120478Z","level":"INFO","msg":"get RDS metrics"} │
│ {"time":"2024-03-26T12:04:47.29391695Z","level":"INFO","msg":"get RDS metrics"} │
│ {"time":"2024-03-26T12:05:17.293736005Z","level":"INFO","msg":"get RDS metrics"} │
│ {"time":"2024-03-26T12:05:47.294273412Z","level":"INFO","msg":"get RDS metrics"} │
│ {"time":"2024-03-26T12:06:17.294381841Z","level":"INFO","msg":"get RDS metrics"} │
│ {"time":"2024-03-26T12:06:47.294403924Z","level":"INFO","msg":"get RDS metrics"} │
│ {"time":"2024-03-26T12:07:17.294397064Z","level":"INFO","msg":"get RDS metrics"} │
│ {"time":"2024-03-26T12:07:47.294003217Z","level":"INFO","msg":"get RDS metrics"}
Great news!
We will adopt this approach for other parts of the exporter to make it more resilient.
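As an illustration of that defensive style (a sketch, not necessarily what PR #154 actually does): aws-sdk-go-v2 ships nil-safe dereference helpers in its `aws` package that return the zero value instead of panicking, which removes the need for repeated hand-written nil checks.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
)

func main() {
	var baselineIops *int32 // nil, as the SDK returns for unsupported instance types

	// aws.ToInt32 dereferences the pointer when it is non-nil
	// and returns 0 otherwise.
	fmt.Println(aws.ToInt32(baselineIops))    // 0, no panic
	fmt.Println(aws.ToInt32(aws.Int32(3000))) // 3000
}
```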
I am running app version 0.8.0 with an AWS role that has admin access. When I run this app locally against Account A (which has only a MySQL DB instance), it works. However, when I run it locally against Account B (which has multiple MySQL and PostgreSQL instances), I get the error below:
{"time":"2024-03-18T03:03:37.915452047Z","level":"INFO","msg":"starting the HTTP server component"} {"time":"2024-03-18T03:03:52.446299831Z","level":"INFO","msg":"get RDS metrics"} panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x1c11be2]
goroutine 100 [running]: github.com/qonto/prometheus-rds-exporter/internal/app/ec2.(EC2Fetcher).GetDBInstanceTypeInformation(0xc00046df40, {0xc000434400, 0xc, 0x10}) /home/runner/work/prometheus-rds-exporter/prometheus-rds-exporter/internal/app/ec2/ec2.go:94 +0x842 github.com/qonto/prometheus-rds-exporter/internal/app/exporter.(rdsCollector).getEC2Metrics(0xc00015c508, {0x7f700a6438c0, 0xc000292680}, {0xc000434400, 0xc, 0x10}) /home/runner/work/prometheus-rds-exporter/prometheus-rds-exporter/internal/app/exporter/exporter.go:436 +0x14f created by github.com/qonto/prometheus-rds-exporter/internal/app/exporter.(*rdsCollector).fetchMetrics in goroutine 97 /home/runner/work/prometheus-rds-exporter/prometheus-rds-exporter/internal/app/exporter/exporter.go:376 +0x537