Stackdriver / stackdriver-prometheus-sidecar

A sidecar for the Prometheus server that can send metrics to Stackdriver.
https://cloud.google.com/monitoring/kubernetes-engine/prometheus
Apache License 2.0

Autodetect GKE resource attributes #250

Open ocervell opened 4 years ago

ocervell commented 4 years ago

I'd like to deploy the SD prometheus sidecar without having to patch it (i.e. just one YAML, no substitutions).

Ideally it would look like this:

      containers:
      - name: prometheus
        image: quay.io/prometheus/prometheus:v2.6.0
        imagePullPolicy: Always
        args:
        - "--config.file=/etc/prometheus/config/prometheus.yaml"
        - "--storage.tsdb.path=/data"
        - "--storage.tsdb.min-block-duration=15m"
        - "--storage.tsdb.max-block-duration=4h"
        - "--storage.tsdb.retention=48h"
        ports:
        - name: prometheus
          containerPort: 9090
        volumeMounts:
        - name: config-volume
          mountPath: /etc/prometheus/config
        - name: data-volume
          mountPath: /data

      - name: stackdriver-prometheus-sidecar
        image: gcr.io/stackdriver-prometheus/stackdriver-prometheus-sidecar:0.7.5
        imagePullPolicy: Always
        args:
        - "--prometheus.wal-directory=/data/wal"
        # These should be auto-detected from the environment it runs in (e.g. via the GKE metadata server)
        - "--stackdriver.project-id=auto"
        - "--stackdriver.kubernetes.location=auto"
        - "--stackdriver.kubernetes.cluster-name=auto"

        ports:
        - name: sidecar
          containerPort: 9091
        volumeMounts:
        - name: data-volume
          mountPath: /data

Any idea if we could have autodetected defaults for the arguments --stackdriver.project-id, --stackdriver.kubernetes.location, and --stackdriver.kubernetes.cluster-name? They would default to the values of the GKE cluster they are running on.

n-oden commented 1 year ago

FWIW, we handle this by querying the GKE metadata service on startup, e.g.

PROJECT_ID=$(curl -sf "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")
CLUSTER_NAME=$(curl -sf "http://metadata.google.internal/computeMetadata/v1/instance/attributes/cluster-name" -H "Metadata-Flavor: Google")

(You have to override the container command and invoke the sidecar yourself, obviously. Also beware that in some cases the metadata service's availability may lag pod startup by a few seconds, so you'll need to check your outputs.)
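Putting that together, here is a minimal wrapper-entrypoint sketch with a retry loop to cover the startup lag. The binary path `/bin/stackdriver-prometheus-sidecar` is an assumption about the image layout, and it assumes GKE nodes expose the cluster's location via the `cluster-location` instance attribute (alongside `cluster-name`):

```shell
#!/bin/sh
# Hypothetical wrapper entrypoint for the sidecar container.
# fetch_metadata retries because the metadata service can lag pod startup.
fetch_metadata() {
  _path=$1
  for _i in 1 2 3 4 5; do
    if _val=$(curl -sf "http://metadata.google.internal/computeMetadata/v1/${_path}" \
                   -H "Metadata-Flavor: Google"); then
      echo "$_val"
      return 0
    fi
    sleep 2
  done
  echo "failed to fetch metadata path ${_path}" >&2
  return 1
}

# Only exec the sidecar when its binary is actually present (i.e. inside the
# sidecar image); the path below is an assumption, adjust for your image.
if [ -x /bin/stackdriver-prometheus-sidecar ]; then
  PROJECT_ID=$(fetch_metadata "project/project-id") || exit 1
  CLUSTER_NAME=$(fetch_metadata "instance/attributes/cluster-name") || exit 1
  CLUSTER_LOCATION=$(fetch_metadata "instance/attributes/cluster-location") || exit 1
  exec /bin/stackdriver-prometheus-sidecar \
    --prometheus.wal-directory=/data/wal \
    --stackdriver.project-id="$PROJECT_ID" \
    --stackdriver.kubernetes.location="$CLUSTER_LOCATION" \
    --stackdriver.kubernetes.cluster-name="$CLUSTER_NAME"
fi
```

You'd ship this script via a ConfigMap (or bake it into a derived image) and set it as the container `command`, which keeps the manifest itself substitution-free.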