nats-io / k8s

NATS on Kubernetes with Helm Charts

Prometheus discovery annotations not set on NATS (JetStream) deployment #828

Closed: JohanLindvall closed this issue 7 months ago

JohanLindvall commented 11 months ago

What version were you using?

Helm chart 1.1.5, values.yaml:

global:
  image:
    pullSecretNames:
      - redacted
    registry: redacted

nats:
  config:
    cluster:
      enabled: true
      replicas: 3
    jetstream:
      enabled: true
      fileStore:
        pvc:
          size: 10Gi
    websocket:
      enabled: true

  podTemplate:
    topologySpreadConstraints:
      kubernetes.io/hostname:
        maxSkew: 1
        whenUnsatisfiable: DoNotSchedule

  container:
    env:
      GOMEMLIMIT: 2500MiB
    merge:
      resources:
        requests:
          cpu: "1"
          memory: 3Gi
        limits:
          memory: 3Gi

  natsBox:
    container:
      image:
        repository: nats-box

  reloader:
    image:
      repository: nats-server-config-reloader

  promExporter:
    enabled: true
    image:
      repository: prometheus-nats-exporter

Prometheus doesn't discover the promExporter container, because the pod doesn't have the appropriate labels/annotations. See https://github.com/nats-io/k8s/pull/77/files, where they were added to the old chart (but never copied over to this one).

What environment was the server running in?

Kubernetes, see above

Is this defect reproducible?

Yes

Given the capability you are leveraging, describe your expectation?

I expect the metrics endpoint to be automatically discovered by Prometheus.

Given the expectation, what is the defect you are observing?

The metrics endpoint isn't discovered.

caleblloyd commented 11 months ago

Which operator still uses those annotations?

caleblloyd commented 11 months ago

From kube-prometheus-stack:

The prometheus operator does not support annotation-based discovery of services, using the PodMonitor or ServiceMonitor CRD in its place
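
For anyone who is on the Prometheus operator, a PodMonitor along these lines should pick up the exporter. This is a minimal sketch: the label selector and the prom-metrics port name are assumptions based on the chart's defaults, so check them against your rendered pod spec (if I remember right, the chart can also generate a PodMonitor for you via the promExporter values).

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: nats
spec:
  selector:
    matchLabels:
      # assumed default label applied by the chart to NATS pods
      app.kubernetes.io/name: nats
  podMetricsEndpoints:
    # assumed name of the promExporter container port (7777)
    - port: prom-metrics
      path: /metrics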

JohanLindvall commented 9 months ago

Sorry for the very slow reply. We are not using the Prometheus operator. We are using a plain old Prometheus deployment, configured according to https://github.com/prometheus/prometheus/blob/main/documentation/examples/prometheus-kubernetes.yml#L267
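
For reference, the annotation-based discovery in that example boils down to a pod scrape job roughly like the following (abridged from the linked prometheus-kubernetes.yml; the version in that file is authoritative):

- job_name: kubernetes-pods
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # only scrape pods that opt in via the prometheus.io/scrape annotation
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # honor prometheus.io/path and prometheus.io/port overrides
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__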

@caleblloyd

caleblloyd commented 7 months ago

I recommend adding the labels you need for your Prometheus deployment via:

podTemplate:
  merge:
    metadata:
      labels:
        your-label: here

a-h commented 5 months ago

I ran into this today. I needed to add the following to my values.yaml:

promExporter:
  enabled: true

podTemplate:
  merge:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: "/metrics"
        prometheus.io/port: "7777"

Without the annotations, Prometheus doesn't automatically scrape the NATS pods. It would be better if the NATS Helm chart applied these annotations itself when promExporter is enabled.
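
A rough sketch of what that could look like in the chart's pod template (hypothetical; the real chart assembles the pod spec differently, and .Values.promExporter.port is an assumption, so this only illustrates the idea):

metadata:
  annotations:
    {{- if .Values.promExporter.enabled }}
    # only advertise the metrics endpoint when the exporter sidecar is deployed
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"
    prometheus.io/port: {{ .Values.promExporter.port | default 7777 | quote }}
    {{- end }}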

joni-jones commented 3 months ago

I faced the same issue. At the very least, the documentation should state that in addition to

promExporter:
  enabled: true

the Kubernetes annotations must be set as well.