hashicorp / nomad

Nomad is an easy-to-use, flexible, and performant workload orchestrator that can deploy a mix of microservice, batch, containerized, and non-containerized applications. Nomad is easy to operate and scale and has native Consul and Vault integrations.
https://www.nomadproject.io/

Backport of [gh-24339] Move from streaming stats to polling for docker into release/1.9.x #24527

Closed hc-github-team-nomad-core closed 4 days ago

hc-github-team-nomad-core commented 4 days ago

Backport

This PR is auto-generated from #24525 to be assessed for backporting due to the inclusion of the label backport/1.9.x.

The text below is copied from the body of the original PR.


Description

This PR addresses #24339.

Currently, to get the stats from a container, we call ContainerStats, which returns a stream of data that can be reused, but this only works if the metrics sampling interval is exactly 1s. With other values, decoding errors start to show up and the metric values become inconsistent:

2024-11-21T13:14:17.645+0100 [DEBUG] client.driver_mgr.docker: error decoding stats data from container: container_id=2317edae0f7a0e1fb2e259a2e87f2575cf6114a2c2c47c3eaf7f992de0b7a569 driver=docker error="json: cannot unmarshal string into Go value of type container.Stats"
2024-11-21T13:14:17.651+0100 [DEBUG] client.driver_mgr.docker: error decoding stats data from container: container_id=f062a6097af27095c1e8eec76005d33c670edaadb2e5b74540f83704f0260a60 driver=docker error="invalid character 'a' looking for beginning of value"

It might have gone unnoticed because the error messages were only logged at debug level, which is rarely enabled in production environments.
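
For illustration only, here is a minimal sketch of that streaming pattern, assuming the Docker Go SDK (github.com/docker/docker/client); the package, function name, and ticker loop are hypothetical, SDK type names vary between versions, and this is not Nomad's actual driver code:

package stats

import (
    "context"
    "encoding/json"
    "fmt"
    "time"

    "github.com/docker/docker/api/types/container"
    "github.com/docker/docker/client"
)

// streamStats shows the streaming shape: one ContainerStats call with
// stream=true, whose response body is kept open and decoded from on every
// collection tick. The Docker daemon emits one stats object per second on
// this stream, which is the cadence the pattern implicitly depends on.
func streamStats(ctx context.Context, containerID string, interval time.Duration) error {
    cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    if err != nil {
        return err
    }
    defer cli.Close()

    resp, err := cli.ContainerStats(ctx, containerID, true) // stream=true
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    dec := json.NewDecoder(resp.Body)
    ticker := time.NewTicker(interval)
    defer ticker.Stop()

    for {
        select {
        case <-ctx.Done():
            return ctx.Err()
        case <-ticker.C:
            var stats container.Stats // type name as it appears in the error log above
            if err := dec.Decode(&stats); err != nil {
                return fmt.Errorf("error decoding stats data from container: %w", err)
            }
            fmt.Printf("memory usage: %d bytes\n", stats.MemoryStats.Usage)
        }
    }
}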

To avoid the sync problem, this PR switches to polling the stats endpoint. Polling is not as efficient as the stream, but it is more versatile and supports arbitrary sampling intervals.
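
A minimal sketch of what the polling shape looks like, reusing the imports and hypothetical package from the sketch above; this is not the implementation in this PR:

// pollStats shows the polling shape: every tick makes a fresh one-shot
// ContainerStats call (stream=false), decodes the single stats object it
// returns, and closes the body. There is no long-lived stream to stay in
// sync with, so any collection interval works.
func pollStats(ctx context.Context, cli *client.Client, containerID string, interval time.Duration) error {
    ticker := time.NewTicker(interval)
    defer ticker.Stop()

    for {
        select {
        case <-ctx.Done():
            return ctx.Err()
        case <-ticker.C:
            resp, err := cli.ContainerStats(ctx, containerID, false) // stream=false
            if err != nil {
                return err
            }
            var stats container.Stats
            err = json.NewDecoder(resp.Body).Decode(&stats)
            resp.Body.Close()
            if err != nil {
                return err
            }
            fmt.Printf("memory usage: %d bytes\n", stats.MemoryStats.Usage)
        }
    }
}

The trade-off is one extra request to the Docker daemon per container per interval, in exchange for decoupling Nomad's sampling from the stream's fixed 1s cadence.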

Testing & Reproduction steps

To see the bug, take the latest release and start a Nomad agent (it can be in -dev mode) with telemetry enabled and a collection_interval different from 1s:

telemetry {
  collection_interval        = "3s"
  prometheus_metrics         = true
  publish_allocation_metrics = true
}

Run a job with multiple allocations; after a while, the errors will appear and the metrics for the allocations will become unstable.


Overview of commits - a9e7166b6b182e31d438be9c75c438bcfc41c951