Hi @jseiser, I've started working on updating the metrics set in my PR: https://github.com/prometheus-community/ecs_exporter/pull/46. You're welcome to participate there.
Has anyone seen a situation where these counters are non-zero? It should notionally be possible when CPU limits are set on the task or the container, but even with such limits in place I do not see anything moving. My expectation is that `periods` would, well, periodically increase whenever those conditions are met, even if no throttling is taking place.
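For reference, these values appear to be the kernel's CFS bandwidth counters, which Docker surfaces as `throttling_data`. A quick sketch for looking at the raw numbers from inside a container (the exact path depends on whether the host uses cgroup v1 or v2):

```sh
# Sketch: read the raw CFS bandwidth counters that Docker reports as throttling_data.
# cgroup v2 path first, then the common v1 fallback.
cat /sys/fs/cgroup/cpu.stat 2>/dev/null || cat /sys/fs/cgroup/cpu/cpu.stat
# nr_periods     - elapsed CFS enforcement intervals (100 ms by default) while a quota is set
# nr_throttled   - intervals in which the group exhausted its quota
# throttled_usec - total time spent throttled (throttled_time, in ns, on cgroup v1)
```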
I can try to experiment with creating a `while (true) {}` container to see whether I can make the stats move when throttling is actually happening.
I ran tasks with both ecs_exporter and an alpine sidecar running `["/bin/sh", "-c", "yes > /dev/null"]` (i.e. chewing up a lot of CPU) on both Fargate and EC2. Both had less than 1 vCPU allocated, at the task level on Fargate and at the container level on EC2. The CPU-seconds metrics for both were increasing noticeably slower than real time passed, indicating that throttling was occurring. The built-in CloudWatch graphs in the AWS console also showed these services using all of their available CPU.
But the container throttling stats remained at 0. I'm not sure why, but regardless, I think this is a dead end without action from AWS.
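For what it's worth, a rough way to sanity-check whether these counters can move at all, outside ECS, is plain Docker (a sketch; assumes a local Docker daemon, and the `throttle-test` name is just an example):

```sh
# Run a busy loop with a fractional CPU limit so CFS throttling has to kick in.
docker run -d --name throttle-test --cpus=0.25 alpine /bin/sh -c 'yes > /dev/null'
sleep 30
# Read the raw counters that feed throttling_data (cgroup v2 path, then v1 fallback).
docker exec throttle-test cat /sys/fs/cgroup/cpu.stat 2>/dev/null \
  || docker exec throttle-test cat /sys/fs/cgroup/cpu/cpu.stat
docker rm -f throttle-test
```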
The relevant information is exposed by the task metadata endpoint:

```sh
curl -o stats.json "${ECS_CONTAINER_METADATA_URI_V4}/task/stats"
```
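To look at the throttling counters specifically, something like the following works from inside the task (a sketch; assumes `jq` is available and that the endpoint returns a map of Docker container IDs to Docker stats objects, which may briefly contain null entries):

```sh
# Pull the per-container CFS throttling counters out of the task stats response.
curl -s "${ECS_CONTAINER_METADATA_URI_V4}/task/stats" \
  | jq '.[] | {name: .name, throttling: .cpu_stats.throttling_data}'
```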