Closed — zied-chekir closed this issue 3 months ago.
Pinging code owners:
receiver/kubeletstats: @dmitryax @TylerHelmuth
See Adding Labels via Comments if you do not have permissions to add labels yourself.
`kubectl top node` displays the working-set memory in the MEMORY column (the `top` command gets its metrics from metrics-server, hence the link to the metrics-server implementation). The collector emits the metric `k8s.node.memory.working_set` for working-set usage. If you compare the `top` output with this collector metric, they should be the same. Working-set memory usage can be less than total usage, since the working set does not include the page cache.
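That distinction also explains the roughly 11 GB vs. 20 GB gap reported below: summing `k8s.node.memory.usage` counts page cache, while `kubectl top nodes` shows the working set. A minimal Python sketch of the relationship, using made-up byte values shaped like the kubelet's `/stats/summary` response (`usageBytes` and `workingSetBytes` are real summary-API field names; the node names and numbers are hypothetical):

```python
# Hypothetical per-node memory stats, shaped like the kubelet's
# /stats/summary endpoint, e.g.
#   kubectl get --raw "/api/v1/nodes/<node>/proxy/stats/summary"
# The byte values below are invented for illustration only.
nodes = [
    {"name": "node-1", "memory": {"usageBytes": 5_000_000_000, "workingSetBytes": 2_750_000_000}},
    {"name": "node-2", "memory": {"usageBytes": 5_000_000_000, "workingSetBytes": 2_750_000_000}},
    {"name": "node-3", "memory": {"usageBytes": 5_000_000_000, "workingSetBytes": 2_750_000_000}},
    {"name": "node-4", "memory": {"usageBytes": 5_000_000_000, "workingSetBytes": 2_750_000_000}},
]

GB = 10**9

# Roughly what summing k8s.node.memory.usage across nodes would give.
total_usage = sum(n["memory"]["usageBytes"] for n in nodes)

# Roughly what summing k8s.node.memory.working_set (and `kubectl top nodes`) gives.
total_working_set = sum(n["memory"]["workingSetBytes"] for n in nodes)

# The gap is (mostly reclaimable) page cache, not a doubled measurement.
page_cache = total_usage - total_working_set

print(f"usage:       {total_usage / GB:.1f} GB")
print(f"working set: {total_working_set / GB:.1f} GB (what `kubectl top nodes` shows)")
print(f"difference:  {page_cache / GB:.1f} GB of page cache")
```

With these illustrative numbers the sketch prints 20.0 GB of usage against 11.0 GB of working set, mirroring the discrepancy described in this issue.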
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.
This issue has been closed as inactive because it has been stale for 120 days with no activity.
Component(s)
receiver/kubeletstats
What happened?
Description
I'm currently collecting the memory usage metric of every node with a DaemonSet collector using the kubeletstats receiver, but I've noticed an inconsistency in the memory usage values. When comparing the values scraped by the receiver to those obtained with `kubectl top nodes`, there is a notable difference: the command reports around 11 GB of memory usage, while the sum of the receiver's values is approximately 20 GB, nearly double the expected value. I am not sure if this is a bug in the receiver or something else.

Screenshots (not reproduced here): node 1, node 2, node 3, node 4, and all nodes via `kubectl top nodes`.
Expected Result
Cluster memory usage should be around 11 GB (the sum of all nodes' memory).
Actual Result
Cluster memory usage is around 20GB.
Collector version
0.98.0
Environment information
Environment
kubernetes: v1.29.2
OpenTelemetry Collector configuration
Log output
No response
Additional context
No response