Hi @reefland, thanks for reporting this. I will check on my setup and come back to you.
Here's a first fix: https://github.com/dotdc/grafana-dashboards-kubernetes/commit/6e3a73fdeae4ee99b248b3f78044215ab249ead7
This should have been done when the labels got dropped from kube-prometheus-stack.
Let me know if this solves the issue.
Global Network Utilization looks reasonable now, but by Namespace is still bonkers:
After re-working my install, this looks different: I now get the per-namespace metrics.
But I'm still puzzled why the per-namespace numbers are so much higher than the global numbers. My assumption is that this is the difference between local and host networking? Here is the zoomed-in version of the by Namespace panel:
The top 3 are namespaces that either use host networking or talk to systems outside the cluster: Ceph traffic is just between cluster members, but it uses host networking so that I can allow non-cluster computers to use the Rook-Ceph storage if needed. The CSI uses a TrueNAS server for storage, and the Monitoring namespace is the only namespace still using that CSI (everything else is using Ceph).
I suspect there is some double counting going on: the CSI namespace shows roughly twice the PVC usage, so is it counting both the container's virtual adapter and the physical adapter? If I look at the underlying query `rate(container_network_receive_bytes_total[2m0s])` with just the `by namespace` part removed, the results include multiple network adapters (virtual and physical), and I think this is where the double counting comes in.
To make the query simpler, I limited it to one namespace and one node of that namespace: `rate(container_network_receive_bytes_total{namespace="democratic-csi",instance="k3s01"}[2m0s])`. You can see the multiple interfaces being counted:
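To see each adapter's contribution explicitly, something along these lines should break the same rate out per interface (a sketch: `interface` is the label cAdvisor puts on its `container_network_*` series, and the namespace/node values are just the ones from my setup):

```promql
# Sketch: show each network interface's receive rate separately for one
# namespace on one node, instead of summing them all together.
sum by (interface) (
  rate(container_network_receive_bytes_total{namespace="democratic-csi", instance="k3s01"}[2m0s])
)
```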
If I make the global based on `container_network_receive_bytes_total` and `container_network_transmit_bytes_total`, at least it matches:
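For reference, summing the same container metric globally and by namespace gives totals that add up to each other, roughly like this (a sketch, not the dashboard's exact expressions):

```promql
# Global receive total, summed over every series
sum(rate(container_network_receive_bytes_total[$__rate_interval]))

# Per-namespace breakdown of the same metric; these add up to the global total above
sum by (namespace) (rate(container_network_receive_bytes_total[$__rate_interval]))
```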
So maybe that is the way to do global? :man_shrugging:
Agree, something is probably counted more than once, and this is probably because of the current label selectors. Sometimes we need to narrow the scope, but in this case I can't filter on the interface names because they are subject to change depending on the setup…
I ran the benchmarks a long time ago and the numbers seemed correct, so either I missed something or something has changed since then.
I don't have time to take the measurements right now, but I will have a look when I can.
Hi @reefland, sorry for the delay, will look at it this week!
Blocked @jud336 for spamming issues and commits.
Because the `device` names can be anything, it's not a good idea to filter on them, but I tried to exclude common virtual patterns to make the panels slightly clearer (we could build upon that).
Check out the latest version that sums by `device`, and let me know what you think.
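The idea is roughly the following (an illustrative sketch, not the exact query shipped in the dashboard: the exclusion patterns here are examples, and note that cAdvisor's `container_network_*` series label the NIC as `interface`, while node_exporter's `node_network_*` series call it `device`):

```promql
# Sketch: drop common virtual/overlay interface name patterns so physical NICs
# are not mixed with veth/CNI pairs, and keep the per-interface breakdown.
sum by (interface) (
  rate(container_network_receive_bytes_total{interface!~"veth.*|lxc.*|cali.*|cilium.*|flannel.*|cni.*"}[$__rate_interval])
)
```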
Closing
Describe the bug
On my simple test cluster, I have no issues with the Global Network Utilization panel, but on my production cluster, which uses both cluster and host networking, the numbers are crazy:
No way I have sustained rates like that. I think this is related to the metric:

`sum(rate(container_network_receive_bytes_total[$__rate_interval]))`

If I look at `rate(container_network_receive_bytes_total[30s])`, I get:

I'm not sure what to actually look at here. I tried `sum(rate(node_network_receive_bytes_total[$__rate_interval]))` and I get a reasonable traffic graph:

This is 5 nodes, pretty much at idle. Showing I/O by instance:
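The per-instance view is roughly the same query grouped by node (a sketch of what I graphed, not necessarily the dashboard's expression):

```promql
# Sketch: node-level receive throughput, one series per node
sum by (instance) (rate(node_network_receive_bytes_total[$__rate_interval]))
```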
Here is BTOP+ on `k3s01` after running for a bit; it lines up very well with the data above:

How to reproduce?
No response
Expected behavior
No response
Additional context
No response