dotdc / grafana-dashboards-kubernetes

A set of modern Grafana dashboards for Kubernetes.
Apache License 2.0
2.66k stars · 368 forks

[bug] Global Network Utilization #35

Closed reefland closed 1 year ago

reefland commented 1 year ago

Describe the bug

On my simple test cluster, I have no issues with the Global Network Utilization panel, but on my production cluster, which uses both cluster and host networking, the numbers are crazy:

image

No way I have sustained rates like that. I think this is related to the metric:

sum(rate(container_network_receive_bytes_total[$__rate_interval]))

If I look at rate(container_network_receive_bytes_total[30s]), I get:

{id="/", interface="cni0", job="kubernetes-cadvisor"} | 2041725438.15131
{id="/", interface="enp1s0", job="kubernetes-cadvisor"} | 4821605692.45648
{id="/", interface="flannel.1", job="kubernetes-cadvisor"} | 337125370.2678834

I'm not sure what to actually look at here. I tried sum(rate(node_network_receive_bytes_total[$__rate_interval])) and I get a reasonable traffic graph:

image

This is 5 nodes, pretty much at idle. Showing I/O by instance:

image
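The per-instance view above can be reproduced with a query along these lines (a sketch; the `device!="lo"` filter is an assumption, not necessarily what was used for the graph):

```promql
sum by (instance) (
  rate(node_network_receive_bytes_total{device!="lo"}[$__rate_interval])
)
```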

Here is BTOP+ on k3s01 after running for a bit; it lines up very well with the data above: image

How to reproduce?

No response

Expected behavior

No response

Additional context

No response

dotdc commented 1 year ago

Hi @reefland, thanks for reporting this. I will check on my setup and get back to you.

dotdc commented 1 year ago

Here's a first fix: https://github.com/dotdc/grafana-dashboards-kubernetes/commit/6e3a73fdeae4ee99b248b3f78044215ab249ead7

This should have been done when the labels got dropped from kube-prometheus-stack.

Let me know if this solves the issue.

reefland commented 1 year ago

Global Network Utilization looks reasonable now, but by Namespace is still bonkers: image

reefland commented 1 year ago

After re-working my install, this looks different. I now get the per-namespace metrics.

image

But I'm still puzzled why the per-namespace numbers are way higher than the global numbers. My assumption is that this is the difference between local and host networking? Here is the zoomed-in version of the by Namespace panel:

image

The top 3 are namespaces that either use host networking or talk to something external to the cluster: Ceph traffic is only between cluster members, but it uses host networking so that non-cluster computers can use the Rook-Ceph storage if needed. The CSI uses storage on a TrueNAS server, and the Monitoring namespace is the only namespace still using that CSI (everything else is using Ceph).

I suspect there is some double counting going on: the CSI shows twice the PVC usage, perhaps from the container's virtual adapter plus the physical adapter? If I look at the underlying query rate(container_network_receive_bytes_total[2m0s]) with just the by-namespace grouping removed, the results include multiple network adapters (virtual and physical), and I think this is where the double counting comes in.
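To see the overlap directly, the same metric can be broken down per interface (a sketch; the `by (interface)` grouping is only for inspection, not for a dashboard panel):

```promql
sum by (interface) (
  rate(container_network_receive_bytes_total[$__rate_interval])
)
```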

To make the query simpler, I limited it to one namespace and one node of that namespace: rate(container_network_receive_bytes_total{namespace="democratic-csi",instance="k3s01"}[2m0s]). You can see the multiple interfaces being counted:

image

If I base the global panel on container_network_receive_bytes_total and container_network_transmit_bytes_total, at least it matches:

image

So maybe that is the way to do global? :man_shrugging:

dotdc commented 1 year ago

Agreed, something is probably counted more than once, most likely because of the current label selectors. Sometimes we need to narrow the scope, but in this case I can't filter on the interface names because they are subject to change depending on the setup…

I ran the benchmarks a long time ago and the numbers seemed correct, so either I missed something, or something has changed since then.

I don't have time to take measurements right now, but I will have a look when I can.

dotdc commented 1 year ago

Hi @reefland, sorry for the delay, will look at it this week!

dotdc commented 1 year ago

Blocked @jud336 for spamming issues and commits.

dotdc commented 1 year ago

Because the device names can be anything, it's not a good idea to filter on them, but I tried to exclude common virtual patterns to make the panels slightly clearer (we could build upon that).

Check out the latest version, which sums by device, and let me know what you think.

Commit: https://github.com/dotdc/grafana-dashboards-kubernetes/commit/3e717213bc6784449629d083ede134bcb168b8a0
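For reference, the general shape of that kind of exclusion looks like this (a sketch; the `veth`/`cni`/`flannel`/`lo` patterns are common defaults for CNI setups, and the actual regex in the commit may differ):

```promql
sum by (namespace) (
  rate(container_network_receive_bytes_total{interface!~"veth.*|cni.*|flannel.*|lo"}[$__rate_interval])
)
```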

dotdc commented 1 year ago

Closing