robscott / kube-capacity

A simple CLI that provides an overview of the resource requests, limits, and utilization in a Kubernetes cluster
Apache License 2.0

CPU limits and requests totals not always matching sum of containers requests/limits #53

Open chilanti opened 3 years ago

chilanti commented 3 years ago

First of all - great tool: simple to use and powerful. We noticed that in some cases the totals the tool rolls up at the pod level do not match the sum of the CPU requests/limits of the pod's containers. This seems to happen when the pod has init containers that specify CPU requests/limits. For example:

        {
          "name": "zen-core-api-6bb6b6d64c-p624c",
          "namespace": "cp4ba",
          "cpu": {
            "requests": "100m",
            "requestsPercent": "0%",
            "limits": "2",
            "limitsPercent": "12%"
          },
          "memory": {
            "requests": "256Mi",
            "requestsPercent": "0%",
            "limits": "2Gi",
            "limitsPercent": "3%"
          },
          "containers": [
            {
              "name": "zen-core-api-container",
              "cpu": {
                "requests": "100m",
                "requestsPercent": "0%",
                "limits": "400m",
                "limitsPercent": "2%"
              },
              "memory": {
                "requests": "256Mi",
                "requestsPercent": "0%",
                "limits": "1Gi",
                "limitsPercent": "1%"
              }
            }
          ]
        }

In this case, there's only one active container in the pod and its cpu.limits is 400m, but the total reported at the pod level says cpu.limits is 2. We looked at the pod definition on the actual cluster and saw that it has an init container whose cpu.limits is in fact 2 (screenshot of the pod definition attached). At this point we are left wondering whether this is expected behavior, and if it is, whether the tool picks up the greater of the two values or just picks the first one for the pod. Thanks.

robscott commented 3 years ago

Hey @chilanti, thanks for asking about this! I'm guessing this is a bug. Do you happen to know which version of kube-capacity you're using?

chilanti commented 3 years ago

@robscott - thanks for getting back - I guess it's 0.5.0:

    ➜  kube-capacity kube-capacity version
    kube-capacity version 0.5.0

robscott commented 3 years ago

There are some more recent bug fixes in the latest release (0.6.1); I'm hoping they'll fix your issue, but let me know if not.

chilanti commented 3 years ago

Hi Rob - I just upgraded to 0.6.1, but that particular issue is still there. I still see that the pod limits are set to 2, but the only active container has limits=400m.

kmlefebv commented 2 years ago

Hi @robscott I was wondering if you have an update on this issue.

robscott commented 2 years ago

Thanks for the detailed bug report @chilanti and the reminder @kmlefebv! I misunderstood this the first time, but after digging a bit further, it looks like the pod-level requests and limits come from a k8s util function: https://github.com/robscott/kube-capacity/blob/master/pkg/capacity/resources.go#L130. For each resource, that helper appears to take the max of the sum across the app containers and the largest value among the init containers: https://github.com/kubernetes/kubernetes/blob/33de444861d3de783a6618be9d10fa84da1c11b4/pkg/api/v1/resource/helpers.go#L56, as you'd suggested above.
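
To make that concrete, here is a minimal sketch of the roll-up rule, assuming the standard k8s.io/api and k8s.io/apimachinery types; it is an illustration, not kube-capacity's or Kubernetes' actual code. With the numbers from the report above (an app container limit of 400m and an init container limit of 2) it yields 2, matching the pod-level total in the JSON.

    // A minimal sketch (not kube-capacity's or Kubernetes' actual code) of the
    // roll-up rule described above: for each resource, the pod-level total is
    // max(sum over app containers, max over any single init container).
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // effectiveLimits applies the rule to limits; requests work the same way.
    func effectiveLimits(pod *corev1.Pod) corev1.ResourceList {
        total := corev1.ResourceList{}

        // Sum limits across the regular (app) containers.
        for _, c := range pod.Spec.Containers {
            for name, qty := range c.Resources.Limits {
                sum := total[name]
                sum.Add(qty)
                total[name] = sum
            }
        }

        // An init container runs on its own before the app containers start, so
        // the pod must be able to satisfy its limit by itself: take the max of
        // the running total and each init container's limit.
        for _, ic := range pod.Spec.InitContainers {
            for name, qty := range ic.Resources.Limits {
                if existing, ok := total[name]; !ok || qty.Cmp(existing) > 0 {
                    total[name] = qty
                }
            }
        }
        return total
    }

    func main() {
        // Values from the report above: app container limit 400m, init limit 2.
        pod := &corev1.Pod{
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Resources: corev1.ResourceRequirements{
                        Limits: corev1.ResourceList{
                            corev1.ResourceCPU: resource.MustParse("400m"),
                        },
                    },
                }},
                InitContainers: []corev1.Container{{
                    Resources: corev1.ResourceRequirements{
                        Limits: corev1.ResourceList{
                            corev1.ResourceCPU: resource.MustParse("2"),
                        },
                    },
                }},
            },
        }
        cpu := effectiveLimits(pod)[corev1.ResourceCPU]
        fmt.Println(cpu.String()) // prints "2", matching the pod-level total
    }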

As far as actually printing out results goes, it looks like I'm completely ignoring init containers. That could/should probably be fixed, but I'm not sure how to differentiate init containers properly without further complicating the output.
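
For the sake of that discussion, below is one purely illustrative option: keep the existing containers list unchanged and add an optional initContainers list, so current consumers of the JSON output are unaffected. The struct and field names are made up and only mirror the shape of the JSON shown earlier, not kube-capacity's actual types.

    // Purely illustrative types (not kube-capacity's actual structs) that mirror
    // the JSON shape shown earlier. The only change is an optional
    // "initContainers" list, so existing consumers of "containers" keep working.
    package capacity

    type resourceOutput struct {
        Requests        string `json:"requests"`
        RequestsPercent string `json:"requestsPercent"`
        Limits          string `json:"limits"`
        LimitsPercent   string `json:"limitsPercent"`
    }

    type containerRow struct {
        Name   string         `json:"name"`
        CPU    resourceOutput `json:"cpu"`
        Memory resourceOutput `json:"memory"`
    }

    type podRow struct {
        Name           string         `json:"name"`
        Namespace      string         `json:"namespace"`
        CPU            resourceOutput `json:"cpu"`
        Memory         resourceOutput `json:"memory"`
        Containers     []containerRow `json:"containers"`
        InitContainers []containerRow `json:"initContainers,omitempty"`
    }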

chilanti commented 2 years ago

@robscott - thanks for getting back to us. I've opened a request for enhancement against kubectl. I think that if we had a function that returned just the totals of the running containers, the tool could easily output two sets of summaries: 1) the "max" total that takes init containers into account, and 2) just the total of the running containers. For now, I guess we understand why the numbers are different.
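
As a rough sketch of the kind of helper being asked for, again assuming the k8s.io/api types used above (the function name is hypothetical): a total that covers only the pod's app containers, which the tool could print alongside the existing "max" roll-up.

    // A rough sketch of the helper described above (the name is made up).
    package capacity

    import corev1 "k8s.io/api/core/v1"

    // runningOnlyLimits sums limits across the pod's app containers only,
    // ignoring init containers entirely; requests would work the same way.
    func runningOnlyLimits(pod *corev1.Pod) corev1.ResourceList {
        total := corev1.ResourceList{}
        for _, c := range pod.Spec.Containers {
            for name, qty := range c.Resources.Limits {
                sum := total[name]
                sum.Add(qty)
                total[name] = sum
            }
        }
        return total
    }

For the example pod earlier in the thread this would return 400m of CPU, while the current roll-up reports 2.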