projectcalico / canal

Policy based networking for cloud native applications

kubernetes.io/ingress-bandwidth doesn't seem to work reliably #142

Open mattwing opened 11 months ago

mattwing commented 11 months ago

Expected Behavior

Setting the kubernetes.io/ingress-bandwidth annotation to e.g. 35M should limit the pod's ingress bandwidth to 35 Mbit/s

Current Behavior

The bandwidth is sometimes limited to that amount, sometimes not

Possible Solution

Steps to Reproduce (for bugs)

Here's my networking config:

/etc/cni/net.d/10-canal.conflist

{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "log_level": "info",
      "datastore_type": "kubernetes",
      "nodename": "<mynodehost>",
      "mtu": 1450,
      "ipam": {
          "type": "host-local",
          "ranges": [
              [
                  {
                      "subnet": "usePodCidr"
                  }
              ]
          ]
      },
      "policy": {
          "type": "k8s"
      },
      "kubernetes": {
          "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
      }
    },
    {
      "type": "portmap",
      "snat": true,
      "capabilities": {"portMappings": true}
    },
    {
      "type": "bandwidth",
      "capabilities": {"bandwidth": true}
    }
  ]
}
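For what it's worth, the reference bandwidth CNI plugin applies the ingress limit as a tbf (token bucket filter) qdisc on the host-side veth of the pod. As a rough sketch of how one might check whether the limit was actually installed on a node (the cali* interface prefix is an assumption based on Calico's default veth naming; adjust for your install):

```shell
# On the node: list host-side veths and inspect their qdiscs.
# Assumes Calico's default "cali*" veth prefix.
for dev in $(ip -o link show | awk -F': ' '/cali/ {print $2}' | cut -d@ -f1); do
  echo "== $dev =="
  tc qdisc show dev "$dev"
done
# A shaped pod should show something like:
#   qdisc tbf 1: root refcnt 2 rate 35Mbit burst ... lat ...
# If only a default qdisc (e.g. noqueue) appears, the bandwidth
# plugin never applied the limit for that pod's interface.
```

If the qdisc is missing on the runs that came in fast, that would point at the plugin sometimes not being invoked, rather than at tbf misbehaving.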

My pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: curl-client-35m
  annotations:
    kubernetes.io/ingress-bandwidth: 35M
spec:
  securityContext:
    runAsNonRoot: true
  containers:
  - name: curl-client
    image: curlimages/curl:7.78.0
    command: ["sh", "-c", "curl -sSL -w 'Download speed: %{speed_download} bytes/sec\n'  https://a-large-file-i-can-download-from-my-pod -o /dev/null"]
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      seccompProfile:
        type: RuntimeDefault

But here's the output I'm getting (from kubectl logs pod/curl-client-35m):

[root@my-node ~]# k logs --tail=10 -f pod/curl-client-35m
Download speed: 10257544 bytes/sec -> 82.060352 Mbps
[root@my-node ~]# k logs --tail=10 -f pod/curl-client-35m
Download speed: 4171448 bytes/sec -> 33.371584 Mbps
[root@my-node ~]# k logs --tail=10 -f pod/curl-client-35m
Download speed: 4417035 bytes/sec -> 35.33628 Mbps
[root@my-node ~]# k logs --tail=10 -f pod/curl-client-35m
Download speed: 4808109 bytes/sec -> 38.464872 Mbps
[root@my-node ~]# k logs --tail=10 -f pod/curl-client-35m
Download speed: 5672415 bytes/sec -> 45.37932 Mbps
[root@my-node ~]# k logs --tail=10 -f pod/curl-client-35m
Download speed: 10938176 bytes/sec -> 87.505408 Mbps
[root@my-node ~]# k logs --tail=10 -f pod/curl-client-35m
Download speed: 5281018 bytes/sec -> 42.248144 Mbps
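As a sanity check on the numbers above, curl reports bytes/sec, so the conversion to Mbit/s is just ×8 / 10⁶. A small sketch (sample values copied from the logs above) that flags which runs exceeded the 35M limit, assuming 35M means 35,000,000 bit/s per Kubernetes' decimal quantity suffix:

```python
# Convert curl's bytes/sec readings to Mbit/s and flag runs over the limit.
# Sample values are copied from the kubectl logs above.
LIMIT_MBPS = 35.0

samples_bytes_per_sec = [
    10257544, 4171448, 4417035, 4808109, 5672415, 10938176, 5281018,
]

for bps in samples_bytes_per_sec:
    mbps = bps * 8 / 1e6
    status = "over limit" if mbps > LIMIT_MBPS else "within limit"
    print(f"{bps} bytes/sec -> {mbps:.6f} Mbps ({status})")
```

Run this way, the two ~82/87 Mbps samples stand out as roughly 2.4x the configured cap, while the middle runs sit within token-bucket burst tolerance of 35 Mbit/s.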

It looks like this sometimes works as expected, since the middle samples all land roughly at 35 Mbps. But the first and last samples are well above 35 Mbps, and I'm not sure why.

Context

I'm trying to limit the amount of bandwidth that can be used by my pods in a network-constrained environment.

Your Environment

mattwing commented 11 months ago

It's possible this is a calico issue rather than a canal issue, so I filed https://github.com/projectcalico/calico/issues/8187 as well.