projectcalico / bird

Calico's fork of the BIRD protocol stack

Bird CPU usage is almost always 100% #95

Open mithilarun opened 3 years ago

mithilarun commented 3 years ago

This is likely #77 all over again, but we're seeing the bird process running at 100% CPU almost all the time.

Expected Behavior

Bird should not consume an entire CPU to run.

Current Behavior

[Screenshot: Bird CPU usage]

Possible Solution

We were able to lower the CPU usage by hand-editing /etc/calico/confd/config/bird.cfg in the calico-node container and setting the following values:

protocol kernel {
  ....
  scan time 10;       # Scan kernel routing table every 10 seconds
}
....
protocol device {
  ...
  scan time 10;    # Scan interfaces every 10 seconds
}

These values are not configurable through confd, so I had to hand-edit the file.
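A minimal sketch of how one might apply this by hand and reload BIRD without restarting the pod (the pod name is a placeholder, and confd may regenerate the file and undo the edit on its next template sync):

# Raise the scan intervals inside a calico-node pod and reload the config.
# "calico-node-xxxxx" is a placeholder; adjust the namespace/pod name as needed.
kubectl -n kube-system exec calico-node-xxxxx -- \
  sed -i 's/scan time 2;/scan time 10;/g' /etc/calico/confd/config/bird.cfg
kubectl -n kube-system exec calico-node-xxxxx -- birdcl configure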


Context

Most calico-node pods in our K8s environment are not fully Ready:

# kubectl get pods -A  | grep calico-node | tail
kube-system              calico-node-tjfxz                                        0/1     Running            4          6d13h
kube-system              calico-node-tmq8d                                        0/1     Running            1          8d
kube-system              calico-node-ttq2v                                        0/1     Running            1          12d
kube-system              calico-node-txdgs                                        0/1     Running            3          7d
kube-system              calico-node-txs88                                        1/1     Running            1          4m39s
kube-system              calico-node-v4npm                                        0/1     Running            2          14d
kube-system              calico-node-v56lq                                        0/1     Running            2          7d7h
kube-system              calico-node-v7nfv                                        0/1     Running            33         14d
kube-system              calico-node-vggbt                                        0/1     Running            2          6d15h
kube-system              calico-node-zpvz2                                        1/1     Running            5          6d13h

Your Environment

# ip addr | wc -l
50794
# ip route | wc -l
380

We are using kube-proxy in ipvs mode because iptables mode was too inefficient for us.
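For context, in ipvs mode kube-proxy binds every Service VIP to the kube-ipvs0 dummy interface, which is likely where most of those ~50k addresses live. A quick way to check (assuming the default kube-proxy setup):

# Addresses bound to kube-ipvs0 (roughly one per Service VIP in ipvs mode),
# compared with the total address count across all interfaces.
ip addr show dev kube-ipvs0 | grep -c inet
ip addr | grep -c inet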

mithilarun commented 3 years ago

Verified that we have the fix mentioned in https://github.com/projectcalico/confd/pull/314. That is not helping.

sh-4.4# grep 'interface' /etc/calico/confd/config/bird.cfg
# Watch interface up/down events.
  scan time 2;    # Scan interfaces every 2 seconds
  interface -"cali*", -"kube-ipvs*", "*"; # Exclude cali* and kube-ipvs* but
                                          # kube-ipvs0 interface. We exclude
                                          # kube-ipvs0 because this interface
shivendra-ntnx commented 2 years ago

Do we have any workaround here?

mnaser commented 2 years ago

The patch above helped us a bit, but we're still seeing this heavily. When running tcpdump, I see a large amount of AF_NETLINK traffic.
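A rough way to confirm that the load is netlink churn from the scans rather than BGP traffic (assumes strace and pidof are available on the node):

# Attach to the bird process and watch its socket activity; the expectation is
# mostly AF_NETLINK sendmsg/recvmsg calls driven by the periodic kernel scans.
strace -f -tt -e trace=network -p "$(pidof bird)"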

dbfancier commented 2 years ago

We ran into the same problem in our production environment.

CPU usage of bird is usually around 30%, but occasionally spikes to 100% and stays there for a while.

We did a CPU hot-spot analysis using perf and found that the CPU time was concentrated in the functions if_find_by_name (about 86%) and if_find_by_index (about 11%).

So I sent SIGUSR1 to bird for a dump. It shows that iface_list has 30,000-40,000 nodes. For most nodes the index field is 0, the flags include LINK-DOWN and SHUTDOWN, and the MTU is 0.

These devices no longer exist on the host, but they remain in iface_list. Our workload is offline training, so many pods are created and deleted every day.
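For anyone who wants to reproduce this analysis, a rough sketch (assumes perf is available and that you can signal the bird process; the dump goes to bird's log/stdout, not to the terminal that sends the signal):

# Profile bird for ~30 seconds to find the CPU hot spots (if_find_by_name etc.).
perf record -F 99 -g -p "$(pidof bird)" -- sleep 30
perf report
# Ask bird to dump its internal state, including iface_list, to its log output.
kill -USR1 "$(pidof bird)"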

For now, I rebuild the list with the blunt workaround of killing bird.

I wonder if kif_scan() has a problem in its iface_list maintenance. We hope the community can help identify and fix the problem.

Thanks a lot.

mgleung commented 1 year ago

@mithilarun @shivendra-ntnx @mnaser any other details about your cluster setup you can share? I'm trying to see if this can be fixed by addressing https://github.com/projectcalico/bird/issues/102 or if we are looking at a separate issue.

mithilarun commented 1 year ago

@mgleung We had to tear the cluster down, but it looked quite similar to what @dbfancier reported here: https://github.com/projectcalico/bird/issues/95#issuecomment-1123096172

caseydavenport commented 1 year ago

This PR was merged to master recently: https://github.com/projectcalico/bird/pull/104

It looks like it has potential to fix this issue. We'll soak it and release it in v3.25 and hopefully we can close this then.

ialidzhikov commented 1 year ago

@caseydavenport, is there a chance to backport the fix to 3.24 and 3.23? When can we expect 3.25 to be released?

caseydavenport commented 1 year ago

Here are cherry-picks for v3.23 and v3.24:

v3.25 should be available by the end of the year.

dilyevsky commented 1 year ago

We were observing high Bird CPU usage and failing liveness probes on clusters with a large number of services, running kube-proxy in IPVS mode. What happens is that the kube-ipvs0 interface accumulates a large number of addresses, which get picked up by the kif_scan code (device protocol) -> ifa_update:

[Screenshot: Screen Shot 2022-11-14 at 4.46.26 PM]

I have a patch, https://github.com/projectcalico/bird/commit/6680cc9a7773625f500d7121b229cb465ef7c0f5, that ignores address updates for DOWN interfaces in the kif_scan loop, and it seems to improve this corner case. I can open a PR for it unless someone has a better idea of how to tackle this.

caseydavenport commented 1 year ago

@dilyevsky what version are you running? I thought in modern versions we exclude that interface from the BIRD direct protocol:

https://github.com/projectcalico/calico/blob/master/confd/etc/calico/confd/templates/bird.cfg.template#L116

dilyevsky commented 1 year ago

@caseydavenport v3.19, but it looks to be the case in the latest too. You're right - it's excluded in the direct protocol, but device still picks up all the interfaces, and there doesn't seem to be an interface option there - bird complains of a "syntax error" when you try to add one. My thinking was that if an interface is in a managed DOWN state there's no point in ingesting its addresses in device, so let me know if that makes sense to you.
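For context on the "managed DOWN state" point: kube-proxy creates kube-ipvs0 as a dummy interface and leaves it administratively DOWN even though every Service VIP is bound to it, which is easy to confirm on a node (sketch, assuming the default kube-proxy naming):

# kube-ipvs0 is a dummy device that normally sits in state DOWN, yet it carries
# one address per Service - exactly what the device protocol keeps re-scanning.
ip -d link show kube-ipvs0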

ialidzhikov commented 1 year ago

@caseydavenport thank you very much for the cherry-picks! Do you also plan to cut new patch releases for 3.23 and 3.24? Thank you in advance!

caseydavenport commented 1 year ago

My thinking was that if an interface is in a managed DOWN state there's no point in ingesting its addresses in device, so let me know if that makes sense to you.

This makes sense to me - I'd want to think about it a bit more to make sure there's no reason we'd want that for other cases. Maybe @neiljerram or @song-jiang would know.

Do you also plan to cut new patch releases for 3.23 and 3.24? Thank you in advance!

@mgleung is running releases at the moment, so he can chime in. I know things are a bit slow right now around the holidays so I doubt there will be another release this year, if I had to guess.

nelljerram commented 1 year ago

In the docs for BIRD 2, there is an interface option in the protocol device section, which suggests that it wouldn't be fundamentally problematic to skip some interfaces in the scan associated with the device protocol. To be more confident, it might help to track down when that option was added to the BIRD 2 code, to see whether other changes were needed along with it. Ahead of that, the idea of skipping DOWN interfaces does feel like it should be safe.

For another approach, I tried reading our BIRD (1.6 based) code to understand the interface scanning more deeply, but it mutates global state and is not easy to follow - would need to schedule more time to follow that approach properly.

mgleung commented 1 year ago

@ialidzhikov We currently don't have any patch releases for v3.24 and v3.23 planned since we are focusing on getting v3.25 out. Sorry we're a little behind on releases at the moment.

ialidzhikov commented 1 year ago

@mgleung, thanks for sharing. The last patch releases for Calico were at the beginning of November 2022. It feels odd that the fixes are merged but we cannot consume them from upstream. I hope that cutting the patch releases will be prioritised after the v3.25 release. Thank you in advance!

mgleung commented 1 year ago

@ialidzhikov, thanks for the feedback. I can't make any promises about an exact timeline, but if these are sought after fixes, then that makes a compelling argument to cut the patch releases sooner rather than later.

ialidzhikov commented 1 year ago

@mgleung, we now see that 3.25 is released. Can you give an ETA for the patch releases? Thanks in advance!

mgleung commented 1 year ago

@ialidzhikov if all goes well, I'm hoping to have it cut in the next couple of weeks.

mithilarun commented 1 year ago

@mgleung I see cherry-picks done for 3.23 and 3.24, but there isn't a release that we can consume yet. Do you have an ETA on when those might be available?

florianbeer commented 1 year ago

Just chiming in: we very likely have the same problem on a few of our clusters. All of them have a high number of pods being created and destroyed via Kubernetes Jobs.

Setting scan time for bird and bird6 does lower the CPU usage a bit, and the readiness probe of the affected calico-node pods goes green again.

Versions:

# calico-node -v
v3.25.0
# bird --version
BIRD version v0.3.3+birdv1.6.8

Settings:

# sed -i 's/scan time 2\;/scan time 10\;/g' /etc/calico/confd/config/bird{,6}.cfg

# birdcl configure
BIRD v0.3.3+birdv1.6.8 ready.
Reading configuration from /etc/calico/confd/config/bird.cfg
Reconfigured

# birdcl6 configure
BIRD v0.3.3+birdv1.6.8 ready.
Reading configuration from /etc/calico/confd/config/bird6.cfg
Reconfigured

axel7born commented 11 months ago

The issue doesn't seem to be completely resolved by https://github.com/projectcalico/bird/pull/104. When creating and deleting a large number of pods in a cluster, we've noticed that the number of interfaces visible with dump interfaces gradually increases over time.

This issue can be easily reproduced by creating a Kubernetes job with a large number of completions. However, it does take some time, and only a fraction of the created pods results in a permanent increase in the number of internal interfaces.
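A rough way to watch for this leak over time (assumes birdcl is available inside the calico-node container, as in the earlier comments; the log path follows the usual calico-node layout and may differ in other setups):

# Trigger BIRD's interface dump; the output goes to bird's log/stdout,
# not back to the CLI.
birdcl dump interfaces
less /var/log/calico/bird/current
# Compare with the number of interfaces the kernel actually has.
ip -o link show | wc -l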

Would it be sensible to remove all interfaces with the IF_SHUTDOWN flag by iterating over the interfaces?

Another suggestion could be to make the watchdog timeout in the bird config configurable via Calico, or set it to a reasonable default value (perhaps 10 seconds). This way, problematic processes would automatically be restarted.