Closed: samuela closed this issue 4 years ago
The same issue on Linux.
Environment: Ubuntu 18.04
Minikube version (use minikube version): v0.29.0
OS (from /etc/os-release): VERSION="18.04.1 LTS (Bionic Beaver)"
VM driver (cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
ISO version (cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): v0.29.0

What happened:
This is the consumption on a 6700HQ CPU with a brand-new installation, without any pods.
With KVM2 driver I get the same result.
Same for me on Gentoo Linux. I have tried the kvm2 and virtualbox drivers, but both have an idle CPU load of ~50%. I saw the same behavior with minikube 0.28.{0,2}.
Guess it needs to be profiled... Which Kubernetes version are you using? v1.10.0 (the default)?
https://github.com/kubernetes/community/blob/master/contributors/devel/profiling.md
@afbjorklund Yeah, I'm running the default version.
DevOps guy here - this is preventing some of our devs from working locally.
I am running Arch personally (minikube v0.29) and get 10-30% spikes with my full application running which seems acceptable, but others (Ubuntu 18, minikube v0.30) are getting near-constant 40% usage with no pods live on both the kvm2 and virtualbox driver.
@ianseyer are you using --vm-driver none on Arch? And the others the minikube.iso?
I am using kvm2.
I was playing around and found that docker alone creates a ~30% load on my host system. What I did was stop kubelet.service and restart docker.service so that all containers are gone. So it might not be only a kubernetes problem after all.
@corneliusweig I'm not sure about your system, but I just checked on my machine and I don't think that's what's going on. The resting docker CPU load is around ~6%.
macOS 10.14 MacBook Pro (13-inch, 2017, Two Thunderbolt 3 ports) 2.3 GHz Intel Core i5
I ran docker run -it --rm ubuntu and let it sit for a minute. I'm using Docker for Mac with the hyperkit driver.
Minikube version (use minikube version): v0.30.0
OS (from /etc/os-release): macOS 10.14
VM driver (cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
ISO version (cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): v0.30.0
Install tools: n/a
Others: n/a
Very interesting observation:
If you run minikube with the default CPU setup (which for me is 2 CPUs), you get a total idle consumption of ~30%, but if you change the setting to 6 CPUs, you get an average consumption of ~70%. And with 1 CPU core it is ~25% at idle. Fewer cores, less consumption. Paradox :)
And all these checks were done with no pods deployed.
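For anyone who wants to reproduce this comparison, it can be scripted roughly like below. This is a sketch only: `--cpus` and `--memory` are real minikube flags, but the host-side process name depends on the driver, and the exact idle percentages will vary by machine.

```shell
# Recreate the cluster with different CPU counts and eyeball the
# idle load of the VM process on the host after it settles.
for cpus in 1 2 6; do
  minikube delete
  minikube start --cpus="$cpus" --memory=2048
  sleep 300   # let the cluster reach its idle state
  # With the virtualbox driver the host process is VBoxHeadless;
  # with kvm2 look for qemu-system-x86_64 instead.
  top -b -n 1 | grep -E 'VBoxHeadless|qemu-system' || true
done
```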
I set up kubernetes inside an LXC container. That way I have some isolation and no cost for the virtualization. It's not as easy to set up as minikube, but if somebody wants to try it out, I have published my notes here https://github.com/corneliusweig/kubernetes-lxd.
Same here on Linux, with spikes that reach 90% usage. Tried with vbox and with kvm2.
➜ ~ minikube version
minikube version: v0.30.0
➜ ~ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
➜ ~
I also experience high cpu usage on my MacBook Pro (macOS Mojave 10.14.2). It also causes high CPU temperature which makes my fans run at full speed, including the annoying noise.
Minikube v0.30.0 VirtualBox 5.2.22
Around 30% of constant cpu usage here on my Macbook (i7 2.7 Ghz) after a clean setup.
Minikube: 0.32.0
Virtualbox: 6.0.0
K8s version: v1.12.4 (default on this minikube version)
I'm seeing around 30-50% CPU from the relevant VBoxHeadless process on the parent host, and only 1-5% CPU visible in top within the minikube VM (minikube ssh). Is it possible to upgrade the vbox guest tools after deploying the VM, to confirm it's not just an issue with mismatched versions there?
Note: I ask this as a question because I can't see any familiar linux package manager inside the VM, and it's missing basic tools (bzip2, tar) to even start the installation.
Still not fixed? I thought Kubernetes was mainstream 😏
Same here with minikube 0.33.1 and vbox 6.0 running on Mac OS 10.14.
Kind of a showstopper for pushing my colleagues' workflows in the direction of local Kubernetes...
Same with 0.33.1 on Windows + Hyper-V.
This is making development a pain. The CPU spikes cause functions/http requests to take a long time to execute (10+ seconds). And the spikes are very frequent.
Possibly related: https://github.com/kubernetes/kubernetes/issues/48948
For anyone else on Windows and possibly Mac. The official Docker Desktop has Kubernetes support built in now. I was able to move away from minikube to it and all the CPU issues went away.
It's as simple as enabling Kubernetes in Docker settings, everything else seems to work just as well as with minikube.
@andreialecu I'm using minikube and docker-for-mac too, and I have the same cpu issues on both sides
@antonmarin What I meant is that docker has built-in kubernetes so you do not need minikube any more. See https://docs.docker.com/docker-for-mac/#kubernetes
Also, I had additional issues from a port conflict: two services trying to open port 80, which resulted in an infinite loop inside the VM (in my case it was Rancher, plus a microservices framework I was developing, both contesting the same port).
the same issue is still observable with macOS 10.14.3, Docker Desktop 2.0.0.3 (31259), and minikube 0.35.0
Same issue. OSX 10.14.4, Docker-for-Mac Version 2.0.0.3 (31259), using the built-in k8s docker-for-desktop.
@andreialecu -- I just checked this on Windows + Hyper-V and as per the metrics I can see in Task Manager, at rest, a fresh minikube cluster hovers around 5% ~ 12%.
same issue with macOS 10.14.6 and minikube version v1.3.0
running microk8s consumes half the CPU that minikube takes on my mac. I believe it is a kube issue, but microk8s helps a bit for now.
With version 1.3.1 the idle load drops to 25-30%.
Minikube: 1.5.0. No resources, just a fresh install, and I let it sit for half an hour: 50% CPU per process core.
Similar here.
I might be wrong, but the high load seems to be caused by a kubectl process similar to this one being run very frequently.
$ ps -ef | grep kubectl root 12611 9179 23 10:37 ? 00:00:00 /usr/local/bin/kubectl apply -f /etc/kubernetes/addons -l kubernetes.io/cluster-service!=true,addonmanager.kubernetes.io/mode=Reconcile --prune=true --prune-whitelist core/v1/ConfigMap --prune-whitelist core/v1/Endpoints --prune-whitelist core/v1/Namespace --prune-whitelist core/v1/PersistentVolumeClaim --prune-whitelist core/v1/PersistentVolume --prune-whitelist core/v1/Pod --prune-whitelist core/v1/ReplicationController --prune-whitelist core/v1/Secret --prune-whitelist core/v1/Service --prune-whitelist batch/v1/Job --prune-whitelist batch/v1beta1/CronJob --prune-whitelist apps/v1/DaemonSet --prune-whitelist apps/v1/Deployment --prune-whitelist apps/v1/ReplicaSet --prune-whitelist apps/v1/StatefulSet --prune-whitelist extensions/v1beta1/Ingress --recursive
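A quick way to confirm this on your own setup is to sample the process table for a few seconds (a small helper sketch, meant to be run inside `minikube ssh`; on a machine without the addon-manager loop it simply reports 0):

```shell
# Sample the process table a few times and count how often an
# addon-manager `kubectl apply` invocation shows up.
count=0
for i in 1 2 3 4 5; do
  # The [k] trick keeps grep from matching its own process entry.
  n=$(ps -ef | grep -c '[k]ubectl apply') || n=0
  count=$((count + n))
  sleep 1
done
echo "kubectl apply sightings in 5 samples: $count"
```

With the default 5-second reconcile interval you would expect the count to be well above zero.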
My setup: OS: Linux Mint 19.2 (Tina) Hypervisor: VirtualBox Version 6.0.10 r132072 Minikube: v1.5.1 Docker: 18.09.7
I see similar to @MilanMasek, it's as if minikube is in an infinite loop, invoking commands like this:
/usr/local/bin/kubectl apply -f /etc/kubernetes/addons -l kubernetes.io/cluster-service!=true,addonmanager.kubernetes.io/mode=Reconcile --prune=true --prune-whitelist core/v1/ConfigMap --prune-whitelist core/v1/Endpoints --prune-whitelist core/v1/Namespace --prune-whitelist core/v1/PersistentVolumeClaim --prune-whitelist core/v1/PersistentVolume --prune-whitelist core/v1/Pod --prune-whitelist core/v1/ReplicationController --prune-whitelist core/v1/Secret --prune-whitelist core/v1/Service --prune-whitelist batch/v1/Job --prune-whitelist batch/v1beta1/CronJob --prune-whitelist apps/v1/DaemonSet --prune-whitelist apps/v1/Deployment --prune-whitelist apps/v1/ReplicaSet --prune-whitelist apps/v1/StatefulSet --prune-whitelist extensions/v1beta1/Ingress --recursive
over and over again, and this is enough to keep it at a constant 30% cpu usage. That said, 30% still seems too high: the above may be what's generating the load, but I don't think the load it's generating should be that high. Maybe kvm is slow on my machine or something.
The addon manager does seem like low-hanging fruit for reducing CPU consumption. The reconcile loop is constant. Maybe stick a sleep 15 in there somewhere ;-)
As a workaround, what I did was minikube ssh and then edit /etc/kubernetes/manifests/addon-manager.yaml.tmpl, changing the value of TEST_ADDON_CHECK_INTERVAL_SEC to "60".
Then I did kubectl delete pod -n kube-system kube-addon-manager-minikube, which caused minikube to restart the pod with the 60-second interval.
It still consumes CPU for ~3 seconds every minute, but that is something I can live with; I can now use minikube without noticing huge CPU spikes.
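Condensed into commands, the workaround looks roughly like this. It's a sketch based on the comment above: it assumes the interval appears in the template as `value: "5"`, so inspect the file before running the `sed`.

```shell
# 1. Enter the minikube VM.
minikube ssh

# 2. Inside the VM: raise the addon-manager check interval to 60s.
#    Check the template first; the exact "value:" line may differ.
sudo sed -i 's/value: "5"/value: "60"/' \
  /etc/kubernetes/manifests/addon-manager.yaml.tmpl
exit

# 3. Back on the host: recreate the addon-manager pod so it picks
#    up the new TEST_ADDON_CHECK_INTERVAL_SEC value.
kubectl delete pod -n kube-system kube-addon-manager-minikube
```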
@wojciechka this is interesting, what was your idle cpu usage before and after this change and the specs of your machine?
Just tried this on Fedora 31, but haven't done a scientific test. I think I can see the surges every 5 seconds without the @wojciechka tweak, and see them go away (or diminish) after making it. I still think minikube uses more cpu than microk8s. What minikube gives over microk8s is the easy ability to have multiple clusters that can be started/stopped with state saved via profiles.
I actually found my problem: it was swapping. The default of 2GB memory, with no minikube options tuned, was not enough for anything to happen, even creating a namespace, without swapping.
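If you suspect the same, bumping the VM memory and checking swap activity inside the VM is cheap to try. A sketch only: 4096MB is an arbitrary example value, not a recommendation from this thread.

```shell
# Recreate the cluster with more memory than the 2GB default.
minikube delete
minikube start --memory=4096

# Check whether the VM is dipping into swap while you work.
# Non-zero "used" in the Swap row suggests the old memory limit
# was the bottleneck.
minikube ssh -- free -m
```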
@wojciechka thanks for sharing this valuable workaround with us! Is there a way to configure minikube to always start up the cluster with this configuration? That would be awesome!
Here is some additional information on this.
Regarding where I am running this: I am running minikube on Linux. The host machine is Debian 9 Stretch with an i3-2100 CPU (2 cores, 2 threads at 3.1 GHz) and 32GB of RAM. The host runs some background tasks, but it is not under anything close to a heavy load; the load average is below 0.5 when minikube is not started.
Minikube has 4 CPUs and 16GB of RAM assigned to it, and I do not run any workloads in minikube during these tests.
When TEST_ADDON_CHECK_INTERVAL_SEC is set to 60, on the host I am seeing around 40% of a single core/thread (%CPU reported by top) being used by the VirtualBox process while the cluster is not really doing anything. The load average reported from inside the VM is around 0.3, which is definitely ok.
Output from top via a minikube ssh session:
PID USER PR NI VIRT RES %CPU %MEM TIME+ S COMMAND
2995 root 20 0 1962.9m 103.8m 7.2 0.6 33:08.40 S /var/lib/minikube/binaries/v1.14.3/kubelet --allow-privileged=true --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/b+
3420 root 20 0 465.7m 324.8m 4.6 2.0 22:55.80 S kube-apiserver --advertise-address=192.168.99.100 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/var/lib/minik+
2464 root 20 0 1507.4m 85.6m 3.2 0.5 19:49.17 S /usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /e+
3435 root 20 0 10.1g 65.4m 2.8 0.4 12:03.58 S etcd --advertise-client-urls=https://192.168.99.100:2379 --cert-file=/var/lib/minikube/certs/etcd/server.crt --client-cert-auth=true --d+
3460 root 20 0 212.5m 101.8m 2.2 0.6 8:29.18 S kube-controller-manager --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/c+
6835 root 20 0 139.5m 32.1m 0.6 0.2 1:41.11 S /coredns -conf /etc/coredns/Corefile
5019 root 20 0 136.2m 31.7m 0.4 0.2 0:34.85 S /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=minikube
6575 root 20 0 139.5m 32.4m 0.4 0.2 1:42.63 S /coredns -conf /etc/coredns/Corefile
14213 docker 20 0 23.5m 2.6m 0.4 0.0 0:00.70 S sshd: docker@pts/0
10 root 20 0 0.0m 0.0m 0.2 0.0 1:10.12 R [rcu_sched]
1373 root 20 0 87.1m 32.0m 0.2 0.2 0:06.00 S /usr/lib/systemd/systemd-journald
2473 root 20 0 2623.0m 44.0m 0.2 0.3 2:46.85 S containerd --config /var/run/docker/containerd/containerd.toml --log-level info
3453 root 20 0 138.7m 37.0m 0.2 0.2 0:52.14 S kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true
6560 root 20 0 435.1m 52.1m 0.2 0.3 0:12.63 S /storage-provisioner
12184 root 20 0 0.0m 0.0m 0.2 0.0 0:00.29 I [kworker/3:1-events]
When I change the setting to the current minikube default of 5 seconds for TEST_ADDON_CHECK_INTERVAL_SEC, average CPU usage of the VM goes up to around 70-80% of a single core/thread (%CPU reported by top) being used by the VirtualBox process. The load average inside the VM is around 0.4-0.6, so it seems ok, but it is definitely higher.
Here's output from top in minikube ssh with the interval set to 5 seconds:
32304 root 20 0 138.7m 64.3m 6.2 0.4 0:00.31 S /usr/local/bin/kubectl apply -f /etc/kubernetes/addons -l kubernetes.io/cluster-service!=true,addonmanager.kubernetes.io/mode=Reconcile +
3420 root 20 0 465.7m 324.8m 5.6 2.0 23:38.33 S kube-apiserver --advertise-address=192.168.99.100 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/var/lib/minik+
2995 root 20 0 1962.9m 103.8m 5.4 0.6 33:56.49 S /var/lib/minikube/binaries/v1.14.3/kubelet --allow-privileged=true --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/b+
3435 root 20 0 10.1g 66.6m 3.2 0.4 12:27.43 S etcd --advertise-client-urls=https://192.168.99.100:2379 --cert-file=/var/lib/minikube/certs/etcd/server.crt --client-cert-auth=true --d+
2464 root 20 0 1507.4m 83.5m 2.6 0.5 20:14.77 S /usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /e+
3460 root 20 0 212.5m 101.8m 2.4 0.6 8:45.72 S kube-controller-manager --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/c+
2473 root 20 0 2623.0m 43.9m 0.8 0.3 2:49.71 S containerd --config /var/run/docker/containerd/containerd.toml --log-level info
6575 root 20 0 139.5m 32.6m 0.6 0.2 1:45.68 S /coredns -conf /etc/coredns/Corefile
3355 root 20 0 106.3m 6.7m 0.4 0.0 0:46.19 S containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/af42bf1f14cacabcd6e58392e+
6835 root 20 0 139.5m 32.1m 0.4 0.2 1:44.27 S /coredns -conf /etc/coredns/Corefile
1 root 20 0 40.9m 8.4m 0.2 0.1 3:58.69 S /sbin/init noembed norestore
9 root 20 0 0.0m 0.0m 0.2 0.0 0:05.60 S [ksoftirqd/0]
10 root 20 0 0.0m 0.0m 0.2 0.0 1:11.63 I [rcu_sched]
5475 root 20 0 136.2m 46.9m 0.2 0.3 0:55.57 S /go/bin/all-in-one-linux --log-level debug
14213 docker 20 0 23.5m 2.6m 0.2 0.0 0:01.19 S sshd: docker@pts/0
30307 root 20 0 17.6m 2.8m 0.2 0.0 0:00.23 S bash /opt/kube-addons.sh
30616 root 20 0 0.0m 0.0m 0.2 0.0 0:00.02 I [kworker/0:0-events]
While top does not report significant usage by any single process, the overall usage is clearly higher.
I also see kubectl apply being invoked very often when running with the current default 5-second interval.
I have also tried setting TEST_ADDON_CHECK_INTERVAL_SEC to 30, and from what I have measured this also drops CPU usage, to around 40-50% of a single core/thread (%CPU reported by top) as seen by the host.
The interval of 5 seconds was added by the following commit:
Update addon-manager to v9.0.2, disable master negotiation and increase reconcile frequency
Perhaps it would be a good idea to revert the change to 5 seconds, or at least change it to a slightly longer interval like 30 seconds?
This was added 2 months ago, so I suspect the problem I am reporting does not overlap with the original reports from October 2018.
I am not sure if this should be tracked under same issue or separate issue.
Did what @wojciechka proposed, together with a memory increase to 4 GB from the default 2 GB (as suggested by @jroper). It drops the CPU usage of my laptop (2c/4t) from 70-80% to 40-50%, but it does not solve the problem. The CPU is still boosted above idle clocks and the fan runs at high revs.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Since the issue is marked as stale, my solution was to end up with microk8s in VBox. Works like a charm, no abnormal CPU consumption.
I went with KinD (kubernetes in docker). Highly recommended as it works well on linux and macos (haven't tested windows, although it is supported). Bonus is you can use it for CI/CD as well.
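For anyone wanting to try the KinD route mentioned above, the basic flow is short. A sketch: `kind create cluster` and the `kind-kind` kubeconfig context are standard KinD behavior, but versions move quickly, so check the KinD docs for current install instructions.

```shell
# KinD runs the whole cluster as Docker containers, so there is no
# separate VM process sitting idle on the host.
kind create cluster

# The kubeconfig context is named after the cluster ("kind" by default).
kubectl cluster-info --context kind-kind

# Tear down when done.
kind delete cluster
```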
Similar story here, I went with microk8s or docker-for-mac, they don't have the high resting CPU usage.
/lifecycle frozen
FWIW, recent versions of minikube are much better. Idle CPU on my mac with Hyperkit is about 1/3 of a CPU (33%). I think the major culprit was the addon manager, which was removed in 1.7.x.
33% is still bad. I am having a constant 30% with the latest version of minikube on vmware. It was 50% with virtualbox.
It appears most of the CPU usage is kube-apiserver responding to lease renewals, etcd being queried by kube-apiserver, and kubelet running its routine tasks. I attempted to adjust some config and had some small success.
Note: Kubelet doesn't allow adjusting its housekeeping period (which I suspect is an intensive task). See https://github.com/kubernetes/kubernetes/issues/89936
I'm posting this in the interest of sharing my experience but I ultimately gave up on using minikube. The CPU usage from Kubernetes on a macOS laptop is just too high to roll out to a team (even trying to ignore how wasteful xhyve is). Your success may vary and any improvements will disappear after restarting minikube.
These changes could potentially be applied with ~/.minikube/files/ (see the adding-files note), but I didn't attempt this. Similarly, these could be adjusted in kubeconfig.go and with custom Kubernetes configuration (e.g. --extra-config 'controller-manager.leader-elect=false', which probably requires v1.9.2), but I didn't try these approaches.
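The --extra-config route mentioned (but not tested) in the comment above would look roughly like this at cluster creation time. A sketch only: these are real minikube/Kubernetes flag names, but whether they fully eliminate the election traffic is untested here.

```shell
# Disable leader election for the single-node control plane at
# start time, instead of editing manifests inside the VM.
minikube start \
  --extra-config=controller-manager.leader-elect=false \
  --extra-config=scheduler.leader-elect=false
```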
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-controller-manager.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
/var/lib/kubelet/config.yaml
Minikube is run as a single instance that should not have multiple controller-managers or schedulers, yet these leader elections cause a significant amount of CPU usage in the kube-apiserver. The lease/renew/retry durations could instead be raised (e.g. 60s, 55s, 15s) if one wants elections to remain enabled. Add this to kube-controller-manager.yaml and kube-scheduler.yaml:
- --leader-elect=false
We are likely not running a significant number of watches, and I suspect the watch cache is an expensive feature. Add this to kube-apiserver.yaml:
- --watch-cache=false
Kubelet has a few operations that it needs to perform routinely to stay healthy as a node. These changes reduce those frequencies (and make the controller-manager aware of it).
Adjust these in the kubelet config.yaml and then restart kubelet with systemctl daemon-reload && systemctl restart kubelet:
httpCheckFrequency: 30s
nodeStatusUpdateFrequency: 30s
syncFrequency: 60s
Add to kube-controller-manager.yaml:
- --horizontal-pod-autoscaler-sync-period=60s
- --pvclaimbinder-sync-period=60s
Hey everyone! We've made some overhead improvements in the past few months.
On my machine, with the hyperkit driver, resting CPU of an idle cluster has dropped by 52%, to an average of 18% of a core.
I'm going to close this issue since I think it's been addressed, but if anyone is still seeing high CPU usage with the latest minikube version, please comment and reopen this issue by including /reopen in your comment. Thank you!
Is upgrading to minikube version v1.12.3 enough to fix this?
I upgraded, restarted, and performance-wise I see no difference whatsoever.
Same here. How to make use of this fix?
@r3econ @sanarena yeah, upgrading should be enough.
If you're still seeing high CPU usage, could you open a new issue for it? We can track it there.
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Please provide the following details:
Environment: macOS 10.13.6
Minikube version (use minikube version): v0.29.0
VM driver (cat ~/.minikube/machines/minikube/config.json | grep DriverName): hyperkit
ISO version (cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): v0.29.0

What happened: I just installed and set up a fresh minikube cluster. CPU usage is pegged at ~50% even though no pods have been launched and nothing is happening on the cluster. I've observed the same behavior across both hyperkit and VirtualBox.
I ran minikube addons enable heapster to get some insight into where all the CPU is going. It looks like kube-apiserver-minikube and kube-controller-manager-minikube are the primary offenders.

What you expected to happen: I expected the CPU usage to fall to basically zero at rest. I understand that this may just be the baseline CPU usage for some of these services (liveness checks, etc.), but when running in minikube mode it would really be nice to slow down the CPU consumption so that we don't kill all of our laptop batteries.
How to reproduce it (as minimally and precisely as possible): Create a minikube cluster on macOS with the appropriate versions.
Output of minikube logs (if applicable): n/a
Anything else we need to know: n/a