Closed stephenc closed 3 years ago
How do you disable Kubernetes?
Settings --> Kubernetes --> Uncheck Enable Kubernetes
And is it enabled by default when you install Docker, or do you enable it manually first?
On OSX it was not enabled by default
So, if I'm on OSX, this cannot be the cause of my problem, as I never enabled Kubernetes.
I'm on macOS as well; I enabled it myself since I use k8s most of the time.
Can confirm. My CPU usage was through the roof, and disabling my local k8s cluster has things looking normal again.
$ docker --version
Docker version 18.09.2, build 6247962
K8s version: v1.10.11
same issue when:
no issue when:
Considering the above, I believe the issue is linked to the volume mapping feature. Perhaps Docker is synchronising the underlying volumes (e.g. -v /host/path:/container/path) too aggressively?
To my understanding, this problem is exclusive to Mac. This link explains some of the development efforts: https://docs.docker.com/docker-for-mac/osxfs/
Can confirm the sudden CPU pegging issue on my setup.
Using docker-compose, running various combinations of stacks (nginx/python/rust/go/php etc.).
It happens intermittently; killing the current stack (docker-compose down or Ctrl+C) usually brings utilization back down (I'd say 8/10 times; in some cases I had to kill the Docker for Mac client).
Here's my info
Using *.raw; Kubernetes: off; CPUs: 4; Mem: 4 GB; Swap: 1 GB; Mojave; MBP 2019 15". Screencap with my machine and client versions: https://imgur.com/RWTOYdT
To add, I also dev the same stacks/code-bases on an X99 (5930k) running docker-ce on ubuntu bionic and have yet to get the CPU pegging issue.
UPDATE 2019-06-22
I've gone back to using docker-sync for my read/write attached volumes (which in the application I'm currently developing was approx. 5 volumes across 5 of 8 containers) and have had no issues with runaway/pegged CPU since. I'll try to look into whether volumes were the issue (APFS is a pretty junk FS). I'd suggest anyone try using docker-sync for their volumes (NOTE: it's not compatible with Linux, so you have to maintain a separate docker-compose YAML for Mac + Ubuntu). (I'd also suggest using RVM so you don't have to install any global Ruby packages to use docker-sync.)
I'll update in a few days...
UPDATE: 2019-06-24 I've tried the same app/setup on my 2014 MBP and I'm seeing the same, equally good results as on the 2019 MBP. Dev'd for approx. 3 hrs with no issue (previously I'd usually see issues every 10-30 min). Screencap with info: https://imgur.com/rZKzn04 I'll update in a few days... (when I get some downtime I'll also see if I can profile the FS to check whether volumes are causing the runaway/pegged CPU)
UPDATE: 2019-07-11
No problems after changing to docker-sync instead of volumes mounted directly from the macOS file system.
I've noticed that when I do use volumes mounted directly from macOS and there is any moderate amount of IO (e.g. compiling code or changing git branches that add/remove 40+ files), then 9 times out of 10 hyperkit starts pinning my CPU until I kill the containers with the mounted volumes.
UPDATE: 2019-07-12
I noticed that any moderate amount of IO, even outside the context of a volume but within the "File Sharing" paths of your Docker configuration, causes hyperkit to peg the CPU. E.g. I have /home/{user}/Projects as my "File Sharing" path in the Docker config; I did not have any volumes mounted, unzipped a directory with about 15000 files into my projects path, and that caused the hyperkit CPU pegging. I was able to replicate it multiple times.
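For anyone trying to reproduce, here is a minimal sketch of the kind of file-system event burst described above (the directory name and file count are illustrative; on a Mac you would point this inside a File Sharing path and watch hyperkit's CPU in Activity Monitor):

```shell
# Simulate unzipping a large archive: create a burst of small files
# inside a directory that (on macOS) would live under a File Sharing path.
REPRO_DIR="${TMPDIR:-/tmp}/fsevent-burst"
mkdir -p "$REPRO_DIR"
for i in $(seq 1 1000); do
  echo "payload $i" > "$REPRO_DIR/file_$i.txt"
done
echo "created $(ls "$REPRO_DIR" | wc -l) files in $REPRO_DIR"
```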
Tried multiple suggestions from this thread. Something has seemingly worked, as hyperkit is now at 30%, down from 300%. I did the following:
Suggestions from @peterataylor's link, especially
Removed the Docker.raw as suggested by @weburnit
(Kubernetes was disabled the whole time on my machine)
Thanks @borttrob - this is a PITA for me right now. Will try your suggestions.
I have the same issue; unfortunately the only thing that helps is kill -9. I noticed that this happens when VMware Workstation is running at the same time as Docker on the same host. Could be that Docker had allocated 4 CPUs and my VM (Ubuntu) also had 4 CPUs, leaving nothing available.
UPDATE (16-08-2019) : It seems this issue is not happening to me anymore. Engine: 19.03.1 Compose: 1.24.1 Kubernetes: v1.14.3
There are two other issues that seem to be similar to this at https://github.com/docker/for-mac/issues/2582 and https://github.com/docker/for-mac/issues/1759
Edit: Web of issues: Possibly related to: https://github.com/docker/for-mac/issues/1759 https://github.com/docker/for-mac/issues/2582 https://github.com/docker/for-mac/issues/2926 https://github.com/docker/for-mac/issues/3499
MacBook Pro 15' from 2019: What helped me:
- disable kubernetes
- stop docker
- delete Docker.raw (/Users/username/Library/Containers/com.docker.docker/Data/vms/0/Docker.raw)
- https://markshust.com/2018/01/30/performance-tuning-docker-mac/
- start docker
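The steps above can be sketched as a script. This is an illustrative sketch, not an official procedure: the Docker.raw path is Docker Desktop's default location, and the macOS-only calls (osascript, open) are guarded so the script does nothing harmful elsewhere. Deleting Docker.raw destroys all local images and volumes.

```shell
# A sketch of the reset sequence above (adjust DOCKER_RAW for your user).
DOCKER_RAW="${DOCKER_RAW:-$HOME/Library/Containers/com.docker.docker/Data/vms/0/Docker.raw}"

# 1. Stop Docker Desktop (macOS only)
if command -v osascript >/dev/null 2>&1; then
  osascript -e 'quit app "Docker"'
fi

# 2. Delete the disk image -- this destroys all images and volumes!
if [ -f "$DOCKER_RAW" ]; then
  rm "$DOCKER_RAW"
  echo "removed $DOCKER_RAW"
fi

# 3. Start Docker Desktop again (macOS only)
if [ "$(uname)" = "Darwin" ]; then
  open -a Docker
fi
```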
For me, simply choosing - "Reset" -> "Reset disk image" in the Docker for mac GUI resolved the issue. I'm running 2.0.5.0 (35318).
I was seeing 380% CPU usage and the frequent slowdown of desktop applications.
Looked inside the VM - the top three processes were using respectively 23, 10, and 11% of VM CPU - they were kube-apiserver, kube-controller-manager, and etcd - so it wasn't a container I'd deployed chewing up resources.
Seems like disk storage is an area containing some pervasive and persistent bugs.
Not sure if I'm helping, but what I had here was:
I stopped K8s and all processes/cores went back to 1-5%; docker-compose is super fast.
It's not really a solution but it helps with the space heater issue in my case!
I have had the same problem; finally switched to docker-sync (little setup needed). CPU usage for my docker-compose app went from 450% to 30% while using even more cores.
Ran into the same problem, too, with a disk image of the .raw type. My solution was: quit Docker, force quit hyperkit within the process manager, then restart Docker.
Hyperkit showed it had been running for 20+ days, suggesting it buggered up before my reset and never successfully stopped itself. Force quit allowed it to stop.
This is not a real fix; it's just a temporary workaround that worked for me.
I'm adding my +1 to this issue as I've seen it as well.
thanks @whoisstan http://docker-sync.io/ helped me as well :)
I followed the instructions starting at the point where I needed to install Xcode, because I had some Ruby problems after running sudo gem install docker-sync. Namely:
mkmf.rb can't find header files for ruby at /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/include/ruby.h
I opened Xcode (which had to install something) and then went to preferences -> Locations -> Command-Line-Tools. The instructions I found were here: https://stackoverflow.com/a/56473855/170881
I was then able to run docker-sync start after I followed the instructions at https://docker-sync.readthedocs.io/en/latest/getting-started/configuration.html. I had to create a docker-sync.yml file and fill it with the volumes that are mounted. For example, the service services/matching now had the name matching-sync in that config. I then had to update the docker-compose.yml to mount the volume from an "external" volume:
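The change looked roughly like this. This is a sketch of the standard docker-sync pattern using the matching-sync name from above, not the exact files (the container path /app and the src path are illustrative):

```yaml
# docker-sync.yml (excerpt): declare the synced volume
syncs:
  matching-sync:
    src: './services/matching'

# docker-compose.yml (excerpt): mount the synced volume as external
services:
  matching:
    volumes:
      - matching-sync:/app:nocopy
volumes:
  matching-sync:
    external: true
```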
The syncing of the files seemed too slow after I added all the services, so I extended the docker-sync.yml configuration to exclude directories with a lot of files, like node_modules and such.
And now I have a CPU load of ~19% with Docker Desktop 2.0.0.3 stable, with 6 CPUs, 8 GB memory, and swap set to 3 GB, on a 2.9 GHz i7 with 16 GB RAM.
+1. Latest macOS (just updated) 10.14.6 - and Docker 2.0.0.3 (31259).
I confirm what @guice said (https://github.com/docker/for-mac/issues/3499#issuecomment-512976590): it is hyperkit. I notice similar results even when I use the hyperkit that comes with Docker from minikube, so I have only been starting minikube when I actually need to work with something. Luckily for me it is only consuming 30-40% with nothing running other than hyperkit, but that's still high for no running pods.
I finally managed to tame my Docker desktop app. Hyperkit used to take up around 100 - 300% of CPU and has now been brought down to spikes of 60% only when actively browsing pages.
What helped me - though I'm sure it'll be different for everyone (as it has been for me with all the proposed 'solutions' so far):
By default, Docker for Mac comes with /Users, /Volumes, /private, and /tmp directories available to bind mount into Docker containers. Remove all of these.
To do this, go to Docker > Preferences > File Sharing and remove said directories.
Kudos for the solution go to this page (I only used the quoted portion, but anyone else may benefit)
As this seems to be helping some people I'm adding some other sidenotes:
Hyperkit will no longer always be at high levels (I had it sitting at 300% sometimes and around or over 100% all the time, even when I wasn't using my Docker image; this was slowing down my MacBook dramatically and killing battery life).
Now it only uses CPU resources (90% or below) when I'm actually using the container (visiting the pages), and it dies down again when the pages aren't in use.
@belgianwolfie 's fix is not an actual fix if you need to bind mount a directory inside of a container that resides within one of those file sharing directories.
@belgianwolfie unfortunately this doesn't work for me, i get:
Cannot change shared directories Cannot read response
Ah, seems my Docker UI was in a bad state. After quitting and starting Docker again I was able to remove the directories.
I was getting this when running a Node app and a MongoDB instance with docker-compose. Changing the volumes to use a delegated configuration worked for me and resulted in a drastic drop in CPU usage. See this article: https://docs.docker.com/docker-for-mac/osxfs-caching/#delegated
This is my docker-compose.yml after the change, note the :delegated after the volume binding:
version: '3.4'
services:
mongo:
image: mongo:3.4-jessie
volumes:
- ./data:/data/db:delegated
ports:
- "27017:27017"
web:
build:
context: .
target: dev
volumes:
- .:/code:delegated
ports:
- "3000:3000"
- "9229:9229"
depends_on:
- mongo
I was worried whether this might affect my workflow with nodemon for restarting after file changes, but it's all working fine!
hi kyle,
i have a similar setup for development to yours, except that I have 5 node components, not just 'web'. Even with delegated on the fastest MacBook Pro I easily go north of 500% CPU usage; with docker-sync it's less than 50%, though of course docker-sync has a whole bunch of sync problems as well. I am using a shell script to manually clean and start docker-sync - it seems clumsy, but still better than the CPU being in a constant high-speed sprint:
fswatch app/ | while read num ; do docker-sync clean && docker-sync start ; done
It's all very annoying to say the least.
stan
In my case it was caused by a postgresql container with a shared data directory on the host. After eliminating this, the CPU usage went from a consistent >300% to a very satisfying <20%.
I think I may have found a reason for high CPU usage in certain cases: swapfile usage.
I have a docker-for-mac setup with (built-in) Kubernetes and many pods on a MacBook Pro, and CPU was constantly hitting 200% and sometimes 400% (on 4 cores), making everything unresponsive.
I opened a shell on the VM itself (using docker run --privileged --pid=host -it alpine:3.8 nsenter -t 1 -m -u -n -i sh) and ran top to see what was going on. I noticed 25% virtual CPU on the swap service, which made me suspicious.
I then raised my docker's RAM to 8GB and dropped Swap to 512MB, and now CPU usage (while not totally gone) is more like 60% (out of 400%).
I don't know how swap is implemented in Hyperkit/Docker-for-mac, but as both swap and disk performance are sensitive issues, could this somehow be related?
I had this problem as well, and it went away after ensuring that native bindings were installed. You can trigger a native bind install by running npm rebuild (even if you use yarn).
@shadowbrush How did you eliminate this?
@artemirq I'm using a postgres database, and linking its data dir to a host dir caused the CPU usage. Excerpt from docker-compose.yml:
High CPU (using linked data dir on host):
postgres:
...
volumes:
- ./data/db:/var/lib/postgresql/data
CPU issue resolved (using a docker volume):
postgres:
...
volumes:
- db:/var/lib/postgresql/data
...
volumes:
db:
At least that's one way to deal with what caused the CPU issue in my situation. I ended up running the database on the host entirely, since it's used by other docker-compose projects as well.
I have similar issue
Unfortunately, none of these worked for me. Still getting 100%+ CPU usage. Are there any plans for a permanent solution?
I’m starting to think the permanent solution is to uninstall Docker 🙄
Ha we both know that's not going to happen, at least not on OSX - you're going to have to find a way to run Docker as part of your development environment without being driven insane.
Perhaps we're expecting too much of Docker for Mac. Maybe create a virtual environment with VMware?
With VMware/CentOS 7 I'm observing (via iotop) a write speed > 200 MB/s on an aftermarket SSD - it's worth noting that a lot of that may be memory buffering, I haven't investigated...
(dd if=/dev/sda of=/dev/foo count=1000000000)
Another issue with Docker for Mac is the lack of build secret support - it's painful to build stuff on OSX, where you have to resort to all sorts of dumb hacks (and I bet we'll see vulns in the future) to share SSH or other credentials with your docker build.
Then again, do you want to buy VMware licenses for your team? Perhaps it's more secure anyway?
I'll mention at this point, the experience of running VirtualBox under OSX is underwhelming - you don't know what pain is until you're experiencing a kernel panic every 3 hours - maybe the quality has improved in the 3 years since I ditched it, I don't know.
What kind of IO performance do you see if you execute a similar microbenchmark inside a container running under Docker for Mac? Would executing a command like that lock up the CPU?
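For anyone wanting to compare, here is a minimal sketch of such a write microbenchmark; the size and filename are arbitrary. Run it in a bind-mounted directory and again in a container-local one to compare throughput:

```shell
# Write 16 MB with an fsync at the end and let dd report throughput.
dd if=/dev/zero of=./ddtest.bin bs=1M count=16 conv=fsync
wc -c ./ddtest.bin   # 16777216 bytes
rm -f ./ddtest.bin
```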
At my job they are using docker and most developers are using Mac, so I'm having to support them. They are all seeing constant 700% CPU usage from docker. The containers themselves are all using 2-20%. Docker just isn't usable on Mac IMO.
My current fix is to open the Docker preferences/settings window in OSX. By just doing that my docker process usage is reduced drastically. If I close this window, the process usage increases to 190% again.
I can confirm that opening Docker preferences decreases CPU usage to 15%-25%. And closing preferences window increases CPU usage by Docker (not com.docker.hyperkit) back to ~200%.
Ok a solution! Win!
confirm @kjella 's "open settings" solution too
Not sure if trolling 🤔
Opening the setting does not help for me. Only running on Linux as opposed to Mac helps.
This is a nightmare! Mac users can get more attention from Valve which is bad by default.
Odd, for me this issue is almost completely gone after i removed those file sharing directories mentioned higher up here. Every now and then it still appears (like once every 2 weeks), but after I restart my docker sync stack it is gone again. Working with docker is super smooth on mac for me now.
If I run watchman inside a container on a directory with more than a hundred or so files, I get the 700% usage constantly from hyperkit. If I use chokidar to watch the exact same directory, I get 20%.
None of the proposed solutions in this issue work for me. I just can't use Watchman in Docker for Mac.
Even when I'm not getting the 700% usage, the filesystem under Docker for Mac is just painfully slow. I can see why all the developers who use Mac at my job are reluctant to use Docker. The developers using Linux have a significantly better experience.
@ohjames at the risk of going off topic, what do the Mac devs use instead?
@jeff-h right now they suffer. They wait two minutes to test every single code change while the Linux devs wait less than a second. Although I'm finishing off a change to ts-node that caches every source file included outside of a bind mount; that way they will suffer a little less.
You guys are not using docker-sync? On the mac it's a must i think.
@ncri It slowed down our first load horrendously. Our new solution using a modified ts-node provides the same benefits on reload without the initial hit.
@ohjames oh, okay, that is odd; for me the first load is very fast. It probably depends on the workload and also the machine you are using. I run a docker-compose file with 9 services which start up quickly with docker-sync.
Summary
This is an issue with a lot of comments, rumors of fixes, and confusion over what the exact underlying bug is.
False leads
Here is what we know, because it can get lost in the comments.
Current hypothesis
There is an edge case in the filesystem synchronization code and when triggered it causes the receiver process in the Docker VM to go into an infinite loop and suck all the CPU cycles of one CPU core.
As of 2020-05-28, the leading candidate for this hypothesis is around this line of code: https://github.com/moby/hyperkit/blob/79c6a4d95e3f8a59f774eb66e3ea333a277292c6/src/lib/mirage_block_ocaml.ml#L422 and see this comment: https://github.com/docker/for-mac/issues/3499#issuecomment-623960890
Mitigations
Things that can cause the infinite loop seem to involve syncing of file system events between OS-X and the Docker VM. The fewer file system shares you have, the less likely this is to occur. Similarly, if you can switch your filesystem mounts to :cached, fewer notifications are sent back and forth between the Docker VM and OS-X, so there is less chance of hitting the issue. The native Docker K8s implementation seems to involve a lot of this kind of sync, so with it enabled you are more likely to trip over the issue.
Stuff I have found that makes life easier: I used to share /Volumes, but once I removed that the frequency of occurrence was greatly reduced. I leave /private and /tmp because some of the projects I work on have tests that use TempDirs as docker volume mounts.
Hope this helps.
P.S. Remember that the actual containers you are running may be causing their own CPU usage spikes... that is not what this bug is about. Those spikes are likely the result of bugs in your containers, or of volume mounts configured so that your containers see excessive file system changes. This bug is about Docker CPU spiking when not running any containers or pods at all.
P.P.S. I have heard interesting things about using Multipass to run k3s on OS-X without docker at all: https://medium.com/@zhimin.wen/running-k3s-with-multipass-on-mac-fbd559966f7c but that would, AIUI, force me to use something like Kaniko to actually build docker images within the k3s, and that gets really ugly really fast. My current recommendation is to use k3d, as my usage pattern of Docker with k3d only spikes the CPU about once a week for me.
Original bug report
Expected behavior
com.docker.hyperkit should not turn my Mac into a space heater with megawatt output when stopping all running containers.
Actual behavior
The com.docker.hyperkit process jumps to use all available CPU resources multiple times per day, sometimes when only vaguely interacting with docker. All docker commands lock up, e.g. docker ps.
Information
Diagnostic logs
This seems relevant
A second log after restarting docker from previous one. Only docker commands executed in between restart and this lock-up:
And the logs:
Steps to reproduce the behavior