z1r0- opened this issue 2 years ago (status: Open)
@z1r0- thanks for the report. Could you upload a set of diagnostics and a matching screenshot? I'd like to confirm exactly which process is leaking (and view the command-line etc)
@djs55 Sure. Diagnostics ID: 72F4E822-C849-4117-981D-06B38A63C1BA/20220106145448
Used docker cp to let it grow (like in the example above) and stopped/removed the container.
I can confirm that, let me know if you need any additional info/testing/logs.
Pitching in as I have to restart docker multiple times a day to work around these issues
I'm currently on the experimental build 4.6.0 (75045) running 12.2.1 on an M1 Pro with VirtioFS enabled, but the behavior is the same on the GA 4.5.0 version.
Diagnostics ID AF628F15-7B04-4806-BEE7-33B5827B55A8/20220309231313
75324 has been well behaved for me for the last few days (M1 and VirtIO FS enabled)
Ah, I completely missed that there's a 75324 build. I just downloaded it, ran the docker cp test, and the issue is sadly still there.
75324 has been well behaved for me for the last few days (M1 and VirtIO FS enabled)
It completely depends on what you are doing. On days where I only need to code, it works without any issues and I can let it run for days. But when I need to import db dumps > 15GB it's a huge issue. I started copying the gzipped dumps into the db container and importing them there, instead of using the local psql client, to reduce the issue, but it still remains a problem.
Something else: I'm also using mutagen for better IO performance, and it somehow seems unaffected by this issue. So far, "docker cp" and shared ports are the only triggers I've identified.
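For anyone who wants to reproduce the docker cp case, here's a minimal sketch of what I've been doing. The container name and file size are placeholders of mine, not from anyone's actual setup; run it with Virtualization.framework enabled and watch the Docker process in Activity Monitor.

```shell
# Create a test file and copy it into a running container; on affected builds
# the Docker process RSS grows by roughly the size of the copied file.
# "mydb" is a hypothetical container name; the file is kept small here for speed.
dd if=/dev/zero of=/tmp/leaktest.bin bs=1048576 count=16   # 16 MiB test file

if command -v docker >/dev/null 2>&1; then
  # host -> container copy; this is the step that grows the Docker process
  docker cp /tmp/leaktest.bin mydb:/tmp/leaktest.bin || echo "copy failed (is mydb running?)"
  docker exec mydb rm -f /tmp/leaktest.bin 2>/dev/null
else
  echo "docker not available; skipping copy step"
fi
```

Repeating the copy with a bigger file (or in a loop) makes the growth obvious faster.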
M1 and VirtIO FS enabled, Docker version 4.6.0 (75818). After a high-disk-IO app runs for 5 minutes, the Docker process ends up using 6 GB of RAM. This is the high-disk-IO container I was running: lscr.io/linuxserver/transmission:arm64v8-latest
Checking in as this issue is still present on 4.6.1.
Docker process using 103GB (and swapping a lot of course).
Diagnostics ID: AF628F15-7B04-4806-BEE7-33B5827B55A8/20220331104836
Also without virtualization.framework!
I am having the same issue with Docker Desktop 4.7.0 (77141) Diagnostic ID: 79C10953-31E7-4308-9955-4FFEF6A3129F/20220418100515
What actually is Virtual Machine Service?
Having exactly the same issues on my m1, sadly.
Yup, still present on 4.8.1
This happens for me as well on Docker Desktop 4.8.2 on an M1 Mac running macOS 12.4.
If I enable Virtualization.framework and docker cp or wget a 100 MB file into the container, the Docker process will also grow its memory footprint by 100 MB.
Seems to be related to network traffic - I was able to move the same file into the container via a bind mount without the same thing happening.
Still happening on 4.9.1 as well, on an M1 Mac running macOS 12.4.
Any sort of network traffic seems to grow the memory usage over time. Running things like cp or dd does not grow the memory.
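That matches what I see too. Here's a hedged sketch to compare the two paths side by side; "web" is a hypothetical running container, and the sizes are arbitrary. Watch the Docker process in Activity Monitor between the two steps.

```shell
# Contrast in-VM disk IO (memory stays flat) with host<->VM traffic
# via docker cp (Docker process grows by roughly the transfer size).
dd if=/dev/zero of=/tmp/net.bin bs=1048576 count=64   # 64 MiB test file

if command -v docker >/dev/null 2>&1; then
  # Step 1: pure disk IO inside the VM; does not involve host traffic.
  docker exec web sh -c 'dd if=/dev/zero of=/tmp/disk.bin bs=1048576 count=64' \
    || echo "exec failed (is web running?)"
  # Step 2: host -> VM transfer; this is the path that grows the Docker process.
  docker cp /tmp/net.bin web:/tmp/net.bin || echo "cp failed (is web running?)"
else
  echo "docker not available; this sketch needs a running container"
fi
```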
Still an issue on 4.10.0 for M1 Mac running macOS 12.4.
Still happening on 4.10.0 as well on an M1 Pro Mac running macOS 12.4
Very annoying issue. An Intel Mac with macOS 12.4 is affected as well on 4.10.0 (82025).
I see qemu-system-aarch64 as my highest memory-usage process. Memory usage keeps growing over the course of the week, like a memory leak. Is this the same issue?
M1 Macmini Mac OS 12.4
Docker 4.10.1 M1 Pro with 32GB MacOS 12.4 Docker itself does not use a lot of memory, but the swap goes up to 60gb and it goes down after Docker restart. CPU usage is also really high when this happens. Tested with and without the experimental features.
@djs55 Is there anything more needed for this issue to be acknowledged? Let us know if there's anything we can do - I have several people on my team struggling with this issue.
Docker 4.10.1 (82475) Engine: 20.10.17
Mac Studio M1 Ultra - 128 GB
I used to run PostgreSQL in docker as well, but docker would eventually crash. I now run PostgreSQL directly on the host.
Docker 4.10.1 M1 Air with 16GB MacOS 12.4
My RAM filled up, my laptop crashed, and the broken folder icon appeared... I lost all my projects and documents. Worst experience ever! Please fix ASAP.
Seems like disabling Virtualisation framework resolves the issue.
Don't know why I got 5 dislikes, but to work around the issue right now you have to disable the Virtualization framework, or restart Docker once you notice swap usage.
Without the Virtualization framework the qemu process always takes about 2.5 GB, and there is no growth in memory usage.
On Apple Silicon, the read performance on mounted volumes is absolutely trash without VirtioFS accelerated directory sharing, for which you need to enable the new Virtualization framework.
I haven’t noticed it; I’m just running redis and postgres for my app, and it works pretty fine. But I’ll double-check it. Thanks
@artemijan You got downvoted because it was clearly stated in the original post that the issue happens when Virtualization.framework is enabled (there's several reasons people might be doing that, VirtioFS / disk performance probably being the most common)
Seeing high RAM usage on an Intel Mac running macOS Monterey 12.3.1 (21E258), this thread helped with debugging VM processes:
I can only guess that the virtual machine itself got stuck in a state where it couldn’t properly start but couldn’t stop either. Next time it happens, try this:
cd ~/Library/Containers/com.docker.docker/Data/vms/0
nc -U console.sock
Then press ENTER to get the console prompt. Press CTRL+U to delete the special characters that appear in the console, like:
^[[51;19R[32
Run top or another monitoring command to find which process uses the CPU. The console may continuously show you error messages or warnings. If there aren’t too many and they aren’t relevant, press ENTER and ignore them. If there are too many, you should probably deal with them. You can also try the debug shell:
cd ~/Library/Containers/com.docker.docker/Data
nc -U debug-shell.sock
You can use it similarly to the console, except it won’t show you error messages over the prompt. However, some commands work differently here, like runc list, but you can see the same processes. I think it just sees the resources and processes in the VM.
What ultimately helped is dropping Docker resources:
After it restarted, that freed up swap by ~10GB. Could probably also do sudo purge, but try to avoid that. Virtual Machine Service is now consistently at 2.00 GB after stopping containers. Hyperkit is being used by minikube, so it is irrelevant here.
Disabled gRPC FUSE, enabled VirtioFS, telemetry was already disabled. Not sure if the former two had much of an effect on CPU/RAM usage vs. IO which wasn't tested.
Would prefer to not hamstring containers on a fairly specc'd out Mac mini, but satisfied for now.
Same issue on my M1 max.
A few moments later...
All the data downloaded by the container is counted as memory, remains counted as real memory, and is finally compressed by the operating system as it tries to survive. It all ends up in swap until the host machine crashes.
Copied from my other issue:
When the container writes into big file on a mounted path, memory usage of Docker keeps going up, and overshoots the limit set in 'Resources', effectively taking the entire Mac down until it's force closed
I've set 8GB memory limit in Resources. Container is mounting a few things:
volumes:
- ~/docker/config:/config:cached
- /Volumes/dockerroot:/data:delegated
The container is trying to write a 12 GB file (downloaded/streamed over time) into /data. As the file gets written, memory on the host shoots up drastically. To my naive eyes it looks like it's trying to keep the entire content in memory, instead of periodically flushing the state to disk.
Memory on the Docker process itself is going up, not on the virtual machine service
docker stats is reporting nothing strange, besides that NET I/O is very similar to the memory used by the process:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
98b265435c76 mycontainer 8.79% 14.95MiB / 7.75GiB 0.19% 2.61GB / 75.8MB 0B / 336kB 14
Basically as previous posters said, this just climbs and climbs until the host starts randomly killing things. Glad I was able to finally see what's going on as this usually happened on my mac mini while I was sleeping 😅
Seems to be a serious memory issue with the new Virtualization framework + VirtioFS.
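Since this usually blows up overnight, a tiny sketch for keeping an eye on it. The process name "Docker" is an assumption on my part (adjust for your build); the helper itself is generic and works with both BSD and GNU ps.

```shell
# Print the resident set size (RSS) of a PID in MB.
rss_mb() {
  ps -o rss= -p "$1" | awk '{ printf "%d\n", $1 / 1024 }'
}

# Example usage (commented out): sample the Docker Desktop process every minute.
# "Docker" as the process name is an assumption; check Activity Monitor first.
# pid=$(pgrep -x Docker | head -n 1)
# while sleep 60; do echo "$(date +%T) $(rss_mb "$pid") MB"; done

# Demo on the current shell so the helper is visibly working:
echo "current shell uses $(rss_mb $$) MB"
```

Logging this to a file overnight would show exactly when the growth happens.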
I updated to macOS Ventura yesterday and retested (with docker cp, as described in my initial post), and the issue seems to be gone? Can anyone else confirm this?
Edit: Also just tried to restore a bigger database with local psql to postgres running inside container. NET I/O shows ~ 20GB for the db container, but no leaking processes to be found. 🎉
I really think it's fixed.
The problem still exists for me. I updated to Ventura on M1, running 4.12.0 (85629). Stopped after Docker ate 40GB.
Problem still present in 4.13.1 and Mac M1.
I really think it's fixed.
Unfortunately, I was wrong. The problem is still present :\
Not fixed here either, on Ventura 0.1. Still need to manually clean up Docker once in a while on my server because the memory usage is becoming too huge. So don't upgrade to Ventura hoping it'll make things better 😇
Yeah - can confirm what @dvcrn said.
I have the same issue on an Intel-based MacBook Pro. After stopping all containers, it remains using full memory.
Ventura 13.0 (22A380) M1 mac mini
The same problem
This will not solve the real problem, but I have temporarily worked around it for my needs with some cron jobs, so that I don’t have to monitor daily and manually restart the Docker app or the host computer when the Docker process’s memory stacks up above 80 GB or so. Just wanted to share; perhaps it saves someone frustration until the issue gets resolved.
Run crontab -e in Terminal and add:
0 16 * * * osascript -e 'quit app "Docker Desktop"'
1 16 * * * killall Docker
2 16 * * * open -a Docker
0 4 * * * osascript -e 'quit app "Docker Desktop"'
1 4 * * * killall Docker
2 4 * * * open -a Docker
This will restart Docker a couple of times each day, gracefully first if possible before killing the process a minute later. It adds some minutes down time each day but for me it’s worth it.
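A possible variation on the same idea: instead of restarting on a fixed schedule, only restart when Docker actually crosses a memory threshold. The 8 GB limit and the "Docker" process name are my assumptions; the restart commands are the same ones used in the crontab above. Run it from cron every few minutes.

```shell
#!/bin/sh
# Restart Docker Desktop only when its resident memory exceeds LIMIT_MB.
LIMIT_MB=8192   # assumed threshold; tune to taste

# Find the RSS (in MB) of the macOS "Docker" app process, if present.
rss=$(ps axo rss=,comm= | awk '$2 ~ /Docker$/ { print int($1 / 1024); exit }')

if [ -n "$rss" ] && [ "$rss" -gt "$LIMIT_MB" ]; then
  echo "Docker at ${rss} MB, restarting"
  osascript -e 'quit app "Docker Desktop"'
  sleep 60
  killall Docker 2>/dev/null
  open -a Docker
fi
```

This trades the fixed downtime windows for restarts only when they're actually needed.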
I have tried all three file systems in version 4.15.0 without any difference regarding the leak. Running latest stable Ventura on Mac Mini M1
My M1 mac mini was acting wonky today, and during investigation noticed Docker consuming 26GB of RAM with bunch of swapping happening. I just started to use Docker yesterday on this machine. Running macOS Ventura 13.0.1.
Restarting helped for now, but clearly this is a serious issue when running Docker on Mac.
I might've been too hasty with this comment. Container I ran had definitely some memory leak bug and since it got fixed I haven't seen so severe memory usage - at least so far.
Jumping in with a similar problem. I have just noticed the process Virtual Machine Service always taking 8.01 GB RAM (~3.5 GB real memory). Restarting Docker kills the process, but when Docker starts it goes back to 8.01 GB. This is before I even try to run a container.
macOS 13.1 Beta (22C5050e) docker desktop version v4.14.1
@dstoyanoff I think that is a different issue. Virtualization Framework allocates the full amount of memory that you specify for docker at start, but this issue is about the actual Docker process having a memory leak.
4.15.0 seems to have resolved it! Virtualization Framework is enabled by default, but if I enable VirtioFS for file sharing, Docker fails to start. If you are stuck and unable to change the file-sharing implementation, try:
vi ~/Library/Group\ Containers/group.com.docker/settings.json
"useVirtualizationFrameworkVirtioFS": false,
M1 MacMini MacOS 12.6
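If editing the file by hand with vi feels risky, here's a sketch that makes a backup and flips the flag with a proper JSON parser. The path and key name are the ones from the comment above; everything else (the env var override, python3 availability) is my assumption.

```shell
# Flip useVirtualizationFrameworkVirtioFS to false in Docker Desktop's settings.
# DOCKER_SETTINGS can be overridden for testing; a backup is made before editing.
S="${DOCKER_SETTINGS:-$HOME/Library/Group Containers/group.com.docker/settings.json}"
if [ -f "$S" ]; then
  cp "$S" "$S.bak"
  python3 -c '
import json, sys
path = sys.argv[1]
with open(path) as f:
    cfg = json.load(f)
cfg["useVirtualizationFrameworkVirtioFS"] = False
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
' "$S"
  echo "updated $S (backup at $S.bak)"
else
  echo "settings.json not found at $S"
fi
```

Restart Docker Desktop afterwards for the change to take effect.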
Apologies, false report. After leaving it for an hour, it's now taking up 46GB of memory.
M1 MacMini, MacOS 13.0.1. I ended up with ~2TB of SSD data usage because of swap 😢
I'm also seeing this issue on my Mac mini - Intel
macOS 13.1 Docker Desktop version v4.15.0
Same here: MBP M1 16GB, swap used 27GB. My container is generating report files, 13MB all of them, so not that much.
macOS 13.1 Docker Desktop version v4.15.0
:-(
On a Mac M1 Pro with the same issue. It uses ~8 GB out of 16 GB and never comes down, even if I stop all of my Docker containers.
MacOS 13.1
MacBook M1 Pro
docker desktop 4.15
docker engine : 20.10.21
With Virtualization.framework enabled the Docker process (not Virtual Machine Service) grows in memory-usage and doesn't free it up properly. After some testing I noticed, that it seems to be related to transfers between host and docker-container.
Easiest example: copy a 1GB file using "docker cp" and the memory-usage will grow by 1GB. But happens with other stuff too (like importing a db with local client over a host-bind port).
Example:
I've seen the discussion about "Memory" and "Real Mem" in Activity Monitor and first thought "okay, Real Mem returns to normal after some time, so it will be fine", but then I noticed that MacOS starts swapping when the "Memory" value gets too big, even though "Real Mem" is just some MB.
After restarting Docker (from the Docker context menu) everything goes back to normal again, so I'm currently restarting Docker every time memory usage gets too big. I usually have 20+GB of free memory and no swap usage if I don't forget to restart Docker, so it's definitely it.
I'm using the most recent version: Docker Desktop 4.3.2 (72729) and MacOS 12.2 Beta (but I had same behaviour with 12.0, 12.1).