Open · Smith8154 opened this issue 1 year ago
I'm seeing the same behavior here.
After several days of normal dev work inside Docker, things start to fail because the host runs out of file handles. Even native macOS apps start crashing.
Comparing the output of lsof -Pn inside the Docker VM (using this) and on the macOS host, one can see there are tens of thousands of files opened on the host by the Virtual Machine Service that are not open anymore in the Docker VM.
As a workaround, I just found that using gRPC FUSE doesn't trigger this behavior. It's only with VirtioFS that files remain open by the Virtual Machine Service.
Docker Desktop 4.21.1 (which now uses VirtioFS by default) shows the same behavior:
1. Run npm install with a shared node_modules.
2. docker ps -a shows 0 containers.
3. Virtual Machine Service keeps a handle to every file opened in step 1, and holds it until the Docker VM is restarted.

This has been hitting me too with VirtioFS enabled. A minimal reproduction just involves touching or creating lots of files:
→ lsof +c0 -n | awk '{print $1}' | sort | uniq -c | grep com.apple.Virtualization
36 com.apple.Virtualization.Virtua
→ mkdir -p testfiles && docker run -v ./testfiles:/testfiles --rm -it ubuntu bash
root@8eb8a09639f2:/# seq 1 100000 | split -l 1 -a 5 -d - testfiles/file
split: testfiles/file57256: Too many open files in system
root@8eb8a09639f2:/# exit
exit
→ lsof +c0 -n | awk '{print $1}' | sort | uniq -c | grep com.apple.Virtualization
51569 com.apple.Virtualization.Virtua
This results in a lot of seemingly random broken behaviour on the host.
I've gotten reports that this is a problem in 4.22.1 and 4.23.0 as well.
Makes VirtioFS practically unusable and actually negatively impacts the host machine after some time.
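If you want to watch the leak grow in real time while reproducing, the lsof pipeline from the transcript above can be wrapped in a small polling loop. This is only a sketch based on that command; the one-second interval and the process-name prefix are assumptions that may need adjusting on your machine.

```bash
#!/usr/bin/env bash
# Poll how many file handles the macOS Virtualization service (the process
# backing the Docker VM) is holding on the host, once per second.
while true; do
  count=$(lsof +c0 -n 2>/dev/null | awk '$1 ~ /^com\.apple\.Virtualization/' | wc -l)
  printf '%s  open handles: %s\n' "$(date '+%H:%M:%S')" "$count"
  sleep 1
done
```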
Just for those who stumble upon this: the current workaround is to switch from VirtioFS to gRPC FUSE.
I've submitted feedback to Apple and reported it to Apple support. https://developer.apple.com/forums/thread/741572
Reporting in with Docker 25.0.3 on M1 Pro Mac with Sonoma 14.2.1. This is still a problem. Anything on this from the Docker Mac maintainers?
Same issue with Docker@4.27.2 on a MacBook M1 Pro, Sonoma 14.3.1.
I'm facing the same problem with an M1 Max, macOS 14.4.1 (23E224), Docker version 4.30.0 (149282) with the VirtioFS setting.
Docker 4.32.0 on an M1 Max MacBook Pro still has this problem.
Hi, I'm on uname -a: Darwin MacBook-Pro.local 23.5.0 Darwin Kernel Version 23.5.0: Wed May 1 20:17:33 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6031 arm64
Still having the issue. Switched to gRPC. Solves the problem but is way slower.
Hardware Overview:
Model Name: MacBook Pro
Model Identifier: Mac15,11
Model Number: MRW33ZE/A
Chip: Apple M3 Max
Total Number of Cores: 14 (10 performance and 4 efficiency)
Memory: 36 GB
System Firmware Version: 10151.121.1
OS Loader Version: 10151.121.1
There's only one way to fix this at the moment: take your IT dollars and switch to Linux desktops; it's what we are doing. Docker on Linux does not suffer from this issue, and you don't need Docker Desktop to boot! If we stop giving Docker and Apple our money, they will eventually listen and fix Docker on Mac once and for all.
Pretty sure this is not a problem with Docker but with VirtioFS, in combination with the comically low default file descriptor limit in macOS.
If you don't want to switch your whole dev platform just because of this issue, you can always raise the file limit manually, e.g.
sudo launchctl limit maxfiles 65536 1048576
I've been running with this for nearly half a year now without any problems. It needs System Integrity Protection to be disabled though, so maybe not everyone's cup of tea.
To persist it, you can edit the values in /Library/LaunchDaemons/limit.maxfiles.plist (a sketch of that file follows below).
You can check the currently effective limit with launchctl limit maxfiles.
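For anyone who wants the higher limit to survive a reboot, here is a minimal sketch of how that limit.maxfiles.plist could be created, assuming the commonly used limit.maxfiles label and the 65536/1048576 values from the command above; adjust to taste.

```bash
# Write a launchd daemon that raises maxfiles at boot.
sudo tee /Library/LaunchDaemons/limit.maxfiles.plist >/dev/null <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>limit.maxfiles</string>
    <key>ProgramArguments</key>
    <array>
      <string>launchctl</string>
      <string>limit</string>
      <string>maxfiles</string>
      <string>65536</string>
      <string>1048576</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
  </dict>
</plist>
EOF

# Load it without rebooting, then verify the effective limit.
sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
launchctl limit maxfiles
```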
That is a bandaid at best, and depending on what you are running, this will only buy you a bit of time before you run into the limit once again. In my case, upping the file limit took it from breaking within 2 minutes, to breaking in about 10 minutes. Not saying it's not worth pointing out, but this really needs to be addressed by the Docker team. At this point, I have given up hope that the Docker team cares about this issue at all, considering they haven't replied to this issue since it was opened over a year ago.
Of course it's a workaround, but it's running stable for me for months with multiple heavy yarn/npm operations in Docker volumes daily.
And of course having a real solution would be preferable. Until that happens we can give up, use bandaids or even sledgehammers (like using a whole different platform altogether). Endless possibilities. 😁
It's not Docker's problem, but have they been using their relationship with Apple to fix it? Has it been a topic of conversation in any of their meetings? They are charging us for software that doesn't work well; they should at least try to work with Apple to fix it.
@bsousaa any updates on this issue?
still...happening...
Yep. Makes using VirtioFS impossible. Really sucks.
Expected behavior
Docker should release the file after it is no longer needed.
Actual behavior
Docker appears to be holding on to files even after the container accessing the file is stopped. The only way to release the file is to restart the Docker engine.
I am passing a network share through to my Plex Docker container, but after a short time of the container running, I begin to see these issues on my macOS host: fts_read: Too many open files. I have increased the open file limit by following this guide. When I check the file limits using launchctl limit maxfiles, this is the output: maxfiles 524288 524288.

Using Activity Monitor to check what files the Virtual Machine Service has open, this is what I see when no containers have been started:

After starting my Plex container for a few minutes and then stopping it, I see that the Virtual Machine Service has 1,010 files open, all of them Plex configuration files and media files on the network volume, despite no containers running. Below is a snippet of lines 855-906 of the open files. Again, no containers are running. The only way to release the lock on these files is to restart the Docker service.
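For anyone who prefers the terminal over Activity Monitor: the lsof invocation posted earlier in this thread can be narrowed to the Virtualization process, and Docker Desktop can be bounced from the shell to release the handles. A sketch only; the example media path and the app name passed to osascript/open are assumptions.

```bash
# Count the handles the Virtualization service is holding on the host.
lsof +c0 -n | awk '$1 ~ /^com\.apple\.Virtualization/' | wc -l

# See which shared files it still holds (the path is just an example).
lsof +c0 -n | awk '$1 ~ /^com\.apple\.Virtualization/' | grep '/Volumes/media'

# Restarting Docker Desktop is currently the only way to release them.
osascript -e 'quit app "Docker"'   # graceful quit
open -a Docker                     # start it again
```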
Information
Output of /Applications/Docker.app/Contents/MacOS/com.docker.diagnose check
Steps to reproduce the behavior