Closed: Nereg closed this issue 8 months ago
I believe I may be experiencing the same problem while trying to "docker push" some images
One workaround I use is to use docker from WSL directly. It is not that hard and already has all the drives mapped.
Docker Desktop is slow and still buggy (network and bind mounts) on Windows. It's not all Docker's fault. WSL2 with native Linux support is actually very stable and performant.
@Nereg would you kindly elaborate on the process and setup that you use for a workaround?
Sure. I just use WSL to run Docker. It is already pretty well integrated into Windows, so I can just type wsl in the directory where my project is and it opens a shell where I can use Docker! The files are already there, and you can reach the WSL machine just through the localhost address.
So to get Docker running in WSL you need to:
1) Install a distro into WSL (I chose Ubuntu)
2) Install Docker there (roughly the commands sketched below)
3) Done!
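For anyone who wants the concrete commands, here is a minimal sketch of step 2 inside an Ubuntu distro, assuming you install the distro-packaged Docker Engine (docker.io) rather than Docker Desktop; package names and service handling may differ on other distros or on newer WSL builds with systemd enabled:

# inside the Ubuntu WSL shell
sudo apt-get update
sudo apt-get install -y docker.io        # Docker Engine daemon and CLI from the Ubuntu repos
sudo usermod -aG docker "$USER"          # run docker without sudo (reopen the shell afterwards)
sudo service docker start                # older WSL images have no systemd, so use the init script
docker run --rm hello-world              # quick smoke test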
@Nereg, I too am using WSL to invoke the docker commands. Can you please clarify whether the Docker version you are using in your WSL workaround is the Docker Engine and NOT Docker Desktop (see link and link)?
If so, that would make sense to me as I am using the WSL2 integration with Docker Desktop and experiencing the problem.
I encountered the same situation; memory usage soared to 10 GB at its highest point.
Docker Desktop version: 4.21.1 (114176)
I had the same situation: 32 GB of RAM was used and Docker failed due to virtual memory overflow. Docker Desktop 4.25.0, Docker Engine 24.0.6.
I'm having the same issue with the latest Docker Desktop versions: the backend exe went up to 20 GB while my vmmem stayed fixed at the 4 GB I specified in the .wslconfig file. I suspect the process is internally buffering something that is never released.
In AppData\Local\Docker\log\vm\init.log
I saw numerous lines like the following:
[2024-01-11T00:41:01.035614871Z][init][I] [2024-01-11T00:41:01.035576153Z][init.socketforward][I] error copying: read unix /run/host-services/docker.proxy.sock->@: read: connection reset by peer
[2024-01-11T00:41:01.280894525Z][init][I] [2024-01-11T00:41:01.280814949Z][init.socketforward][I] error copying: write unix /run/host-services/docker.proxy.sock->@: write: broken pipe
[2024-01-11T00:41:01.281192403Z][init][I] [2024-01-11T00:41:01.281117076Z][init.socketforward][I] error copying: write unix /run/host-services/docker.proxy.sock->@: write: broken pipe
[2024-01-11T00:41:01.281554706Z][init][I] [2024-01-11T00:41:01.281527848Z][init.socketforward][I] error copying: read unix /run/host-services/docker.proxy.sock->@: read: connection reset by peer
They're logged repeatedly, about 20 lines every 10 seconds. I suspect something that runs periodically is leaking.
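If you want to watch how often those lines show up, tailing the same log from PowerShell works (path taken from the comment above):

Get-Content "$env:LOCALAPPDATA\Docker\log\vm\init.log" -Tail 20 -Wait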
I'm encountering the same problem: the backend consumes more than ten gigabytes of memory after a while, especially when using the docker pull command.
Same story here: the Docker Desktop backend consumes a lot of memory. Definitely too much.
It's not just that the usage is high; the fact that consumption keeps growing hints that something is likely leaking or not being released.
Tried Docker Desktop for Windows v4.27.1 with 40 Linux-based containers with rather heavy internet traffic. com.docker.backend.exe RAM usage increased over time:
- right after start ~ 500 MB
- in 4 hours ~ 4 GB
- in 8 hours ~ 8 GB
Docker was restarted by monitoring software when com.docker.backend.exe RAM usage exceeded 8 GB.
What monitoring software are you using? I also want Docker Desktop to restart automatically after a period of time.
A custom in-house solution.
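For reference, a watchdog like that can be quite small. Here is a minimal PowerShell sketch of the idea (not the actual in-house tool): it polls the working set of com.docker.backend.exe and restarts Docker Desktop when it crosses a threshold. The 8 GB limit and the install path are assumptions; adjust them for your machine.

# Hypothetical watchdog sketch, not the in-house tool mentioned above.
$limitBytes = 8GB
$dockerDesktop = "C:\Program Files\Docker\Docker\Docker Desktop.exe"   # assumed default install path
while ($true) {
    $backend = Get-Process -Name "com.docker.backend" -ErrorAction SilentlyContinue
    if ($backend) {
        $usage = ($backend | Measure-Object WorkingSet64 -Sum).Sum
        if ($usage -gt $limitBytes) {
            # Quit Docker Desktop and the backend, then start Docker Desktop again.
            Stop-Process -Name "Docker Desktop", "com.docker.backend" -Force -ErrorAction SilentlyContinue
            Start-Sleep -Seconds 15
            Start-Process $dockerDesktop
        }
    }
    Start-Sleep -Seconds 60
}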
Docker Desktop for Windows v4.27.1, 50 Linux-based containers with rather heavy internet traffic: com.docker.backend.exe RAM usage reached 8 GB in ~6 hours.
I disabled the "SBOM indexing" feature and that significantly decreased the memory usage of "Docker Desktop Backend" (I guess some indexing was in progress after an image had been rebuilt, which made my computer completely unusable), and it does not seem to increase over time anymore either.
Thanks, but I tried disabling it on my side; it doesn't help and the memory still grows.
I can confirm the same issue here. I set a limit of 6GB in the .wslconfig, but the Docker backend continues to use double that or more. Took this from Process Explorer. Might look at the backend with Process Monitor to see if there's anything, but yeah. This is definitely an issue.
From process monitor. Just a whole lot of this:
I went into Docker Desktop's settings, disabled the SBOM indexing and RAM usage dropped to less than 1.5 GB. This did include a restart of the engine but it appears to be holding steady. Will continue to monitor.
I've been monitoring this thread for a few months now, and I think the recent update (4.27.2) did something, but I don't think it's fully fixed yet.
Before, I tried disabling SBOM indexing too, but it had no effect; com.docker.backend.exe still acted as if it had a memory leak. Then the latest update got pushed, and it's still leaking, but much more slowly now. I'm not even sure it's leaking at all, since the usage is so low.
Before, after Docker had been running for about 4 hours, it would be at around 3 to 4 GB of RAM. Since I applied the update, after around 8 hours it's still under 200 MB, with SBOM indexing enabled. I hope this isn't placebo on my end, but I really wish this issue gets fixed. I only run one container, to self-host a ZeroTier controller; consuming almost 8 GB of RAM with just that one container is insane.
Good info to know. I've been watching too, and it does seem like the memory usage is creeping back up again. I don't really know what else to do to make this stop; it just needs to be fixed by Docker. I have 32 GB of RAM in this PC, 2 GB is used for Home Assistant, 6 GB is the max for WSL2, and then Docker just takes up whatever it wants until it's almost maxed out. I do not want to get to the point where I feel like I need to restart the computer on a schedule just to clear it. It's actually crazy that this issue has been going on as long as it has, especially with all the reports.
I do not want to get to the point where I feel like I need to restart the computer on a schedule just to clear it.
wsl --shutdown brings the memory usage back down for me. You may not need to reboot, at least.
Yeah, that causes Docker to pop up saying that WSL has stopped running, which requires manual interaction to restart it. If that also causes the Docker Engine to restart, then that would also bring the memory usage down. It would be easier and less interactive to just restart the whole PC.
Sadly, I think it still leaks even after the recent update, but compared to before I can live with it now. After almost 2 days up and running it's now at around 200 MB; before, it would be around 4 GB after just 4 hours of running.
About you needing to restart the PC to release the resources: I was doing that before, since restarting Docker Desktop does not release com.docker.backend.exe. But what I found is that you have to manually quit Docker Desktop and launch it again; that closes all processes related to it, including the Docker backend.
I even created a batch file to force kill Docker Desktop and the Docker backend and then launch it again; a sketch is below. It's a bit inconvenient for me to restart the machine, since I have dedicated game servers also running on it. Maybe this will help you too, but I'm not sure whether it only works for my use case.
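For what it's worth, a sketch of what such a batch file can look like, assuming the default Docker Desktop install path (adjust it if yours differs):

@echo off
rem Hypothetical kill-and-restart script; the install path below is an assumption.
taskkill /F /IM "Docker Desktop.exe" >nul 2>&1
taskkill /F /IM "com.docker.backend.exe" >nul 2>&1
timeout /T 10 /NOBREAK >nul
start "" "C:\Program Files\Docker\Docker\Docker Desktop.exe"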
My case wouldn't be quite like yours. All of my stuff for my server runs through Docker, so nothing would be affected by just having it restart in the middle of the night; everything would start back up on its own... or should. It's something I'd have to try out and see how it works. Ultimately, it'd be nice if this were just fixed. It was nice seeing my overall memory usage down to around 50%, but it has crept back up. I was at around 1.1 GB for the Docker backend yesterday; back up to about 5.5 GB right now.
Yeah, this is kind of crazy... 1 day after a Docker restart on a machine with 64 GB. As you can see, Docker reports the containers are only using 2.46 GB of RAM. SBOM indexing is disabled.
After shutting down Docker Desktop:
After wsl --shutdown: (vmmem is gone, obviously.)
I just added the following to .wslconfig, based on some other suggestions on the Internet:
[wsl2]
memory=8GB
Starting WSL, Docker Desktop, and my 13 containers back up now looks like this. But it has always been quite low at the start; the issue shows up after about 24-48 hours.
I'll report back in 1 or 2 days on the usage of Docker Desktop Backend (Docker Desktop 4.27.1).
Seems like it's better on 4.27.2 (137060); so far the memory graph of com.docker.backend.exe has a shark-fin shape, which is much better than before.
20 hours later, Docker Desktop Backend still seems to be creeping up, but at least vmmem is kept below 8 GB with the .wslconfig change.
Going to try 4.27.2 as well now, @raycheung, and I'll report back in about a day.
It's been running for 1.5 days and I can happily confirm that 4.27.2 resolves the RAM consumption issue!
@Nereg maybe you can confirm this as well, then the issue can be closed?
Running 4.27.2 now as well and will keep an eye on things to see if usage jumps over the next couple of days. It will be really nice if this actually fixes it, though I didn't see anything specifically about this in the changelog.
Here's 4 days of continuous running. Before, this would have been 6-8 GB of usage; now it's 500 MB. Not perfect, but infinitely better than it was.
And yeah, there's no specific mention of it, but this version 100% fixes the major leak.
Yeah, for sure. Checking mine, it's at about 160 MB right now after a day; it definitely would have been much higher by now. So glad to see this, lol. This issue can probably be closed now unless someone still has the high memory usage.
Version 4.27.2 has indeed fixed this issue for me. It has been running for 3 days now, and the backend memory is only about 200 MB.
Sorry for the long wait, I can now confirm too that the issue is fixed! Version 4.28.0, one container, 100 MB of RAM usage. Before, it was much higher, even with one container.
2024, and still having this issue 🤔. It just slowly grows in memory usage until it consumes all available system memory, regardless of WSL configuration.
Quitting Docker Desktop fixes it, but then I can't start it back up unless I restart my device. So I end up having to close out of all my workflows and reboot, sometimes several times a day, because of this.
Douglas, have you read the replies to this issue? It's not a WSL configuration issue; it's fixed in a certain version of Docker Desktop. You didn't mention what version you're running, or whether it's newer than the version that's said to fix it.
Yeah you're right, I was a couple versions behind. Updated and will check back if it continues.
You don't have to check back, it's been pretty clearly documented in the other comments, please give them a read.
Locking this issue since OP reports that it's been resolved. If anyone on this thread is still having similar symptoms after upgrading to 4.28, please open a new issue with diagnostics, as it's likely a different underlying problem. Thanks!
Description
So I noticed the com.docker.backend.exe process consuming a lot of RAM (probably limited only by my 16 gigabytes of RAM). Here is a screenshot of it in Task Manager. I have allocated 4 GB of RAM to the WSL VM with this .wslconfig. The sysctl.vm.swappiness parameter was pulled from an article to try and fix the issue. Before the issue started, only the vmmem.exe process was consuming RAM within the limits set in .wslconfig. Now com.docker.backend.exe consumes more than the VM! I already reinstalled Docker and the problem is still there. And the RAM usage of the containers isn't high, as seen in this screenshot.
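The screenshot of that .wslconfig isn't reproduced here, but based on the description it was roughly along these lines (a sketch only: the 4 GB limit comes from the text above, and the swappiness tweak is assumed to have been applied through the kernelCommandLine option, with a guessed value):

# %UserProfile%\.wslconfig (sketch; the exact values in the original may differ)
[wsl2]
memory=4GB
kernelCommandLine = sysctl.vm.swappiness=0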
Reproduce
Expected behavior
The com.docker.backend.exe process shouldn't consume this much RAM.
docker version
docker info
Diagnostics ID
AB1B2FD8-3925-4A38-9D87-3ACA7E9A1A0B/20230707141611
Additional Info
Windows: 22H2 build 19045.3155
WSL version:
List of WSL distros:
Output of the com.docker.diagnose.exe check command:
I already tried restarting WSL as suggested, but it doesn't fix the corruption issue (though I can't see any data corruption in my containers).