Closed h0wXD closed 2 years ago
Thanks for the report. I've never seen that before. Is it repeatable or was it a one-off?
It was a one-off, and I have been unable to repro
OK, thanks. I'm not sure how much we're going to be able to do about it in that case. Please do let us know if it happens again, and if it does, please try to capture diagnostics.
@stephen-turner It happened for me a few times as well; I only had a small Postgres DB running. It started happening after the last update, I believe.
How can I help and how can I provide the diagnostics info? 😉
Thank you. From the whale menu, Troubleshoot-Support.
Hello, I have the same behaviour: the same Windows version, the same Docker version, and I am also using the WSL2 integration.
@juanvoxelcare Do you have a diagnostics id you can send us?
@stephen-turner It happens for me when I run postgres or when I run localstack, so it doesn't look like an image-specific issue.
Here is my today's diagnosis id: F80A00F3-84C6-46DE-AACC-ACDCDBB64241/20210223172959
Same issue. I just had a complete desktop crash. Docker Desktop was using 92GB of ram and took down my system.
Whoa, the same just happened here. Some random program crashed in the middle of doing some work and one monitor went black. So I opened the process list to see if there was anything suspicious going on and saw Docker Desktop using over 35 GB of RAM. I didn't manage to kill it and couldn't run diagnostics. Everything froze soon after and I had to reboot.
Some additional info: Docker was not running anything crazy - Apache, PHP and MySQL.
Edit: RE @jord1e's comment: Nope, no kubernetes
Hello, yesterday my computer froze with 100% CPU usage and 99% memory usage because of Vmmem, this issue seems to be linked.
Currently I am using docker-desktop again and now my memory usage is again very high, while I have nothing running besides the Kubernetes integration (my computer hasn't even been on for one hour). I did test a deployment today; more details below.
This was yesterday (after closing my IDE (WebStorm) and Firefox!). Please note that the CPU usage was also at 100%, all containers had been stopped (except for the standard Kubernetes ones) and I needed to force-close the Docker process. And this is today, after one hour:
The issue seems to be related to #8521 because I am testing a Kubernetes deployment which starts node:14-alpine containers.
Diagnostics aren't working for some reason.
I am running docker-desktop version 3.2.2 (build no. 61853), which consists of Docker engine v20.10.5 with Kubernetes enabled (v1.19.7), and I am using the WSL2 backend. Windows version 20H2 (17-7-2020), build 19042.867.
Maybe @MichalLytek, @ragepaw-git and/or @koopa can also confirm that they were running Alpine containers (directly or indirectly via Kubernetes)?
These are my containers (deployed by Kubernetes by default + the Kubernetes dashboard), nothing out of the ordinary so the memory should be reclaimed (right?):
Please note that at startup my memory usage is 4000 MB, this includes Debian WSL (which consumes 400MB).
When I stop docker-desktop, some of the memory stays. And when I run wsl --shutdown, everything is freed.
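For anyone trying to put numbers on this, here is a quick PowerShell sketch (my addition, assuming the WSL2 backend, where the VM shows up as the vmmem process) to snapshot how much RAM Vmmem is holding before shutting WSL down:

```powershell
# Show how much RAM the WSL2 VM (vmmem) is currently holding
Get-Process vmmem -ErrorAction SilentlyContinue |
    Select-Object Name, @{ n = 'WorkingSetGB'; e = { [math]::Round($_.WorkingSet64 / 1GB, 2) } }

# Tear the WSL2 VM down entirely; this is what releases the memory
wsl --shutdown
```

Running the first line again after the shutdown should show nothing, since the vmmem process is gone.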
No Kubernetes for me.
I have one docker container right now, and it is built on Alpine.
The same happened to me, and I had no containers running! Docker Desktop.exe used 48 GB of RAM before killing my workstation. From what I experienced, having the Docker Desktop window open on a stopped minio container was enough for it to balloon in memory usage. Maybe a problem with Electron?
Had the same problem; couldn't produce a diagnostic afterwards.
Same here, docker-desktop.exe, v3.2.2 (61853) consumes all memory (physical and virtual) until all processes running on the system crash with out-of-memory errors. This is bad.
Just had this happen to me for the second time. v3.2.2 (61853). Luckily I noticed apps getting unnecessarily sluggish, so managed to catch the runaway memory before everything became unresponsive, but I had to exit Docker Desktop from the Whale icon in order to shut it down.
Specifically with today's event, I had a container running overnight and it looked like my machine had tried to wake up from sleep. If memory serves, the last time this happened it was also after Windows woke up, so there's perhaps some interaction with Windows' sleep routine.
I doubt it's related to waking from sleep because my system never sleeps and it's happened multiple times to me.
Would like to chime in: Docker Desktop just used 7 GB of RAM. I did switch Windows users, and I 'think' I accidentally tried to run the Docker exe again there (but it didn't appear); maybe that is a hint on what causes it?
Edit: by 'switch user' I mean I kept one Windows user signed in while switching to another Windows user; later I logged that user out and switched back to the user that had the Docker Desktop process with 7 GB+ RAM usage.
I ran a lot of docker build commands and my system suddenly stopped working: all of my 64 GB of RAM was used up. I restarted Docker and memory usage went down to around 20 GB.
It seems like Docker consumes 500 MB - 1 GB every time I build an image and does not free up any memory afterwards.
Edit: Using docker desktop 3.2.2 with engine 20.10.5
I was fine on 3.2.2 but just upgraded to 3.3.1 on two completely different systems and am now seeing this on both. They will start fine and be fine for a while. Then you go back and check and, at some point, DockerDesktop.exe (NOT vmmem) is using 20-30 GB of RAM. This is taking down entire systems.
Can confirm happened to me also - Docker Engine version v20.10.5, Docker Desktop v3.2.2, WSL2
I was watching YouTube in Chrome and using Discord, with no containers running. Suddenly the video froze, and Docker Desktop kept increasing by 1 GB every 5-10 seconds until we got to 95% memory usage (16 GB system RAM). I had to terminate it via Task Manager.
Windows 10 Pro 21H1
I have consistently seen this same issue after an undetermined amount of time. I have wslconfig memory set and no excessive memory usage with running containers. I just usually notice some sluggishness then realize I need to kill Docker Desktop.exe as it is hogging 90+% memory.
Diagnostics ID: 906778E3-91E4-4C9B-93D3-6D6DB11F75A3/20210510022515. At this point, I had already run wsl -t on every running instance to try and nail down the culprit. The last containers running were:
- Postgres: bitnami/postgresql:11-debian-10
- PgAdmin: dpage/pgadmin4
- Valheim server: lloesche/valheim-server
All containers were running under 500 MB of memory with no significant processing going on. The majority of my development with Docker comes from the VS 2019 integration. Otherwise, I typically use Compose to keep most of my containers running.
I'm also hitting this issue. I'm using Docker Desktop 3.3.3, engine 20.10.6, with WSL2, and the Vmmem memory is not freed up after a docker build finishes.
I experienced the same issue with Docker for Windows + WSL2.
'Fixed' it by limiting the amount of RAM used. Create a file called .wslconfig in C:\Users\<your username> with:
[wsl2]
memory=4GB # Limits VM memory in WSL 2 to 4 GB
processors=2 # Makes the WSL 2 VM use two virtual processors
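One note on this workaround (my addition, based on the documented .wslconfig behaviour, not something verified in this thread): the limits only take effect after the WSL2 VM restarts, so something like this is needed after editing the file:

```powershell
# Edit/create the config in the user profile, which is where WSL looks for it
notepad "$env:USERPROFILE\.wslconfig"

# Restart the WSL2 VM so the new limits apply; Docker Desktop's distros restart on next use
wsl --shutdown
```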
I think that's another problem. The leak is with the actual Docker Desktop app, not WSL, as I also limit the RAM used but still get the leak from time to time.
Thank you for this, at least my computer won't freeze anymore.
This doesn't resolve the leak though. When you shut down Docker Desktop, more than 2 GB of Vmmem memory is not released (and Vmmem keeps increasing the longer Docker Desktop is running):
@stephen-turner it has been 5 months and this issue now has 19 participants. Can this be looked into again?
We can look at it if and only if someone has a reliable repro case. 19 participants in five months means that it's obviously very rare (we have several million users daily), and none of us have seen it internally.
To add extra info - the machine I had the issue on is a Ryzen 5 3600, 16 GB 3200 MHz RAM, lots of hard disks... GeForce RTX 2060.
The machine is used for dev/general use/gaming, and is hibernated/put into standby rather than shut down.
If I do get this issue again, is there anything you need me to do to gather more info?
Steps to reproduce is what we're lacking.
In my case, steps to reproduce are:
- use Docker Desktop for Windows (number of containers irrelevant)
- wait until the PC runs out of memory (in my case, 3 weeks to a month)
19 participants in five months means that it's obviously very rare
I wonder how many participants are having this problem and don't notice it. I've seen it on TWO computers in my one location -- the only two that use Docker Desktop. They are different systems with different configurations. The only real commonality is updated Windows 10 and Docker Desktop. Wish I could tell you what causes it, but you only notice it when you happen to notice it.
In my case, I know that because of the amount of memory I have, it takes weeks to exhibit the issue. Rebooting sets the clock back to zero, and I have been rebooting a lot lately (for unrelated reasons).
I only just noticed I was possibly experiencing this, so I can only confirm seeing it once, but rebuilding an application brought my Windows 10 machine with its 16 GB to a crawl after Vmmem's RAM usage jumped from its usual ~5 GB to over 11 GB, with Docker Desktop 3.5.1.
I just experienced the same leak once again, memory usage reaching 99% and causing me to struggle in calls, until I shut down Docker. After restarting, I'm back to a reasonable 62%.
This happened at the very next rebuild of the same application, so I would expect this to be easily reproducible. By the way, I have no clue why Docker rebuilt all the services from scratch. I am still on 3.5.1. This is not frequent but it's not the first time it happens.
This happened to me when I was AFK for about two hours while running a few containers with docker compose and had the CLI open from Docker Desktop for Windows. My PC has 32 GB of RAM, FYI.
I was running multiple Docker containers, but the RAM doesn't seem to be freed for any of them. After stopping all containers, with WSL2 idling:
Diagnostics ID: C2BE2537-BB64-4613-AC94-623DF368DF14/20210915172104
This has happened before with up to around 40 GB of memory on my 64 GB system.
All containers stopped while Vmmem reserves most of the RAM:
I first hit this problem on Docker Desktop version 3.3. Now it's 4.0.1, and the problem still happens to me.
@stephen-turner I think the situation where Docker causes high Vmmem memory usage is running the "docker build" command. Also, building larger packages causes a bigger leak (e.g. building packages with node_modules). (I made a video trying to reproduce this. YouTube: https://www.youtube.com/watch?v=hNUUTzvEkCk ) P.S. In this video, the command "make build-and-tag" is the same as
docker build -t ${NAME}:${VERSION} .
docker tag ${NAME}:${VERSION} ${NAME}:latest
The Dockerfile for running the build command is just a normal multi-stage build script for Golang.
FROM golang:1.16.5-alpine as build
WORKDIR /server
COPY ./ /server
RUN apk add --no-cache git
RUN go get github.com/lib/pq
RUN go get -d -v ./...
RUN CGO_ENABLED=0 GOOS=linux go build -a -ldflags '-extldflags "-static"' -o /server/app /server/cmd/server/main.go
FROM alpine:latest
WORKDIR /server
COPY --from=build /server /server
ENTRYPOINT ["/server/app"]
This not only happens when building Go code but also happens when building Node or others.
For now, if I encounter this problem, I type the following command in PowerShell (admin) to restart the WSL service.
Restart-Service LxssManager
After that, WSL will restart, the Docker engine will be shut down, and the memory will be freed.
Hope I provided enough information for contributors and a useful temporary workaround for Windows users.
OS: Windows 10 21H1
Docker Backend: WSL 2
Docker Desktop: Version 4.0.1
Docker Engine: v20.10.8
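The Restart-Service workaround above can be wrapped into a single PowerShell (admin) sketch. Stopping Docker Desktop first is my own addition (so it doesn't immediately respawn the VM), and on Windows 11 the service is, as far as I know, named WslService rather than LxssManager:

```powershell
# Stop the Docker Desktop UI so it doesn't immediately restart the engine
Stop-Process -Name "Docker Desktop" -ErrorAction SilentlyContinue

# Restart the WSL service; this tears down the VM and frees the Vmmem memory
Restart-Service LxssManager   # on Windows 11, try: Restart-Service WslService
```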
@nucktwillieren No, this is not a Docker Desktop for Windows problem. This is https://github.com/microsoft/WSL/issues/4166, a WSL2 problem where heap-allocated memory is never returned to Windows.
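As a stopgap related to that WSL issue, a commonly suggested mitigation (a sketch, not something confirmed by the Docker team) is to drop the Linux page cache from inside the distro, since cached pages are part of what keeps Vmmem inflated:

```shell
# Run inside the WSL2 distro; needs root. Forces the kernel to release page cache.
sudo sh -c "sync; echo 1 > /proc/sys/vm/drop_caches"
```

This only trims the cache portion; it won't help if the growth is in Docker Desktop.exe itself on the Windows side.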
@tomocrafter Thank you for your reply and your time, and sorry that I misunderstood the purpose and needs of this issue. I saw some replies were about the Vmmem problem, so I thought I could provide further information about Vmmem.
@nucktwillieren Oh, I replied to you because I got a notification of your reply, so I didn't notice that so many people are misunderstanding this issue. I also hope https://github.com/microsoft/WSL/issues/4166 will be fixed ASAP.
Issues go stale after 90 days of inactivity. Mark the issue as fresh with a /remove-lifecycle stale comment. Stale issues will be closed after an additional 30 days of inactivity. Prevent issues from auto-closing with a /lifecycle frozen comment. If this issue is safe to close now, please do so.
Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows. /lifecycle stale
/remove-lifecycle stale
I have consistently seen this same issue after an undetermined amount of time. I have wslconfig memory set and no excessive memory usage with running containers. I just usually notice some sluggishness then realize I need to kill Docker Desktop.exe as it is hogging 90+% memory.
Diagnostics ID: 906778E3-91E4-4C9B-93D3-6D6DB11F75A3/20210510022515 At this point, I had already wsl -t every running instance to try and nail down the culprit. The last containers running were:
- Postgres: bitnami/postgresql:11-debian-10
- PgAdmin: dpage/pgadmin4
- Valheim server: lloesche/valheim-server
All containers were running under 500mb of memory and no significant processing going on. Majority of my development with docker comes from the VS 2019 integration. Otherwise, I typically use compose to keep most of my containers running.
The problem is PGAdmin4!!!! It happens outside of docker as well. It only happens after you open the app.
I don't have PGAdmin4 installed anywhere.
In combination with WSL2, it is the free virtual memory that goes to zero, even though there are 5 GB of physical memory free. A hard reboot is needed to resolve it.
Closed issues are locked after 30 days of inactivity. This helps our team focus on active issues.
If you have found a problem that seems similar to this, please open a new issue.
Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows. /lifecycle locked
Actual behavior
Docker Desktop 3.1.0, which I just updated to yesterday, crashed my entire workstation, eating up all available memory and causing applications to crash. I have 'Send usage statistics' enabled, but had to manually terminate the process via Task Manager.
Expected behavior
No memory leak
Information
Steps to reproduce the behavior
I don't know how to reproduce it, but I only did a few things:
- right-clicked the tray, opened settings
- right-clicked the tray, opened the dashboard, scrolled down to a docker-compose of redis-master/redis-slave, saw the console, and closed the window again