docker / for-win

Bug reports for Docker Desktop for Windows
https://www.docker.com/products/docker#/windows

No space left on device #1042

Closed gaetancollaud closed 3 years ago

gaetancollaud commented 7 years ago

Expected behavior

Be able to use Docker for Windows for more than one week after installation.

Actual behavior

I cannot start or build anything, since I always get the "No space left on device" message. It seems like my MobyLinux disk is full (60/60 GB used).

Information

  • Diagnostic ID 0DCAC250-7F6C-4EB3-AA58-BD33AD062218/2017-08-25_17-18-28
  • I have the same problem as #600, but that ticket was closed without any solution, so I am opening a new one.
  • I can see the MobyLinux virtual machine in the Hyper-V Manager, but there is no way to connect to it. It's like a black box. Any insight on how to gather more information about it would be helpful.

I have already run docker system prune --all and tried the commands in #600. Now I have no images and no containers left, but I still cannot do anything.

Steps to reproduce the behavior

Use Docker on Windows and build a lot of images. It took me less than a week after installing Docker for Windows to hit this. My images can be heavy, between 500 MB and 2 GB, and I built a lot of them in the last week (maybe 50 to 100). That could add up to the 60 GB of the MobyLinux VM.

mikeparker commented 4 years ago

@cjancsar great detail, thanks.

It looks like your up/down loop is creating 7GB of volume data every time instead of sharing the same volume across loops. This is likely because you're using an anonymous volume instead of naming it, so it's recreated every time and never cleaned up.

I agree it seems completely pointless for us to keep the anonymous volume around if it's impossible to access it again. I have raised this with the docker-compose team, so I will keep you updated on their response.

Options:

  1. Name the volume and reuse the same volume every time you do up/down
  2. Remove the volume manually (docker-compose rm .. -v, see https://docs.docker.com/compose/reference/rm/)

Even if we raised the inode count, you'd still hit the disk space limit before long, so that's not really a fix. If this fixes your issue, we need to think about how to provide tools that prevent other users from hitting this, or at least make it clearer what's going on and what the problem is.

I suppose, in a good way, it's nice to know this isn't a Docker bug, but the UI definitely needs some work, or maybe there is something in docker-compose we can change.
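
To make option 1 concrete with the plain docker CLI (a minimal sketch; the volume name appdata, the image name, and the mount path are made up for the example):

# Create a named volume once; reusing the name on later runs
# attaches the same volume instead of creating a fresh anonymous one.
docker volume create appdata

# Mount it by name; data written under /data now survives container
# removal without piling up new volumes.
docker run -d --name app -v appdata:/data some/image

# Option 2 equivalent: when removing a container, pass -v so its
# anonymous volumes are deleted along with it.
docker rm -v app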

mikeparker commented 4 years ago

@cjancsar I spoke to the docker-compose team and they said that: a) anonymous volumes are reused if you run docker-compose up again without running docker-compose down; b) you can use docker-compose down -v to remove the volumes.

So option 3 is to stop using down and simply re-run up. Option 4 is to add -v to your down command.
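
A minimal sketch of those two workflows:

# Option 3: skip down entirely; re-running up reuses the existing
# containers and their anonymous volumes.
docker-compose up -d

# Option 4: when you do tear the stack down, remove its volumes too,
# so they don't accumulate.
docker-compose down -v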

cjancsar commented 4 years ago

@mikeparker yeah, I would recommend throwing a different error when the inode limit is hit versus when storage space runs out, if that is possible; it would at least separate future issues of this kind (actual disk storage space vs. the inode limit being hit). If I knew enough about Docker internals I would try for a PR, but alas, I do not. I do, however, see some tests on the Docker engine around the concept of the No space left on device error.

I think the main reason we do the up/down cycle is so we can rebuild the dependencies inside the container (when adding a new dependency or when switching branches). Since our node_modules are not shared between host and container (Windows to Linux), the only way to 'refresh' the dependencies (hit the install stage of the Dockerfile) is to bring the container down and rebuild it. If we just run up again, the install stage is not re-run. We could manually connect to each container and re-install dependencies, but we have generally tried to manage the system with the docker-compose CLI alone. I think this is just something wrong in our workflow.
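
For what it's worth, one way to re-run the Dockerfile's install stage without a full down/up cycle might be to rebuild explicitly and recreate only the affected service (a sketch; the service name app is a placeholder):

# Rebuild the image, re-running the Dockerfile's install stage.
docker-compose build app

# Recreate just that service from the new image; named volumes and
# the rest of the stack are left untouched.
docker-compose up -d --no-deps app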

I guess for now we will continue to investigate why our inode counts are so high and see if we can reduce them, as this issue apparently also bleeds into our published images (images we pull also have extremely high inode counts).

@mikeparker thank you for your help; I guess we will continue to manually clean our volumes until we can find the root cause of the high inode count.

@minusdavid

> Btw as an aside @cjancsar you can re-size the disk on your Docker host. As I noted above, you can use the Advanced settings in the GUI to increase the disk size. So that "Maximum Disk Size" is actually configurable. You could throw more of your 2TB disk at it.

Unfortunately, changing the disk size does not seem to affect the inode count; we had already tried this. It seems to be a hard-coded limit.

mikeparker commented 4 years ago

@cjancsar the reason your inode count is so high is likely twofold:

a) Each time you docker-compose up you end up with 7GB of volume data, probably with thousands of files.
b) You are recreating this volume from scratch every time and not deleting the old one or reusing it.

Ultimately, solving (b) is more important than (a); I personally wouldn't spend time on (a), because (b) will solve your immediate problem and the solutions are quick and simple. There are two basic routes:

b1) Reuse the volume from the previous docker-compose up, by naming your volume in your compose file.
b2) If you need 7GB of fresh data every time, delete the old one: simply add -v to the docker-compose down command.

Both of these solutions seem fairly straightforward, so I'd be interested to hear if they don't solve the problem.
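
A sketch of (b1) in compose-file terms (service and volume names are invented; the top-level volumes: entry is what makes the volume named and therefore reusable):

version: "3.7"
services:
  db:
    image: postgres
    volumes:
      - dbdata:/var/lib/postgresql/data   # named volume, reused across up/down
volumes:
  dbdata:

With this in place, docker-compose down leaves dbdata intact and the next up reattaches it; only down -v would delete it.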

cjancsar commented 4 years ago

Thanks @mikeparker, we will trial those suggestions and monitor how they affect performance! I will keep detailed notes in case things don't work out.

mikeparker commented 4 years ago

For reference / clarity, if you want to dig around in the VM to find out where the inodes and space is used:

  1. Open a terminal inside the linux vm: docker run -it --privileged --pid=host justincormack/nsenter1
  2. Use df -i <path> to see the inode count or df -i to see overall (df = disk filesystem, i = inodes)
  3. Use du -hs <path> to see the disk usage (du = disk usage, -hs = human readable, summary)
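
For example, to find which volumes are consuming the inodes and space (the paths below are the standard Docker data locations inside the VM; exact output will vary):

# From the host: enter the VM's namespaces.
docker run -it --privileged --pid=host justincormack/nsenter1

# Inside the VM: inode usage per filesystem.
df -i

# Disk usage per volume in KB, biggest last.
du -s /var/lib/docker/volumes/* | sort -n
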
andreasdim commented 4 years ago

Hello,

What helped me was the following: edit the PowerShell script at C:\Program Files\Docker\Docker\resources\MobyLinux.ps1. On line 86, change $global:VhdSize = 60*1024*1024*1024 # 60GB to whatever you want. In my case: $global:VhdSize = 120*1024*1024*1024 # 120GB.
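
Spelled out (line number as given above; choose whatever size you need):

# MobyLinux.ps1, line 86 -- before:
$global:VhdSize = 60*1024*1024*1024    # 60GB

# after, e.g. to grow the default VHD to 120GB:
$global:VhdSize = 120*1024*1024*1024   # 120GB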

Then reset Docker to factory defaults: [screenshot]

For me, that worked: [screenshot]

mikeparker commented 4 years ago

@andreasdim You can change the VM hard drive size in the settings UI (see https://github.com/docker/for-win/issues/1042#issuecomment-364548229); you don't need to edit PowerShell (unless you are resetting to factory defaults a lot and want to change the factory default, in which case there is a wider problem!).

andreasdim commented 4 years ago

@mikeparker Thank you, but I cannot see that option. I'm running Docker on this version: [screenshot]

mikeparker commented 4 years ago

@cjancsar any luck?

tophers42 commented 4 years ago

@mikeparker I'm also having the problem some other users have described. I've allocated 300 GB to the disk image, but when I look at the Docker Desktop VM, only ~60 GB is allocated (even after a restart). Our problem is that we actually need to unpack more than 60 GB of data inside our build (true, this could be done with a mount at run time, but this is the current setup I have to work with). So it's not just an issue of cleaning up old volumes/containers.

Allocated 300 GB: [screenshot]

VM has a max of 60 GB: [screenshot]

Workaround without factory reset

For others: I've found a workaround that doesn't require a factory reset, by manually updating the image size in Hyper-V Manager. I'm not sure whether this change will persist through updates, though.

[screenshots]

Docker version: [screenshot]

mat007 commented 4 years ago

@tophers42 there is currently a bug in Docker Desktop which makes it not resize the VM disk. To work around this you need to resize it manually, e.g. from an admin PowerShell:

Resize-VHD -Path 'C:\ProgramData\DockerDesktop\vm-data\DockerDesktop.vhdx' -SizeBytes 300gb

Sorry for the inconvenience.
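
To confirm the resize took effect, something like this should work (Get-VHD ships with the Hyper-V PowerShell module; the path assumes the default Docker Desktop location, and Docker Desktop should be stopped first so the VHDX is not in use):

# Resize, then verify the new maximum size against the file size on disk.
Resize-VHD -Path 'C:\ProgramData\DockerDesktop\vm-data\DockerDesktop.vhdx' -SizeBytes 300gb
Get-VHD -Path 'C:\ProgramData\DockerDesktop\vm-data\DockerDesktop.vhdx' | Select-Object Path, Size, FileSize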

tophers42 commented 4 years ago

Thanks @mat007, I also noticed it's marked as a known issue in the latest release notes. Is this the issue to track? https://github.com/docker/for-win/issues/4725

Xortrox commented 4 years ago

I still seem to experience this issue myself (on v19.03.5, however). But after manually setting my disk to 200 GB in Hyper-V, then setting it to 200 GB in Docker for Windows AND changing my RAM from 8 GB to 4 GB, Docker just accepted the new size. I was even able to change the RAM back from 4 GB to 8 GB, and the disk still remained 200 GB.

Seems like maybe the disk slider itself is just bugging out?

AnushaErrabelli commented 4 years ago

Hello team, did you find a solution for this? I'm also experiencing the same issue in CI when running Docker, so it's failing every build! It says no space left on device.

tnodet commented 4 years ago

Don't set both the graph parameter in the JSON configuration file and the Disk image location in Settings

Windows 10, 1809, 17763.1158 - Docker 2.3.0.2 (45183) - Linux containers

I had the "No space left on device" error for a totally different reason and wanted to share the solution to my specific problem.

I wanted to change the location of my docker images, and ended up setting both:

  • the graph parameter in the JSON configuration file, and
  • the Disk image location in Settings.

Then, when trying to pull images through docker-compose, I kept having errors: ERROR: for image_name write /D/path/to/docker/images/tmp/GetImageBlob<uid>: no space left on device.

After removing the "graph" parameter (keeping only the Disk image location), I could pull images normally.
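
For reference, the entry removed looked something like this in the daemon configuration JSON (the path is an example; graph is the legacy spelling of what is now data-root):

{
  "graph": "D:\\path\\to\\docker\\images"
}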

I was confused by the fact that on Windows, when using Linux containers, images do not actually live on the Windows file-system but in the file-system of the *.vhdx hard-disk image of the Moby virtual machine. But apparently setting both parameters provokes weird behavior in Docker.

byt3pool commented 4 years ago

Regardless of the fact that this might be an issue in Docker itself:

A docker system prune -a followed by docker volume prune did the trick for me, at least for now (as mentioned by @cjancsar on 23 Jan).

Docker reclaimed nearly 55 GB.

docker-robott commented 4 years ago

Issues go stale after 90 days of inactivity. Mark the issue as fresh with /remove-lifecycle stale comment. Stale issues will be closed after an additional 30 days of inactivity.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows. /lifecycle stale

kwabena53 commented 3 years ago


What helped me, on Ubuntu 18.04: docker system prune --all --force

kwabena53 commented 3 years ago

Run docker system df to see what is taking up space in your Docker installation.

Then run:

docker system prune --all --force to remove all stopped containers and all unused images. A plain docker system prune does not remove all unused images, only dangling ones.
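
Putting it together (all of these are destructive; check the df output before pruning, and prune volumes separately, since docker system prune leaves them alone unless --volumes is passed):

# Show how much space images, containers, volumes and build caches use.
docker system df

# Remove stopped containers, unused networks, and ALL unused images.
docker system prune --all --force

# Volumes are not touched by the above; prune them separately.
docker volume prune --force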

Bill0412 commented 3 years ago

> @NdubisiOnuora This has been added to the edge channel. We are also working on some improvements to automatically reclaim space, but I don't have an ETA for those. [screenshot]

The error for me is

d532e87af17e: Loading layer  18.73GB/19.53GB
Error processing tar file(exit status 1): write /swapfile: no space left on device

Your solution works for me, thanks!

I increased the disk image size limit from 68 GB to 144 GB, memory from 2 GB to 8 GB, and swap from 1 GB to 4 GB.


docker-robott commented 3 years ago

Closed issues are locked after 30 days of inactivity. This helps our team focus on active issues.

If you have found a problem that seems similar to this, please open a new issue.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows. /lifecycle locked