brikis98 / docker-osx-dev

A productive development environment with Docker on OS X
http://www.ybrikman.com/writing/2015/05/19/docker-osx-dev/
MIT License

Sync constantly running out of space #169

Open ain opened 8 years ago

ain commented 8 years ago

Steps to reproduce:

  1. Create a machine with the following resources:

    docker-machine create --driver virtualbox --virtualbox-disk-size 25000 --virtualbox-cpu-count 2 --virtualbox-memory 2048 testmachine
  2. Start the machine
  3. Create a docker-compose.yml (see the sketch after this list)
  4. Run Compose
  5. Run docker-osx-dev on a project with 10+ GB of data to sync
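
A minimal docker-compose.yml for step 3 might look like the following; the image, volume paths, and ports are placeholders for illustration, not taken from the original report:

    web:
      image: nginx
      volumes:
        - .:/usr/share/nginx/html
      ports:
        - "8080:80"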

What happens: sync runs for a minute and fails with:

2016-02-10 16:41:51 [INFO] rsync: write failed on "/Users/ain/projects/testmachine/shared/docker/assets/global_images/image/file/582/xofoiapdodpoda132.png": No space left on device (28)
2016-02-10 16:41:51 [INFO] rsync error: error in file IO (code 11) at receiver.c(393) [receiver=3.1.1]
2016-02-10 16:41:51 [INFO] rsync: [sender] write error: Broken pipe (32)
2016-02-10 16:41:56 [INFO] Initial sync done

What should happen: sync should complete successfully.

brikis98 commented 8 years ago

If you run docker ps -a and docker images, you'll get a list of all the containers and images on your system, all of which are also stored in VirtualBox and take up a lot of room. Between those and the sync data, it's certainly possible you'll be out of space. The only workaround I can think of for now is to either clean up the files or increase the VirtualBox disk size.
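
A sketch of that cleanup using standard Docker CLI commands (review the output of the first two commands before deleting anything, since container and image removal is destructive):

    # see what is taking up space inside the VM
    docker ps -a
    docker images

    # remove exited containers and dangling images
    docker rm $(docker ps -a -q -f status=exited)
    docker rmi $(docker images -q -f dangling=true)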

ain commented 8 years ago

I think the problem is that the copy works in tmpfs (because of tar?) and therefore fails once the space there is consumed.

To back this speculation up, take a look:

tmpfs                     1.8G      1.8G         0 100% /
tmpfs                  1001.3M    956.0K   1000.3M   0% /dev/shm
/dev/sda1                23.0G      4.4G     17.4G  20% /mnt/sda1
cgroup                 1001.3M         0   1001.3M   0% /sys/fs/cgroup
/dev/sda1                23.0G      4.4G     17.4G  20% /mnt/sda1/var/lib/docker/aufs
none                     23.0G      4.4G     17.4G  20% /mnt/sda1/var/lib/docker/aufs/mnt/0dde036b38ed818d3e439c21290fe6153d2f0bcba5888f003a721448ca8cfdb8
shm                      64.0M         0     64.0M   0% /mnt/sda1/var/lib/docker/containers/56cd6d371e51fe2ead65646df783c80c90e2375d7ecf872a1cbea2301a597335/shm
none                     23.0G      4.4G     17.4G  20% /mnt/sda1/var/lib/docker/aufs/mnt/604e0b2425832218c3178441f90d3c926ca722c54e47e5e7f4b1a51a1afdbcc0
shm                      64.0M      4.0K     64.0M   0% /mnt/sda1/var/lib/docker/containers/2b4bd577c0c5f53c5e5e7975fc41ef4156eefd2f990e222be590cc493c3fc84a/shm
none                     23.0G      4.4G     17.4G  20% /mnt/sda1/var/lib/docker/aufs/mnt/6a89248377d8a1527d9a8410444882b39e2dd881a22602cfeeceb809748f4f3c
shm                      64.0M         0     64.0M   0% /mnt/sda1/var/lib/docker/containers/51c4fa4fdc8e637f08387d5f90ed87657cc571f8a552d47e53ad795e7204aa68/shm
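
(For reference, a listing like the one above can be pulled from the VM without logging in manually, assuming the machine from step 1 is named testmachine:)

    docker-machine ssh testmachine df -h
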
brikis98 commented 8 years ago

Hm, that could be. Not sure how to change that with tar either. Didn't you have the same issue with this same folder before tar was introduced, when we were using rsync for the initial sync?

ain commented 8 years ago

Can't tell. I kept these gigabytes of assets inside another container earlier and didn't have a problem.

We should track down the commit before the tar merge and test against that one to clarify.

ain commented 8 years ago

Reproduced. It's a tar problem: I was running without tar last week and everything was fine; I upgraded on Friday, and today, after rebooting the whole machine, I get this:

2016-02-29 09:45:43 [INFO] Initial sync using tar for /Users/ain/projects/…/frontend
tar: write error: No space left on device
exit status 1

and

$ dockersize
Filesystem                Size      Used Available Use% Mounted on
tmpfs                     1.8G      1.8G     72.0K 100% /
tmpfs                  1001.3M         0   1001.3M   0% /dev/shm
/dev/sda1                27.8G     19.8G      6.5G  75% /mnt/sda1
cgroup                 1001.3M         0   1001.3M   0% /sys/fs/cgroup
/dev/sda1                27.8G     19.8G      6.5G  75% /mnt/sda1/var/lib/docker/aufs

The thing is, tar works in tmpfs, which on my machine instance is sized from the 2048 MB of VM memory. That 72K there is all that remained. We can't have tar working in memory here.

ain commented 8 years ago

I was able to circumvent the problem by applying a better .dockerignore.
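
For anyone else hitting this: a better .dockerignore helps because docker-osx-dev uses its patterns to decide what to exclude from the sync. A hypothetical example; the paths are placeholders to adapt to your project:

    .git
    node_modules
    assets/global_images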

snipebin commented 8 years ago

I was having the same problem. Constantly getting:

2016-03-16 00:46:09 [INFO] Initial sync using tar for /Users/...
tar: write error: No space left on device
exit status 1

while running docker-osx-dev even after recreating the docker machine instance.

I was finally able to finish the initial sync and move on to docker-compose after increasing the memory available to the Docker machine instance from 3 GB to 4 GB.
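
For a docker-machine-managed VM, that bump can be scripted along these lines (assuming the machine from step 1 is named testmachine; the VM must be stopped before VBoxManage will modify it):

    docker-machine stop testmachine
    VBoxManage modifyvm testmachine --memory 4096
    docker-machine start testmachine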

I suspect that tar does in fact work in tmpfs, and that tmpfs is limited by the memory allocated to the Docker machine.

ain commented 8 years ago

Yup, that is really the case here.

wheelercreek commented 8 years ago

I'm getting this same problem, but from what's written above I'm still not sure how to fix it. How do I "increase the memory available to the docker machine instance"? Or "apply a better .dockerignore"?

wheelercreek commented 8 years ago

I was able to find an answer that worked for me:
    boot2docker stop
    VBoxManage modifyvm boot2docker-vm --memory 3500
    boot2docker start

See http://stackoverflow.com/questions/24422123/change-boot2docker-memory-assignment