iamKurt opened this issue 8 years ago
I just created a new Docker machine VM using:
When I tried running a mongo DB container on this newly created VM, I got an error message about not having enough space. Attaching to the container, I can see this:
How do I go about cleaning this out? What is stored in these directories?
Unfortunate! I can't dupe it, and I also don't see that none filesystem. Is this with boot2docker 1.11.0-rc3? Does it happen every time you try a create, or only sporadically?
Can you provide the container run command so I can try it?
Thanks @nathanleclaire! Yes, this has happened before. Not sure that I've really tried to track it down previously.
The container run command is:
docker run -it f668f117c828 bash
Inside the container, I tried to start Mongo just to see what the error message was:
mongod --dbpath /var/lib/mongodb --smallfiles
Noticed this:
ERROR: Insufficient free space for journal files
Please make at least 422MB available in /var/lib/mongo/journal or use --smallfiles
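(That error means the filesystem backing the dbpath is out of room, which matches the nearly-full root mount shown below. A quick check from inside the container, assuming the dbpath from the command above:)
# Show how much free space backs mongod's dbpath
df -h /var/lib/mongodb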
I guess this makes me wonder -- if I create many machine VMs, would they share the same mounted volumes from previously created ones?
I thought I was making progress at cleaning this up by running commands to clean out old volumes, images, and containers (see also the prune sketch after these commands):
docker rm -v $(docker ps -q -f status=exited)
docker rmi $(docker images --filter "dangling=true" -q --no-trunc)
docker volume rm $(docker volume ls --filter dangling=true -q)
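(For reference: Docker 1.13 and later consolidate this cleanup into dedicated prune subcommands; a minimal sketch, though not available on the 1.10.3 daemon in this thread:)
# Remove all stopped containers
docker container prune
# Remove dangling images
docker image prune
# Remove volumes not referenced by any container
docker volume prune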
But running df -h in the container still shows pretty high usage:
[root@f457f6a7a989 /]# df -h
Filesystem Size Used Avail Use% Mounted on
none 4.8G 4.4G 118M 98% /
tmpfs 4.4G 0 4.4G 0% /dev
tmpfs 4.4G 0 4.4G 0% /sys/fs/cgroup
/dev/sda1 4.8G 4.4G 118M 98% /etc/hosts
shm 64M 0 64M 0% /dev/shm
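(The none root and /dev/sda1 report the same numbers because the container's aufs root is backed by the VM's /mnt/sda1 partition. To confirm from the host side, a sketch assuming the machine is named default:)
# Check the backing partition from outside the container
docker-machine ssh default df -h /mnt/sda1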
Here's the docker info output:
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 25
Server Version: 1.10.3
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 81
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: bridge null host
Kernel Version: 4.1.19-boot2docker
Operating System: Boot2Docker 1.10.3 (TCL 6.4.1); master : 625117e - Thu Mar 10 22:09:02 UTC 2016
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 8.763 GiB
....
Debug mode (server): true
File Descriptors: 13
Goroutines: 27
System Time: 2016-04-07T14:34:21.647991057Z
EventsListeners: 0
Init SHA1:
Init Path: /usr/local/bin/docker
Docker Root Dir: /mnt/sda1/var/lib/docker
Labels:
provider=virtualbox
Any thoughts on this @nathanleclaire? I have another VM running into this situation. docker info on this VM is:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 22
Server Version: 1.10.3
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 106
Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: host bridge null
Kernel Version: 4.1.19-boot2docker
Operating System: Boot2Docker 1.10.3 (TCL 6.4.1); master : 625117e - Thu Mar 10 22:09:02 UTC 2016
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.79 GiB
Debug mode (server): true
File Descriptors: 10
Goroutines: 22
System Time: 2016-04-12T15:54:52.827819084Z
EventsListeners: 0
Init SHA1:
Init Path: /usr/local/bin/docker
Docker Root Dir: /mnt/sda1/var/lib/docker
Labels:
provider=virtualbox
And the filesystem is full:
Filesystem Size Used Available Use% Mounted on
tmpfs 7.0G 129.4M 6.9G 2% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
/dev/sda1 6.8G 6.8G 0 100% /mnt/sda1
cgroup 3.9G 0 3.9G 0% /sys/fs/cgroup
Users 464.8G 126.8G 338.0G 27% /Users
/dev/sda1 6.8G 6.8G 0 100% /mnt/sda1/var/lib/docker/aufs
And manually removing images is now failing too:
docker rmi e51da98f1264
Failed to remove image (e51da98f1264): Error response from daemon: write /mnt/sda1/var/lib/docker/image/aufs/repositories.json.tmp: no space left on device
What's the result of sudo umount /Users && sudo du -a / | sort -n -r | head -n 10 on your VM? Just curious to see which directories are using the most space. (Be aware that this will unmount the /Users shared folder until you reboot the VM.)
Here are the results:
7250664 /
7118112 /mnt/sda1
7118112 /mnt
7118076 /mnt/sda1/var
7118072 /mnt/sda1/var/lib
7113580 /mnt/sda1/var/lib/docker
4454944 /mnt/sda1/var/lib/docker/aufs
4454068 /mnt/sda1/var/lib/docker/aufs/diff
2649716 /mnt/sda1/var/lib/docker/tmp
2649712 /mnt/sda1/var/lib/docker/tmp/docker-builder809663928
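(Two things stand out in that output: aufs/diff, which holds the image and container layers, and a 2.6 GB docker-builder directory under docker/tmp, which looks like leftover build context. A cleanup sketch, assuming boot2docker's /etc/init.d/docker init script and a machine named default; stop the daemon before touching its state directory:)
docker-machine ssh default
# Stop the daemon so nothing is writing under /var/lib/docker
sudo /etc/init.d/docker stop
# Leftover build scratch; safe to clear while the daemon is down
sudo rm -rf /mnt/sda1/var/lib/docker/tmp/*
sudo /etc/init.d/docker start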
@formerlyKurt Do you maybe have a lot of orphaned images? Does docker rmi $(docker images --filter=dangling=true -q) help?
Weirdly, this doesn't do anything for me:
docker rmi $(docker images --filter=dangling=true -q)
docker: "rmi" requires a minimum of 1 argument.
See 'docker rmi --help'.
Usage: docker rmi [OPTIONS] IMAGE [IMAGE...]
Remove one or more images
Everything seems to have a tag when I run docker images.
Yeah, it will kick back that message if docker images doesn't return any dangling images, so that doesn't seem to be the issue.
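(For what it's worth, a small guard avoids that error when the dangling list is empty; a minimal sketch:)
# Only call rmi when there is actually something to remove
dangling=$(docker images --filter=dangling=true -q)
[ -n "$dangling" ] && docker rmi $dangling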
Also seeing something similar on a Linux box. Any ideas about how to recover @nathanleclaire? It would suck to have to scorch Jenkins.
@formerlyKurt I'm not sure. What seems to be using all the space?
Any chance you can post a way to dupe, using, say, the amazonec2 driver?
Not sure I can duplicate it. I guess I've been "lucky" with this one.
There seems to be a lot of hashed stuff in the /var/lib/docker/aufs directory. Here are the contents...though I'm not sure this is helpful:
docker@default:/mnt/sda1/var/lib/docker$ sudo ls -la aufs
total 68
drwx------ 5 root root 4096 Apr 5 17:27 .
drwx-----x 9 root root 4096 Apr 5 17:27 ..
drwx------ 108 root root 20480 Apr 11 18:02 diff
drwx------ 2 root root 20480 Apr 11 18:02 layers
drwx------ 108 root root 20480 Apr 11 18:02 mnt
(Truncated...)
docker@default:/mnt/sda1/var/lib/docker$ sudo ls -la aufs/layers
total 432
drwx------ 2 root root 20480 Apr 11 18:02 .
drwx------ 5 root root 4096 Apr 5 17:27 ..
-rw-r--r-- 1 root root 195 Apr 8 16:50 01cf616ffa455b0dfdc2a4e0b9d188982f9881278b0c6cd6411dc6f5590c7778
-rw-r--r-- 1 root root 65 Apr 5 17:31 0201f48d9936e631f4449f4af4b87c29e8895c254aeeed811a6a8947b0859915
-rw-r--r-- 1 root root 585 Apr 5 17:41 030da5ade6a413591295403283ae726a5a969492f4901a06c9dcfb4f03142136
-rw-r--r-- 1 root root 130 Apr 6 19:24 0d7b721fafc5cf3aebc3672496c...
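(Within aufs, diff holds each layer's filesystem contents, layers holds the parent-chain metadata, and mnt holds the mount points for running containers. To see which layers are eating the space, a sketch using megabyte counts so busybox sort can order them numerically:)
# Ten largest layer directories, in MB
sudo du -sm /mnt/sda1/var/lib/docker/aufs/diff/* | sort -n | tail -n 10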
With regards to the other box, I think we need to be more diligent about cleanup -- it looks like we were just dropping images there and never cleaning them up.
For the dev box, I've created a much larger VM in hopes that I won't run into this again. The kicker, though, is that the VM becomes useless once I can't delete any images.
Just out of curiosity, what's the storage driver? e.g. aufs, devicemapper, btrfs
aufs
Any other ideas @nathanleclaire?
same problem here :-(
Each docker-compose up consumes approximately 2% of /dev/sda1
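(If the composed services declare volumes, each up can leave anonymous volumes behind; a sketch, assuming Compose 1.6+ where the down command exists:)
# Tear down the stack and also remove the volumes it created
docker-compose down -v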
This did the trick, thank you @iamKurt
docker volume rm $(docker volume ls --filter dangling=true -q)