moby / moby

The Moby Project - a collaborative project for the container ecosystem to assemble container-based systems
https://mobyproject.org/
Apache License 2.0

Docker does not free up disk space after container, volume and image removal #21925

Open stouf opened 8 years ago

stouf commented 8 years ago

Versions & co

Docker

Docker version

$ docker version
Client:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0a8c2e3
 Built:        Thu Sep 10 19:19:00 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0a8c2e3
 Built:        Thu Sep 10 19:19:00 UTC 2015
 OS/Arch:      linux/amd64

Docker info:

$ docker info
Containers: XXX
Images: XXX
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: XXX
 Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.19.0-26-generic
Operating System: Ubuntu 14.04.3 LTS
CPUs: 1
Total Memory: XXX GiB
Name: XXX
ID: XXXX:XXXX:XXXX:XXXX

Operating system

Linux 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Issue

Here is how I currently deploy my application (a rough shell sketch of these steps follows the list):

  1. Build a new image based on a new version of my application code
  2. Start a new container based on the image created in step 1
  3. Remove the previous container and its volume with the command docker rm -v xxxxx
  4. Remove all the unused images with docker rmi $(docker images -q)
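
For reference, the cleanup part of such a deployment could look roughly like this (a sketch; the image and container names are placeholders, not the ones actually used):

docker build -t myapp:new .                # 1. build the new image
docker run -d --name myapp_new myapp:new   # 2. start a container from it
docker rm -v myapp_old                     # 3. remove the previous container and its volumes
docker rmi $(docker images -q)             # 4. try to remove all images; those still in use fail and are kept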

However, little by little, I'm running out of disk space. I made sure I don't have any orphan volumes, unused containers and images, etc...

I found a post on a forum saying the following:

It's a kernel problem with devicemapper, which affects the RedHat family of OS (RedHat, Fedora, CentOS, and Amazon Linux). Deleted containers don't free up mapped disk space. This means that on the affected OSs you'll slowly run out of space as you start and restart containers.

The Docker project is aware of this, and the kernel is supposedly fixed in upstream (https://github.com/docker/docker/issues/3182).

My machine is a Linux host on AWS, so I wonder whether the kernel I'm using could be related to the issue referenced above. If not, does anyone have an idea about what could be the origin of this problem? I spent the whole day looking for a solution but could not find one so far :(

thaJeztah commented 8 years ago

Did you previously run using a different storage driver? If you did, it's possible that /var/lib/docker still contains files (images/containers) from the old storage driver.

Note that the devicemapper issue should not be related to your situation, because according to your docker info, you're using aufs, not devicemapper.

stouf commented 8 years ago

Did you previously run using a different storage driver?

Nope, it has always been AUFS.

Note that the devicemapper issue should not be related to your situation, because according to your docker info, you're using aufs, not devicemapper.

Yep, I realized after posting here that the issue is only related to devicemapper, sorry ^^

thaJeztah commented 8 years ago

Might be worth checking if it's actually /var/lib/docker that's growing in size / taking up your disk space, or a different directory. Note: to remove unused ("dangling") images, you can run docker rmi $(docker images -aq --filter dangling=true)

stouf commented 8 years ago

Might be worth checking if it's actually /var/lib/docker that's growing in size / taking up your disk space, or a different directory.

Yep, I already confirmed that :( To be more accurate, the folders growing in size are /var/lib/docker/aufs/diff and /var/lib/docker/aufs/mnt. The size of any other folder under /var/lib/docker is not really significant.
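
If it helps anyone else, a quick way to confirm which parts of /var/lib/docker are actually growing (requires root):

sudo du -sh /var/lib/docker/*                                    # size of each top-level directory
sudo du -sh /var/lib/docker/aufs/diff /var/lib/docker/aufs/mnt   # the two directories mentioned above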

Note: to remove unused ("dangling") images, you can run docker rmi $(docker images -aq --filter dangling=true)

Thanks. I'm already doing that. On each deployment, I:

  1. remove any exited containers with the -v option to also remove the associated volume
  2. remove all the unused images through that command.

Which is why I don't understand why my free disk space keeps decreasing over time :(

thaJeztah commented 8 years ago

Do the daemon logs show anything interesting (e.g. Docker failing to remove containers)? You've X-ed the number of containers and images in your output; is that number going down after your cleanup scripts have run? Also note that you're running an outdated version of docker; if you want to stay on docker 1.8.x, you should at least update to docker 1.8.3 (which contains a security fix)

stouf commented 8 years ago

Do the daemon logs show anything interesting (e.g. Docker failing to remove containers)?

No, everything seems to be normal. Plus, I keep losing disk space while containers are up and running, without even deploying new containers.

You've X-ed the number of containers and images in your output; is that number going down after your cleanup scripts have run?

Ah yeah, sorry for X-ing those numbers. They don't change at all, as I always deploy the same containers and clean up the old ones each time I deploy. So the number of containers and the number of images remain the same, as expected.

Also note that you're running an outdated version of docker; if you want to stay on docker 1.8.x, you should at least update to docker 1.8.3 (which contains a security fix)

Yep, I'd better update, indeed. I was planning to update to the latest version soon, but I'll have to do it within the next 48 hours because my server is now running out of disk space :( After the update, I'll keep monitoring the disk space every day and report my observations here. I really hope it's just a version problem.

stouf commented 8 years ago

Hi guys,

Update to Docker 1.10 done. I used another instance to deploy my infra on top of Docker v1.10, so I took that chance to investigate a little deeper into this disk space issue on the old server; the problem came from something within my infra, unrelated to Docker containers... Sorry for bothering :(

thaJeztah commented 8 years ago

@stouf good to hear you resolved your issue

stouf commented 8 years ago

Thanks a lot for the support :)

awolfe-silversky commented 8 years ago

This issue and https://github.com/docker/docker/issues/3182 are marked as closed. However, just today another user reported that the problem remains. Please investigate.

stouf commented 8 years ago

@awolfe-silversky Could you please describe the issue? As I said above, my problem wasn't related to containers or Device Mapper. It was a container in my infrastructure silently generating tons of logs that were never removed.
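
For anyone who lands here with the same symptom: with the default json-file logging driver, container logs live under /var/lib/docker/containers/ and grow without bound unless rotation is configured. A hedged sketch of how to spot an offender and enable rotation (the size/count values below are purely illustrative, and the daemon must be restarted for daemon.json changes to apply):

# find the largest container log files
sudo du -sh /var/lib/docker/containers/*/*-json.log | sort -h | tail

# /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}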

groyee commented 7 years ago

I have the same issue.

I stopped all Docker containers; however, when I run this command:

sudo lsof -nP | grep '(deleted)'

I get:

(screenshot: lsof output listing deleted files still held open)

Only when I do sudo service docker restart is the space actually freed.

Here is the best picture to describe it:

(screenshot illustrating the space being freed only after the Docker daemon restart)
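
A rough way to measure how much space is pinned by deleted-but-still-open files (which is what a daemon restart releases); the SIZE/OFF column is not a byte count for every entry, so treat the total as an estimate:

sudo lsof -nP +L1 | awk 'NR > 1 { sum += $7 } END { printf "%.1f MiB held by deleted open files\n", sum / 1048576 }'
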
stouf commented 7 years ago

@groyee I gave it a try on my side and had the same results; I only got 500 MB freed by restarting the Docker daemon, but I have fewer than 10 containers running on the server I was testing on. I think we should create a new dedicated issue, as this seems to be different from what this issue was originally about.

gsccheng commented 7 years ago

I have a similar problem where clearing out my volumes, images, and containers did not free up the disk space. I traced the culprit to this file, which is 96 GB: /Users/MyUserAccount/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2

However, it looks like this is a known issue for Macs: https://github.com/docker/for-mac/issues/371 https://github.com/docker/docker/issues/23437

Zokormazo commented 7 years ago

I'm suffering from a similar problem on Debian Jessie. I freed ~400 MB with a service restart, but still have 2.1 GB of old container garbage inside /var/lib/docker/aufs with just one container running.

mbana commented 7 years ago

Confirming this issue. Could you folks at least add a warning when Docker starts taking up too much space? I do something like this, and it becomes very noticeable fairly quickly what the issue is:

# list first-level directory sizes, sorted by human-readable size
function usagesort {
  local dir_to_list="$1"
  # run in a subshell so the caller's working directory is left untouched
  (cd "$dir_to_list" && du -h -d 1 | sort -h)
}
...
$ usagesort "$HOME/Library/Containers" | grep -i docker
43G ./com.docker.docker
276K    ./com.docker.helper

Is there an official workaround for this issue or, better yet, when are you planning to actually fix it?

thaJeztah commented 7 years ago

@mbana @gsccheng on OS X, that's unrelated to the issue reported here, and specific to Docker for Mac, see https://github.com/docker/for-mac/issues/371

caneraydinbey commented 7 years ago

What is the solution here?

root@vegan:/var/lib/docker# du -shc *|grep "G"|sort -n
29G     aufs
135G    containers
164G    total
root@vegan:/var/lib/docker# cd containers/
root@vegan:/var/lib/docker/containers# du -shc *|grep "G"|sort -n
134G    11a36e593a91c4677482ec49e7asfasfasf0e306732c16073d0c241a82acfa325bf03a1a
135G    total

HWiese1980 commented 7 years ago

Is there already a solution for this issue?

root@xxx:/var/lib/docker# du -shc *
84G aufs
4,0K    containers
2,6M    image
80K network
4,0K    nuke-graph-directory.sh
20K plugins
4,0K    swarm
4,0K    tmp
4,0K    tmp-old
4,0K    trust
36K volumes

/var/lib/docker/aufs takes a damn lot of space on my disk. There are no images or containers left anymore:

root@xxx:/var/lib/docker# docker images -a
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
root@xxx:/var/lib/docker# 

root@xxx:/var/lib/docker# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
root@xxx:/var/lib/docker# 

I can't get rid of it without manually deleting it, which I'm afraid of doing because I don't know which of that data is still needed.

thaJeztah commented 7 years ago

@HWiese1980 docker (up until docker 17.06) removed containers when docker rm --force was used, even if there was an issue with removing the actual layers (which could happen if the process running in a container was keeping the mount busy); as a result, those layers got "orphaned" (docker no longer had a reference to them) and were left around.

docker 17.06 and up will (in the same situation), keep the container registered (in "dead" state), which allows you to remove the container (and layers) at a later stage.

However, if you've been running older versions of docker and have a cleanup script that uses docker rm -f, chances are those layers accumulated over time. You can choose to do a "full" cleanup (you'll lose all your local images, volumes, and containers, so only do this if there's nothing important on the host); to do so, stop the docker service and rm -rf /var/lib/docker. Alternatively, you can stop the docker service, move the directory (as a backup), and start the service again.

In your situation, it looks like there's no (or very little) data in the volumes directory, so if there's no images or containers on your host, it may be "safe" to just remove the /var/lib/docker directory.
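
A minimal sketch of the second option (stop, move aside, restart), assuming a systemd host; adjust the service commands to your init system:

sudo systemctl stop docker
sudo mv /var/lib/docker /var/lib/docker.bak   # keep the old state around as a backup
sudo systemctl start docker                   # docker recreates a fresh /var/lib/docker
# once you are sure nothing is missing:
# sudo rm -rf /var/lib/docker.bak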

eoglethorpe commented 7 years ago

I can't add anything too intelligent to this, but after a good amount of build testing my local storage became full, so I deleted all images and containers. They were gone from Docker; however, the space wasn't reclaimed.

/var/lib/docker/ was the main culprit and consuming my disk space. I'm on 17.06.1-ce, build 874a737... not sure if I can provide anything else

tshirtman commented 7 years ago

I think I got hit by the same thing. I installed Docker earlier today on this new laptop, so it was clean before, and built a few images to test. Getting low on space, I took care to call docker rm on any stopped container (produced by my builds; I never used -f to remove them), and then docker rmi on all untagged images. Currently I have this:

gabriel@gryphon:~> sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
buildozer           latest              fbcd2ca47e0b        3 hours ago         4.19GB
ubuntu              17.04               bde41be8de8c        4 weeks ago         93.1MB
19:22:44 18/08/17 red argv[1] 100% 59
gabriel@gryphon:~> sudo df -h /var/lib/docker/aufs
Sys. de fichiers Taille Utilisé Dispo Uti% Monté sur
/dev/nvme0n1p5     114G    111G     0 100% /var/lib/docker/aufs
19:23:08 18/08/17 red argv[1] 100% 25
gabriel@gryphon:~> sudo du -sh /var/lib/docker/aufs/diff
59G /var/lib/docker/aufs/diff
19:23:25 18/08/17 red argv[1] 100% 6115
gabriel@gryphon:~> sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
buildozer           latest              fbcd2ca47e0b        3 hours ago         4.19GB
ubuntu              17.04               bde41be8de8c        4 weeks ago         93.1MB
19:23:30 18/08/17 red argv[1] 100% 46
gabriel@gryphon:~> sudo docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
19:23:33 18/08/17 red argv[1] 100% 43
gabriel@gryphon:~> sudo ls /var/lib/docker/aufs/diff|head
04fd10f50fe1d74a489268c9b2df95c579eb34c214f9a5d26c7077fbc3be0df4-init-removing
04fd10f50fe1d74a489268c9b2df95c579eb34c214f9a5d26c7077fbc3be0df4-removing
050edba704914b8317f0c09b9640c9e2995ffa403640a37ee77f5bf219069db3
059f9eee859b485926c3d60c3c0f690f45b295f0d499f188b7ad417ba8961083-init-removing
059f9eee859b485926c3d60c3c0f690f45b295f0d499f188b7ad417ba8961083-removing
09425940dd9d3e7201fb79f970d617c45435b41efdf331a5ad064be136d669b2-removing
0984c271bf1df9d3b16264590ab79bee1914b069b8959a9ade2fb93d8c3d1d9b-init-removing
0984c271bf1df9d3b16264590ab79bee1914b069b8959a9ade2fb93d8c3d1d9b-removing
0b082b302e8434d4743eb6e0ba04076c91fbd7295cc524653b2d313186d500fa-removing
0b11febcb2332657bd6bb3feedd404206c780e65bc40d580f9f4a77eb932d199-init-removing
19:23:57 18/08/17 red argv[1] 100% 35
gabriel@gryphon:~> sudo ls /var/lib/docker/aufs/diff|wc -l
256

I already restarted docker and it didn't change anything. I think I'll remove everything ending with -removing in the diff/ directory; thankfully nothing important depends on the Docker images on this laptop, but still, I wouldn't like this to happen on a server.
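
If you do go down that road, something along these lines is the cautious way to do it (a sketch, not an endorsement; only with the daemon stopped, and only if you can afford to lose local layer data):

sudo systemctl stop docker
# list what would be removed first
sudo find /var/lib/docker/aufs/diff -maxdepth 1 -name '*-removing'
# then remove the orphaned diff directories
sudo find /var/lib/docker/aufs/diff -maxdepth 1 -name '*-removing' -exec rm -rf {} +
sudo systemctl start docker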

eoglethorpe commented 7 years ago

I'm wondering if a cause could be large static files I'm copying into my containers (just a guess)


stouf commented 7 years ago

Have you tried docker system prune? Also, when you remove a container, do you use the -v option? It seems that volumes are not removed by default when containers are removed via docker rm.
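
For completeness, roughly what that looks like (docker system prune exists as of Docker 1.13, and its exact scope varies by version):

docker rm -v <container>            # remove a stopped container together with its anonymous volumes
docker volume ls -f dangling=true   # list volumes that no container references
docker system prune                 # remove stopped containers, unused networks and dangling images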

MartinThoma commented 7 years ago

I have a similar issue: https://stackoverflow.com/q/45798076/562769

daytonpa commented 7 years ago

I have found a workaround in the meantime. It's a little tedious, but it clears up space on my Ubuntu 16.04 VM. Essentially, it performs a "double-tap" on the system. Run as root or any sudoer:

docker rm $(docker ps -a -q)             # remove all containers
docker rmi --force $(docker images -q)   # force-remove all images
docker system prune --force              # prune anything else docker still tracks

# Double-tap: wipe the aufs layer store itself (destroys all remaining layer data)
systemctl stop docker
rm -rf /var/lib/docker/aufs
apt-get autoclean
apt-get autoremove
systemctl start docker

nkmittal commented 7 years ago

Don't ever remove /var/lib/docker/aufs unless you really want to start over; it will make your existing containers completely unusable.

dmorawetz commented 7 years ago

This issue should be reopened. I can also report a cluttered /var/lib/docker/aufs directory.

stouf commented 7 years ago

I'm re-opening the issue considering the "recent" activity.

dmorawetz commented 7 years ago

I can also report many *-removing entries in the diff and layers folders, just like in the ls output from @tshirtman. It looks like docker can't remove those files when pulling/updating images or creating containers, even though I stop all containers before updating. See https://stackoverflow.com/a/45798794

Maybe this issue should be moved to the docker side?

amq commented 7 years ago

Just hit this on a server and laptop, both running Ubuntu 16.04 and Docker 17.06.

30 GB in /var/lib/docker/aufs, while the actual images and volumes account for just 5 GB. Running docker system prune -af freed up only 120 MB.

Is it safe to remove all containers and /var/lib/docker/aufs/*, then recreate the containers? Will named volumes stay intact?

Anyhow, we need a simple workaround that can safely remove all the clutter and can be run from cron.
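
A hedged sketch of what a cron-friendly cleanup could look like; note that prune only removes objects docker still tracks, so it will not reclaim layers that are already orphaned under /var/lib/docker/aufs:

#!/bin/sh
# e.g. installed as /etc/cron.daily/docker-prune (illustrative path)
docker system prune -f >> /var/log/docker-prune.log 2>&1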

SeanTobin commented 7 years ago

Had the same issue on 17.06.1-ce, build 874a737; over 600 GB used in /var/lib/docker/aufs with nothing listed by docker ps -a or docker images -a. Deleting /var/lib/docker/aufs and reinstalling docker resolved the issue at the time, but I can see the folder growing disproportionately to the containers available. This is on a CI server with heavy docker utilization.

osiyuk commented 7 years ago

Have the same issue. My docker images weigh about 2 GB in total...

$ docker images -a
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
php-mysql           latest              98cd5987c04b        About an hour ago   496MB
<none>              <none>              4449742044fb        About an hour ago   496MB
<none>              <none>              72b86fc83309        About an hour ago   195MB
<none>              <none>              37d87e71c294        About an hour ago   195MB
<none>              <none>              0ca1e87a5bd7        About an hour ago   195MB
test                latest              c5cb074fcda8        8 days ago          195MB
hello-world         latest              1815c82652c0        2 months ago        1.84kB
centos              6.9                 573de66f263e        4 months ago        195MB

... no containers (I have deleted all of them), and docker's aufs directory uses as much as 5 GB:

$ sudo du -sh /var/lib/docker
4,8G    /var/lib/docker
$ sudo du -hd 1 /var/lib/docker/aufs/diff | sort -k 1 -g | head
1,1G    /var/lib/docker/aufs/diff/438c523673f09289e900d4e6885829ece4fc17231e0503be9635638278ecffb6-removing
1,1G    /var/lib/docker/aufs/diff/6986454d252cc0486a68bf3600628e640f40e19b9e587fdb4ee7e9ef13eae8e5-removing
1,1G    /var/lib/docker/aufs/diff/6ec32a4a73d57714eb43148c93202188fe420c04b1cdeba22309058151d9d0f0-removing
2,1M    /var/lib/docker/aufs/diff/138b76c2c2d2cf2e2f4ac89ac1ff60f626e79c683be3455ebbf1a9fe5a986546-removing
4,0K    /var/lib/docker/aufs/diff/1666d9fc652bc7ffd48809f385b25650583a43fce421415d2ab27edc3a48fe3c-removing
4,0K    /var/lib/docker/aufs/diff/3a3e0fcee5e23dd462a2282fc7d46b3eb20130cacedbe9d4a36f7ed8a30fae87-removing
4,0K    /var/lib/docker/aufs/diff/5a217be9adef3314f7637998ee2ec16df0119f0b7aa6be13b0dc7f378b7871e7-removing
4,0K    /var/lib/docker/aufs/diff/6ee5b1c9000951e90a699b34f811f2340dae93f30b441b9228f5139e4de614e3-removing
4,0K    /var/lib/docker/aufs/diff/75c688cd9b68f8c9c851aec391b537b88e67d17942e5ad57c7e10c3f964e7c3d-removing
4,0K    /var/lib/docker/aufs/diff/7827981b7cff588c52272790070732ed9b062b975c2ec05fa4e1d4382b72b004-removing

amq commented 7 years ago

https://github.com/moby/moby/issues/22207

Lewiscowles1986 commented 7 years ago

This is also true on Ubuntu; it's not a Fedora or Red Hat problem, it's a Docker problem...

osiyuk commented 7 years ago

Here is a useful command to measure how much disk space is taken by docker garbage: sudo du -hd 1 /var/lib/docker/aufs/diff | sort -hrk 1 | head -20. My output is:

7,9G    /var/lib/docker/aufs/diff
1,1G    /var/lib/docker/aufs/diff/d2105e0a09860fe804e211e7ae6d6988441091185413ba55fa4e76f8330c6d8b-removing
1,1G    /var/lib/docker/aufs/diff/6ec32a4a73d57714eb43148c93202188fe420c04b1cdeba22309058151d9d0f0-removing
1,1G    /var/lib/docker/aufs/diff/6986454d252cc0486a68bf3600628e640f40e19b9e587fdb4ee7e9ef13eae8e5-removing
1,1G    /var/lib/docker/aufs/diff/438c523673f09289e900d4e6885829ece4fc17231e0503be9635638278ecffb6-removing
844M    /var/lib/docker/aufs/diff/e1c60f150fe82a4f8decdf5cac0881e637ce5c9a10f42176517f1ae2373fc85e-removing
844M    /var/lib/docker/aufs/diff/1bea8eb73c48b16949e045bd7143a1eea262ddfb40b45f515697de0a6498675d-removing
398M    /var/lib/docker/aufs/diff/6273ab81b99c58d7e58d3e08e81b3fb1afe96d2e58e3a03127eddb0b7f41e894-removing
357M    /var/lib/docker/aufs/diff/9570dfeca1d58076c7868a2ee8046074b59c6d51d3aaefd6b055301f4aeeff24
356M    /var/lib/docker/aufs/diff/d3d3966ce761969f06ae4611bd3b949a1b0b8c10145f6b9e78d84eb6c6cb42b8-removing
209M    /var/lib/docker/aufs/diff/33e6e61a4e402654dd27472b1968f13526aeef48d776360fceecbcd6712f7658
206M    /var/lib/docker/aufs/diff/fa35f58b29cb757d475eca0ebe08c451abe38fa820c321019c708797d4ec8068-removing
206M    /var/lib/docker/aufs/diff/aa77ab0cb20d9f209c19022fc5ccae48929c16a6f171af6dc51ab8f5ec8e4d55-removing
132M    /var/lib/docker/aufs/diff/969d6dce605c46207540d8408fc3b6674ff3e4f4b3baee9bc4aa1dd2f87e4246-removing
131M    /var/lib/docker/aufs/diff/e7fc1c1a732ca6dd10b01304dfaeaec1d640f13e976724458a3178d9d3faa74c-removing
52M /var/lib/docker/aufs/diff/5b1abfcd8f8215d9033113c3e816ee5718e9daf952d258d1d9eb1d21c1b3198c-removing
2,1M    /var/lib/docker/aufs/diff/138b76c2c2d2cf2e2f4ac89ac1ff60f626e79c683be3455ebbf1a9fe5a986546-removing
112K    /var/lib/docker/aufs/diff/eaf09f7a20745d59831888228a97774d6947e4930acc0182a73bfa22c00750bc-removing
60K /var/lib/docker/aufs/diff/3ff477ee351b57a115312787249f7d7362f58247fcc357278cae30814754c0c5-removing
36K /var/lib/docker/aufs/diff/fa35f58b29cb757d475eca0ebe08c451abe38fa820c321019c708797d4ec8068-init-removing

Here is one of the possible solutions:

osiyuk commented 7 years ago

docker-gc
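
For context: docker-gc (spotify/docker-gc) removes containers that exited a while ago and the images they no longer need. From memory (check the project README before relying on it), it can be run as a container roughly like this:

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock spotify/docker-gc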

jcberthon commented 7 years ago

I've tried all of the above tips; I even stopped and removed all containers, volumes and images. docker info displayed:

docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0

My /var was still full; not a single byte was freed even though some docker commands showed things like Total reclaimed space: 958MB. Restarting the daemon did not help. I was fed up, so I deleted the /var/lib/docker folder, and I am now switching from the buggy aufs driver to something else.

I'm running Ubuntu 16.04.3 (kernel 4.10) and Docker CE 17.06.1; the partition scheme is an encrypted LVM volume in which I have a /var logical volume formatted as ext4. Docker used the aufs driver by default.

osiyuk commented 7 years ago

@jcberthon Did you try this?

Here is useful command to measure your disk usage by docker garbage: sudo du -hd 1 /var/lib/docker/aufs/diff | sort -hrk 1 | head -20

https://gist.github.com/osiyuk/12b223532eb8ac21b25283159c3147b1

amq commented 7 years ago

Guys, simply upgrade to v17.06.2-ce-rc1 or v17.07.0-ce: https://download.docker.com/linux/ubuntu/dists/xenial/pool/test/amd64/

osiyuk commented 7 years ago

$ sudo apt-get upgrade docker-ce
Reading package lists... Done
Building dependency tree       
Reading state information... Done
docker-ce is already the newest version (17.06.1~ce-0~debian).
$ docker -v
Docker version 17.06.1-ce, build 874a737

alhails commented 7 years ago

@osiyuk you can install from the .deb file directly rather than via apt-get upgrade, as the package has not yet been updated in the repository.

jcberthon commented 7 years ago

@osiyuk too late, I already deleted the whole /var/lib/docker folder; it was a new dev laptop anyway, so that was the path of least friction here 🙂

Just for info: after removing all containers, images and volumes, /var/lib/docker/aufs/diff was still 18 GB (I ran du -sh /var/lib/docker/aufs/diff before I deleted it). I'm now using the overlay2 storage driver and that problem seems gone. Maybe another awaits 😉

Thanks @amq, too late as well; I'm now using overlay2, so I do not need the patch. From this issue it was not obvious that an MR was published and merged, nor that it would be fixed in versions 17.06.2 or 17.07 (no milestones defined).

Lewiscowles1986 commented 7 years ago

@amq I think the problem is that it's still in testing. For now most of us have workarounds, but we would rather not have to run them daily.

I have switched to using overlayfs as per https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/#configure-docker-with-the-overlay-or-overlay2-storage-driver

{
    "storage-driver": "overlay2",
    "storage-opts": [
        "overlay2.override_kernel_check=true"
    ]
}

The reason I've written the steps like this is that they can be scripted, which would lead to fewer problems for anyone running them (check $? after every step).
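
For anyone following along: that snippet normally goes in /etc/docker/daemon.json, and switching storage drivers hides (but does not delete) everything stored under the old driver, so images need to be re-pulled. A sketch of the switch on a systemd host:

sudo systemctl stop docker
sudo $EDITOR /etc/docker/daemon.json     # add the "storage-driver": "overlay2" snippet above
sudo systemctl start docker
docker info | grep -i "storage driver"   # should now report overlay2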

osiyuk commented 7 years ago

@jcberthon There is a big difference between du -sh and du -hd 1; with the former you don't see the full picture of what is happening with the docker daemon and aufs.

Congratulations on switching to another storage driver.

stephencookefp commented 7 years ago

I am seeing large issues with space below /var/lib/docker/aufs/diff

Has a fix been released, or is there a workaround using a different storage driver?

15G /var/lib/docker/aufs/diff

That's a bit of a joke for running four containers. How can I get this space back?

Lewiscowles1986 commented 7 years ago

@stephencookefp if you can afford to lose the containers without fuss, I've got the following in my .bashrc / .profile:

# remove all containers (running ones will refuse to be removed)
alias docker_rm_all="docker rm \`docker ps -a -q\`"
# remove all images
alias docker_rmi_all="docker rmi \`docker images -q\`"
# remove only untagged ("dangling") images
alias docker_rmi_dangling="docker rmi \`docker images -qa -f 'dangling=true'\`"
# stop the daemon, wipe the aufs layer store and its image metadata, then restart
alias docker_murder_aufs="sudo service docker stop && sudo rm -rf /var/lib/docker/{aufs,image/aufs,linkgraph.db} && sudo service docker start"

I then run docker_rm_all && docker_rmi_all && docker_murder_aufs

If you don't have images or containers you don't need the docker_rm_all or docker_rmi_all

stephencookefp commented 7 years ago

While the above would be fine if I were using this system-wide, I am using a pipeline to test stuff.

So I was looking to call docker system prune -a -f after every docker-compose up and down.

I will of course be looking to build a local proxy for this use.

The issue I have is that pruning works fine on my home lab, which is EL7 (CentOS) using the overlay driver.

Pruning doesn't seem to work with aufs for me on a per-container basis on Ubuntu 16. I'm about to try the server-wide prune approach to see whether that works with aufs; if not, I'm looking to drop this driver and go back to using overlay, as that works (at least on EL7).

Lewiscowles1986 commented 7 years ago

I only used it until I saw someone above say they had switched to overlay locally; that seems to have fixed it for now.

stephencookefp commented 7 years ago

I am just testing overlay and it works, but it is going to batter your network. I am looking into creating a local proxy for the images so the hit is on the LAN, not the net. I am now testing Ubuntu 16 with overlay2; I know overlay works on EL7.