Closed: supereagle closed this issue 7 years ago.
Shell cleanup command:
cd "/proc/$(cat /var/run/docker.pid)/fd" && ls -li | grep -F '(deleted)' | xargs -n 1 -d '\n' -P 8 -I [] bash -c 'find . -inum $(echo "[]" | sed -e "s/^[[:space:]]*//" | cut -d " " -f 1) -exec ls -l {} \; -exec truncate -s 0 {} \;'
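The underlying trick is that a file deleted on disk can still be truncated through its /proc/&lt;pid&gt;/fd link as long as some process holds it open. A minimal demonstration using the current shell's own pid instead of the Docker daemon's (the path /tmp/demo.log is just an example):

```shell
# Create a file, hold it open on fd 9, then delete it from disk.
echo "some log data" > /tmp/demo.log
exec 9<> /tmp/demo.log
rm /tmp/demo.log
# The inode still consumes disk space; readlink /proc/$$/fd/9 now
# ends in "(deleted)". Truncating through the fd link releases the
# space without restarting the process that holds the fd.
: > "/proc/$$/fd/9"
```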
This shell command is very slow: cleaning up roughly 40,000 files takes 20+ minutes, while the following Python script needs only 30 seconds.
Python cleanup script:
#!/usr/bin/env python
# Clean up files that were deleted on disk but are still held open by Docker.
import os
import os.path

pid = open('/var/run/docker.pid', 'r').read().strip()
print('clean deleted files open by docker: ' + pid)
os.chdir('/proc/%s/fd' % pid)
count = 0
for f in os.listdir('.'):
    if not os.path.islink(f):
        continue
    if '(deleted)' in os.readlink(f):
        # Opening the fd link for writing truncates the backing file.
        with open(f, 'w') as of:
            count += 1
            of.truncate(0)
print('cleaned %d files' % count)
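The script above can also be wrapped as a reusable function, which makes it testable against any pid, not just Docker's. A minimal sketch (the function name truncate_deleted_fds is my own):

```python
import os

def truncate_deleted_fds(pid):
    """Truncate files deleted on disk but still held open by `pid`.

    Opening a /proc/<pid>/fd link with mode 'w' truncates the backing
    (deleted) file, releasing its disk space. Returns the number of
    files truncated.
    """
    fd_dir = '/proc/%d/fd' % pid
    count = 0
    for name in os.listdir(fd_dir):
        path = os.path.join(fd_dir, name)
        try:
            target = os.readlink(path)
        except OSError:
            continue  # the fd was closed between listdir() and readlink()
        if target.endswith(' (deleted)'):
            with open(path, 'w'):
                count += 1
    return count
```

For the Docker daemon this would be called as truncate_deleted_fds(int(open('/var/run/docker.pid').read().strip())).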
Configuration for the log driver:
--log-driver json-file --log-opt max-size=2m --log-opt max-file=5
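The same limits can also be set daemon-wide in /etc/docker/daemon.json instead of per-container flags; a sketch of an equivalent configuration:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "2m",
    "max-file": "5"
  }
}
```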
The rotated JSON log files are deleted by Docker, but the fds of these files are still held open by the Docker daemon, so their disk space cannot be released without restarting Docker. The number of held fds can become very large; if it exceeds Docker's configured --default-ulimit nofile=131072, Docker can no longer work normally. Illustration: the total number of held fds is 11405, of which 4882 are held for a single container.
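The fd counts quoted above can be reproduced without touching anything. A small sketch that only counts deleted-but-open fds for a given pid (the name count_deleted_fds is mine):

```python
import os

def count_deleted_fds(pid):
    """Count fds of `pid` whose target file has been deleted from disk."""
    fd_dir = '/proc/%d/fd' % pid
    total = 0
    for name in os.listdir(fd_dir):
        try:
            target = os.readlink(os.path.join(fd_dir, name))
        except OSError:
            continue  # fd closed while we were scanning
        if target.endswith(' (deleted)'):
            total += 1
    return total
```

For the Docker daemon: count_deleted_fds(int(open('/var/run/docker.pid').read().strip())); comparing the result against the nofile ulimit shows how close the daemon is to the limit.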