Open 1arrcy1 opened 2 years ago
I think the issue you're seeing is because you're using the `vfs` storage driver; the `vfs` storage driver does not support copy-on-write, and doesn't provide a way to share files between layers. Effectively, each layer is stored as a directory containing a full copy of the layer and all its parent layers, and each container runs with a full copy of the image. Because of the above, the `vfs` storage driver should only be used as a last resort (if none of the other storage drivers can be used), and mostly for testing purposes.
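Worth noting: the daemon selects a storage driver automatically and falls back to `vfs` as a last resort when none of the copy-on-write drivers are supported by the backing filesystem, which can happen when running Docker inside an LXC container. A minimal sketch for checking what's in use and switching explicitly, assuming a default installation (images already stored under `vfs` won't be visible after switching and would have to be pulled again):

```console
# which storage driver is the daemon currently using?
docker info --format '{{.Driver}}'

# to switch, set it explicitly in /etc/docker/daemon.json
# and restart the daemon:
cat /etc/docker/daemon.json
{
  "storage-driver": "overlay2"
}
```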
The lack of file sharing between layers makes storage very inefficient, which is amplified if an image contains many layers. For example, an image produced by the following Dockerfile will, when stored using the `vfs` storage driver, take (roughly) 3x the size of the `alpine` image:
```dockerfile
FROM alpine
RUN echo "hello" > foo.txt
RUN echo "world" >> foo.txt
```
Giving that a spin on a fresh daemon (no other images stored):

```console
docker build -t myimage -<<'EOF'
FROM alpine
RUN echo "hello" > foo.txt
RUN echo "world" >> foo.txt
EOF

# clean up build-cache
docker builder prune
```
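With the image built and the build-cache pruned, you can already see the mismatch between the size the daemon reports and what `vfs` actually consumes on disk; a quick sketch (the `/var/lib/docker` path assumes a daemon with the default data-root):

```console
# size as reported by the daemon (~5.5 MB)
docker image ls myimage

# actual on-disk usage of the vfs layer directories (~3x that)
du -sh /var/lib/docker/vfs/dir
```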
So, while the "history" of the image shows the number of bytes added in each layer...
```console
docker image history myimage

IMAGE          CREATED         CREATED BY                                      SIZE      COMMENT
d89db0ca393e   2 minutes ago   RUN /bin/sh -c echo "world" >> foo.txt # bui…   12B       buildkit.dockerfile.v0
<missing>      2 minutes ago   RUN /bin/sh -c echo "hello" > foo.txt # buil…   6B        buildkit.dockerfile.v0
<missing>      11 days ago     /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B
<missing>      11 days ago     /bin/sh -c #(nop) ADD file:2a949686d9886ac7c…   5.54MB
```
... the `vfs` driver stores the cumulative size of each; looking in its storage directory, you can see 3 full copies of the alpine image:
```console
du -s /var/lib/docker/vfs/dir/*
6052    /var/lib/docker/vfs/dir/c281ad09d51abbae1e57b2b2c00d158bf5e3bb7d13377169961671bb6437d1f7
6056    /var/lib/docker/vfs/dir/ozr2g908tidoglcqrn7dl1na5
6056    /var/lib/docker/vfs/dir/taamt3cmd6e8hm887bsxbiops
```
One of these is the layer from the `alpine` image; the second and third also contain the `foo.txt` file:
```console
cat /var/lib/docker/vfs/dir/ozr2g908tidoglcqrn7dl1na5/foo.txt
hello

cat /var/lib/docker/vfs/dir/taamt3cmd6e8hm887bsxbiops/foo.txt
hello
world
```
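To put numbers on it: `du -s` reports 1 KiB blocks, so totalling the three directories above (a quick check against this example run; directory names and exact sizes will differ elsewhere):

```console
du -sc /var/lib/docker/vfs/dir/* | tail -1
18164   total
```

That's roughly 18 MB on disk for an image whose layers add up to about 5.5 MB; a copy-on-write driver such as overlay2 would store the shared alpine layer only once.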
Description: Docker fills up the hard drive even though the Docker image itself is 1 GB; it fills up to 70 GB. I'm thinking it's redownloading something over and over until the drive gets filled up.
Steps to reproduce the issue:
Expected Results: I expected these images to download in about 2 minutes per image, not to fill up the entire drive and take forever to extract.
Actual Results: Downloading a Docker image causes a 100 GB drive to fill up. It seems to keep filling the entire drive even after the Docker layers are already downloaded and it gets to extracting them. After approximately an hour the drive gets filled up and I receive an error prompt saying the drive is full. After this the docker pull fails and clears up the drive, with an inconsistent 20-30+ GB left behind.
Additional information you deem important (e.g. issue happens only occasionally): Any further requested information will be posted. I'm also not sure if this is a Docker bug; it might be a bug with LXC or Proxmox instead. Any insight on how I can possibly solve it is greatly appreciated.
Output of `docker version`:

Output of `docker info`:

Additional environment details (AWS, VirtualBox, physical, etc.): What I'm using:
LXC config:

```
arch: amd64
cores: 3
features: nesting=1
hostname: Kasm
memory: 4096
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.2.1,hwaddr=32:41:2F:73:C9:AB,ip=192.168.2.103/24,tag=10,type=veth
ostype: ubuntu
parent: working
rootfs: storage-nvme:subvol-121-disk-0,size=124G
swap: 1024
```
Proxmox packaging versions:

```
proxmox-ve: 7.2-1 (running kernel: 5.15.39-3-pve)
pve-manager: 7.2-7 (running version: 7.2-7/d0dd0e85)
pve-kernel-5.15: 7.2-8
pve-kernel-helper: 7.2-8
pve-kernel-5.13: 7.1-9
pve-kernel-5.11: 7.0-10
pve-kernel-5.4: 6.4-4
pve-kernel-5.15.39-3-pve: 5.15.39-3
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.4.124-1-pve: 5.4.124-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve1
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-3
libpve-storage-perl: 7.2-7
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.5-1
proxmox-backup-file-restore: 2.2.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-2
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.5-1
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 6.2.0-11
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.5-pve1
```
Installation method: https://docs.docker.com/engine/install/ubuntu/
Extra: Please watch this video, which shows what the actual problem is: https://youtu.be/2rmmhF3kb1I